doi | title | authors | author_corresponding | author_corresponding_institution | date | version | type | license | category | jatsxml | abstract | published | server |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10.1101/19001370 | FDA oversight of NSIGHT genomic research: the need for an integrated systems approach to regulation | Milko, L. V.; Chen, F.; Chan, K.; Brower, A. M.; Agrawal, P. B.; Beggs, A. H.; Berg, J. S.; Brenner, S. E.; Holm, I. A.; Koenig, B. A.; Parad, R. B.; Powell, C. M.; Kingsmore, S. F. | Stephen F Kingsmore | Rady Children's Institute for Genomic Medicine | 2019-07-30 | 1 | PUBLISHAHEADOFPRINT | cc_by_nd | public and global health | https://www.medrxiv.org/content/early/2019/07/30/19001370.source.xml | The National Institutes of Health (NIH) funded the Newborn Sequencing In Genomic medicine and public HealTh (NSIGHT) Consortium to investigate the implications, challenges and opportunities associated with the possible use of genomic sequence information in the newborn period. Following announcement of the NSIGHT awardees in 2013, the Food and Drug Administration (FDA) contacted investigators and requested that pre-submissions to investigational device exemptions (IDE) be submitted for the use of genomic sequencing under Title 21 of the Code of Federal Regulations (21 CFR) part 812. IDE regulation permits clinical investigation of medical devices that have not been approved by the FDA. To our knowledge, this marked the first time the FDA determined that NIH-funded clinical genomic research projects are subject to IDE regulation. Here we review the history of and rationale behind FDA oversight of clinical research and the NSIGHT Consortium's experiences in navigating the IDE process. Overall, NSIGHT investigators found that the FDA's application of existing IDE regulations and medical device definitions aligned imprecisely with the aims of publicly funded exploratory clinical research protocols. IDE risk assessments by the FDA were similar to, but distinct from, protocol risk assessments conducted by local Institutional Review Boards (IRBs), and had the potential to reflect novel oversight of emerging genomic technologies. However, the pre-IDE and IDE process delayed the start of NSIGHT research studies by an average of 10 months, and significantly limited the scope of investigation in two of the four NIH approved projects. Based on the experience of the NSIGHT Consortium, we conclude that policies and practices governing the development and use of novel genomic technologies in clinical research urgently need clarification in order to mitigate potentially conflicting or redundant oversight by IRBs, NIH, FDA, and state authorities. | 10.1038/s41525-019-0105-8 | medrxiv |
10.1101/19003269 | Individuals with Parkinson's Disease Retain Spatiotemporal Gait Control With Music and Metronome Cues | Chawla, G.; Wygand, M.; Browner, N.; Lewek, M. | Michael Lewek | UNC-Chapel Hill | 2019-07-30 | 1 | PUBLISHAHEADOFPRINT | cc_no | rehabilitation medicine and physical therapy | https://www.medrxiv.org/content/early/2019/07/30/19003269.source.xml | Background: Parkinson's disease (PD) is marked by a loss of motor automaticity, resulting in decreased control of step length during gait. Rhythmic auditory cues (metronomes or music) may enhance automaticity by adjusting cadence. Both metronomes and music may offer distinct advantages, but prior attempts at quantifying their influence on spatiotemporal aspects of gait have been confounded by altered gait speeds from overground walking. We hypothesized that when gait speed is fixed, individuals with PD would experience difficulty in modifying cadence due to the concomitant requirement to alter step length, with greater changes noted with metronomes compared to music cues.
Research Question: Can a metronome or music promote spatiotemporal adjustments when decoupled from changes in gait speed in individuals with PD?
Methods: 21 participants with PD were instructed to time their steps to a metronome and music cues (at 85%, 100%, and 115% of overground cadence) during treadmill walking. We calculated cadence, cadence accuracy, and step length during each cue condition and an uncued control condition. We compared the various cue frequencies and auditory modalities.
Results: At fixed gait speeds, participants were able to increase and decrease cadence in response to auditory cues. Music and metronome cues produced comparable results in cadence manipulation with greater cadence errors noted at slower intended frequencies. Nevertheless, the induced cadence changes created a concomitant alteration in step length, with music and metronomes producing comparable changes. Notably, longer step lengths were induced with both music and metronome during slow frequency cueing.
Significance: This important change conflicts with conventional prescriptive approaches, which advocate for faster cue frequencies, if applied on a treadmill. The music and metronome cues produced comparable changes to gait, suggesting that either cue may be effective at overcoming the shortened step lengths during treadmill walking if slower frequencies are used. | 10.1123/mc.2020-0038 | medrxiv |
10.1101/19003319 | Toric Intraocular Lens Implantation in Cataract Patients with Corneal Opacity. | Ra, H.; Kim, H. S.; Kim, M. S.; Kim, E. C. | Eun Chul Kim | Bucheon St. Mary's Hospital, Catholic University of Korea | 2019-07-30 | 1 | PUBLISHAHEADOFPRINT | cc_no | ophthalmology | https://www.medrxiv.org/content/early/2019/07/30/19003319.source.xml | Aims: To evaluate the effect of toric intraocular lens implantation in cataract patients with corneal opacity and high astigmatism.
Methods: 31 eyes of 31 patients who underwent cataract surgery with toric intraocular lens implantation were included. All patients had corneal opacity with regular astigmatism. Preoperative total corneal astigmatism was determined considering posterior astigmatism using a rotating Scheimpflug camera (Pentacam®: Oculus, Wetzlar, Germany). At 2 months after toric intraocular lens implantation, we evaluated residual astigmatism, uncorrected visual acuity (UCVA) and best corrected visual acuity (BCVA).
Results: Postoperative UCVA and BCVA (0.30 ± 0.17 and 0.22 ± 0.16 logMAR) statistically improved compared to preoperative UCVA and BCVA (1.2 ± 0.34 and 1.1 ± 0.30 logMAR, respectively) (P<0.01). Postoperative residual refractive astigmatism (1.2 ± 0.35 D) was statistically reduced compared to preoperative refractive astigmatism (2.4 ± 0.65 D) (P<0.05). Preoperative and postoperative total corneal astigmatism values were not statistically different. All cases achieved visual acuity as good as or better than that achieved preoperatively. The percentage of corneal opacity covering the pupillary area had a significant negative correlation with postoperative UCVA and BCVA (logMAR) (R=-0.88, P<0.00001 and R=-0.87, P<0.00001, respectively).
Conclusion: Toric intraocular lens implantation can improve UCVA, BCVA, and refractive astigmatism in cataract patients with corneal opacity. The percentage of central corneal opacity covering the pupillary area is the major prognostic factor for postoperative visual improvement. Therefore, toric intraocular lens implantation should be considered for cataract patients who have corneal opacity with high astigmatism. | 10.1186/s12886-020-01352-w | medrxiv |
10.1101/19002659 | Fox Insight Collects Online, Longitudinal Patient-Reported Outcomes and Genetic Data on Parkinson's Disease | Smolensky, L.; Amondikar, N.; Crawford, K.; Neu, S.; Kopil, C. M.; Daeschler, M.; Riley, L.; 23andMe Research Team, ; Brown, E.; Toga, A. W.; Tanner, C. | Luba Smolensky | The Michael J. Fox Foundation for Parkinson's Research | 2019-07-30 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/07/30/19002659.source.xml | Fox Insight is an online, longitudinal health study of people with and without Parkinson's disease with targeted enrollment set to at least 125,000 individuals. Fox Insight data is a rich data set facilitating discovery, validation, and reproducibility in Parkinson's disease research. The dataset is generated through routine longitudinal assessments (health and medical questionnaires evaluated at regular cycles), one-time questionnaires about environmental exposure and healthcare preferences, and genetic data collection.
Qualified Researchers can explore, analyze, and download patient-reported outcomes (PROs) data and Parkinson's disease-related genetic variants at https://foxden.michaeljfox.org. The full Fox Insight genetic data set, including approximately 600,000 single nucleotide polymorphisms (SNPs), can be requested separately with institutional review and is described outside of this data descriptor.
Fox Insight is sponsored by The Michael J. Fox Foundation for Parkinson's Research. | 10.1038/s41597-020-0401-2 | medrxiv |
10.1101/19003525 | Incubation periods impact the spatial predictability of outbreaks: analysis of cholera and Ebola outbreaks in Sierra Leone | Kahn, R.; Peak, C. M.; Fernandez-Gracia, J.; Hill, A.; Jambai, A.; Ganda, L.; Castro, M. C.; Buckee, C. | Caroline Buckee | Harvard T.H. Chan School of Public Health | 2019-08-01 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/08/01/19003525.source.xml | Forecasting the spatiotemporal spread of infectious diseases during an outbreak is an important component of epidemic response. However, it remains challenging both methodologically and with respect to data requirements as disease spread is influenced by numerous factors, including the pathogen's underlying transmission parameters and epidemiological dynamics, social networks and population connectivity, and environmental conditions. Here, using data from Sierra Leone we analyze the spatiotemporal dynamics of recent cholera and Ebola outbreaks and compare and contrast the spread of these two pathogens in the same population. We develop a simulation model of the spatial spread of an epidemic in order to examine the impact of a pathogen's incubation period on the dynamics of spread and the predictability of outbreaks. We find that differences in the incubation period alone can determine the limits of predictability for diseases with different natural history, both empirically and in our simulations. Our results show that diseases with longer incubation periods, such as Ebola, where infected individuals can travel further before becoming infectious, result in more long-distance sparking events and less predictable disease trajectories, as compared to the more predictable wave-like spread of diseases with shorter incubation periods, such as cholera.
Significance statement: Understanding how infectious diseases spread is critical for preventing and containing outbreaks. While advances have been made in forecasting epidemics, much is still unknown. Here we show that the incubation period - the time between exposure to a pathogen and onset of symptoms - is an important factor in predicting spatiotemporal spread of disease and provides one explanation for the different trajectories of the recent Ebola and cholera outbreaks in Sierra Leone. We find that outbreaks of pathogens with longer incubation periods, such as Ebola, tend to have less predictable spread, whereas pathogens with shorter incubation periods, such as cholera, spread in a more predictable, wavelike pattern. These findings have implications for the scale and timing of reactive interventions, such as vaccination campaigns. | 10.1073/pnas.1913052117 | medrxiv |
10.1101/19003509 | Risk factors for heart failure with preserved or reduced ejection fraction among Medicare beneficiaries: Applications of competing risks analysis and gradient boosted model. | Lee, M. P.; Glynn, R. J.; Schneeweiss, S.; Lin, K. J.; Patorno, E.; Barberio, J.; Levin, R.; Evers, T.; Wang, S. V.; Desai, R. | Rishi Desai | Brigham and Women's Hospital and Harvard Medical School | 2019-08-01 | 1 | PUBLISHAHEADOFPRINT | cc_no | epidemiology | https://www.medrxiv.org/content/early/2019/08/01/19003509.source.xml | Background: The differential impact of various demographic characteristics and comorbid conditions on development of heart failure (HF) with preserved (pEF) and reduced ejection fraction (rEF) is not well studied among the elderly.
Methods and Results: Using Medicare claims data linked to electronic health records, we conducted an observational cohort study of individuals ≥ 65 years of age without HF. A Cox proportional hazards model accounting for competing risk of HFrEF and HFpEF incidence was constructed. A gradient boosted model (GBM) assessed the relative influence (RI) of each predictor in development of HFrEF and HFpEF. Among 138,388 included individuals, 9,701 developed HF (IR= 20.9 per 1,000 person-year). Males were more likely to develop HFrEF than HFpEF (HR = 2.07, 95% CI: 1.81-2.37 vs. 1.11, 95% CI: 1.02-1.20, P for heterogeneity < 0.01). Atrial fibrillation and pulmonary hypertension had stronger associations with the risk of HFpEF (HR = 2.02, 95% CI: 1.80-2.26 and 1.66, 95% CI: 1.23-2.22) while cardiomyopathy and myocardial infarction were more strongly associated with HFrEF (HR = 4.37, 95% CI: 3.21-5.97 and 1.94, 95% CI: 1.23-3.07). Age was the strongest predictor across all HF subtypes with RI from GBM >35%. Atrial fibrillation was the most influential comorbidity for development of HFpEF (RI = 8.4%) while cardiomyopathy was most influential for HFrEF (RI = 20.7%).
Conclusions: These findings of heterogeneous relationships between several important risk factors and heart failure types underline the potential differences in the etiology of HFpEF and HFrEF.
Key Questions: What is already known about this subject?
Previous epidemiologic studies describe the differences in risk factors involved in developing heart failure with preserved (HFpEF) and reduced ejection fraction (HFrEF), however, there has been no large study in an elderly population.
What does this study add?
This study provides further insights into the heterogeneous impact of various clinical characteristics on the risk of developing HFpEF and HFrEF in a population of elderly individuals.
Employing an advanced machine learning technique allows assessing the relative importance of each risk factor on development of HFpEF and HFrEF.
How might this impact on clinical practice?
Our findings provide further insights into the potential differences in the etiology of HFpEF and HFrEF, which are critical in prioritizing populations for close monitoring and targeting prevention efforts.
| 10.2147/CLEP.S253612 | medrxiv |
10.1101/19003509 | Risk factors for heart failure with preserved or reduced ejection fraction among Medicare beneficiaries: Applications of competing risks analysis and gradient boosted model. | Lee, M. P.; Glynn, R. J.; Schneeweiss, S.; Lin, K. J.; Patorno, E.; Barberio, J.; Levin, R.; Evers, T.; Wang, S. V.; Desai, R. | Rishi Desai | Brigham and Women's Hospital and Harvard Medical School | 2019-09-26 | 2 | PUBLISHAHEADOFPRINT | cc_no | epidemiology | https://www.medrxiv.org/content/early/2019/09/26/19003509.source.xml | Background: The differential impact of various demographic characteristics and comorbid conditions on development of heart failure (HF) with preserved (pEF) and reduced ejection fraction (rEF) is not well studied among the elderly.
Methods and Results: Using Medicare claims data linked to electronic health records, we conducted an observational cohort study of individuals ≥ 65 years of age without HF. A Cox proportional hazards model accounting for competing risk of HFrEF and HFpEF incidence was constructed. A gradient boosted model (GBM) assessed the relative influence (RI) of each predictor in development of HFrEF and HFpEF. Among 138,388 included individuals, 9,701 developed HF (IR= 20.9 per 1,000 person-year). Males were more likely to develop HFrEF than HFpEF (HR = 2.07, 95% CI: 1.81-2.37 vs. 1.11, 95% CI: 1.02-1.20, P for heterogeneity < 0.01). Atrial fibrillation and pulmonary hypertension had stronger associations with the risk of HFpEF (HR = 2.02, 95% CI: 1.80-2.26 and 1.66, 95% CI: 1.23-2.22) while cardiomyopathy and myocardial infarction were more strongly associated with HFrEF (HR = 4.37, 95% CI: 3.21-5.97 and 1.94, 95% CI: 1.23-3.07). Age was the strongest predictor across all HF subtypes with RI from GBM >35%. Atrial fibrillation was the most influential comorbidity for development of HFpEF (RI = 8.4%) while cardiomyopathy was most influential for HFrEF (RI = 20.7%).
Conclusions: These findings of heterogeneous relationships between several important risk factors and heart failure types underline the potential differences in the etiology of HFpEF and HFrEF.
Key Questions: What is already known about this subject?
Previous epidemiologic studies describe the differences in risk factors involved in developing heart failure with preserved (HFpEF) and reduced ejection fraction (HFrEF), however, there has been no large study in an elderly population.
What does this study add?
This study provides further insights into the heterogeneous impact of various clinical characteristics on the risk of developing HFpEF and HFrEF in a population of elderly individuals.
Employing an advanced machine learning technique allows assessing the relative importance of each risk factor on development of HFpEF and HFrEF.
How might this impact on clinical practice?
Our findings provide further insights into the potential differences in the etiology of HFpEF and HFrEF, which are critical in prioritizing populations for close monitoring and targeting prevention efforts.
| 10.2147/CLEP.S253612 | medrxiv |
10.1101/19002212 | Comparative functional survival and equivalent annual cost of three long lasting insecticidal net (LLIN) products in Tanzania | Lorenz, L. M.; Bradley, J.; Yukich, J.; Massue, D. J.; Mboma, Z. M.; Pigeon, O.; Moore, J. D.; Killian, A.; Lines, J.; Kisinza, W.; Overgaard, H. J.; Moore, S. J. | Sarah J Moore | Ifakara Health Institute | 2019-08-01 | 1 | PUBLISHAHEADOFPRINT | cc_by | public and global health | https://www.medrxiv.org/content/early/2019/08/01/19002212.source.xml | Almost 1.2 billion long-lasting insecticidal nets (LLINs) have been procured for malaria control. Institutional buyers often assume that World Health Organization (WHO) prequalified LLINs are functionally identical with a three-year lifespan. We measured the lifespans of three LLIN products, and calculated their cost-per-year of functional life, through a randomised double-blinded prospective evaluation among 3,420 study households in Tanzania using WHO-recommended methods. Primary outcome was LLIN functional survival (LLINs present in serviceable condition). Secondary outcomes were 1) bioefficacy and chemical content (residual insecticidal activity) and 2) protective efficacy for volunteers sleeping under LLINs (bite reduction and mosquitoes killed). LLIN median functional survival was significantly different: 2.0 years for Olyset, 2.5 years for PermaNet and 2.6 years for NetProtect. Functional survival was affected by accumulation of holes resulting in users discarding nets. Protective efficacy also significantly differed between products as they aged. The longer-lived nets were 20% cheaper than the shorter-lived product. | 10.1371/journal.pmed.1003248 | medrxiv |
10.1101/19003475 | The development of competency frameworks in healthcare professions: a scoping review. | Batt, A. M.; Tavares, W.; Williams, B. | Alan M Batt | Monash University | 2019-08-01 | 1 | PUBLISHAHEADOFPRINT | cc_no | medical education | https://www.medrxiv.org/content/early/2019/08/01/19003475.source.xml | Background: Competency frameworks serve various roles including outlining characteristics of a competent workforce, facilitating mobility, and analysing or assessing expertise. Given these roles and their relevance in the health professions, we sought to understand the methods and strategies used in the development of existing competency frameworks.
Methods: We applied the Arksey and O'Malley framework to undertake this scoping review. We searched six electronic databases (MEDLINE, CINAHL, PsycINFO, EMBASE, Scopus, and ERIC) and three grey literature sources (greylit.org, Trove and Google Scholar) using keywords related to competency frameworks. We screened studies for inclusion by title and abstract, and we included studies of any type that described the development of a competency framework in a healthcare profession. Two reviewers independently extracted data including study characteristics. Data synthesis was both quantitative and qualitative.
Results: Among 5,710 citations, we selected 190 for analysis. The majority of studies were conducted in medicine and nursing professions. Literature reviews and group techniques were conducted in 116 studies each (61%), and 85 (45%) outlined some form of stakeholder deliberation. We observed a significant degree of diversity in methodological strategies and inconsistent adherence to existing guidance on the selection of methods and on who was involved; based on the variation we observed in timeframes, combination, function, application and reporting of methods and strategies, there is no apparent gold standard or standardised approach to competency framework development.
Conclusions: We observed significant variation within the conduct and reporting of the competency framework development process. While some variation can be expected given the differences across and within professions, our results suggest there is some difficulty in determining whether methods were fit-for-purpose, and therefore in making determinations regarding the appropriateness of the development process. This uncertainty may unwillingly create and legitimise uncertain or artificial outcomes. There is a need for improved guidance in the process for developing and reporting competency frameworks. | 10.1007/s10459-019-09946-w | medrxiv |
10.1101/19003475 | The development of competency frameworks in healthcare professions: a scoping review. | Batt, A. M.; Tavares, W.; Williams, B. | Alan M Batt | Monash University | 2019-11-02 | 2 | PUBLISHAHEADOFPRINT | cc_no | medical education | https://www.medrxiv.org/content/early/2019/11/02/19003475.source.xml | Background: Competency frameworks serve various roles including outlining characteristics of a competent workforce, facilitating mobility, and analysing or assessing expertise. Given these roles and their relevance in the health professions, we sought to understand the methods and strategies used in the development of existing competency frameworks.
Methods: We applied the Arksey and O'Malley framework to undertake this scoping review. We searched six electronic databases (MEDLINE, CINAHL, PsycINFO, EMBASE, Scopus, and ERIC) and three grey literature sources (greylit.org, Trove and Google Scholar) using keywords related to competency frameworks. We screened studies for inclusion by title and abstract, and we included studies of any type that described the development of a competency framework in a healthcare profession. Two reviewers independently extracted data including study characteristics. Data synthesis was both quantitative and qualitative.
Results: Among 5,710 citations, we selected 190 for analysis. The majority of studies were conducted in medicine and nursing professions. Literature reviews and group techniques were conducted in 116 studies each (61%), and 85 (45%) outlined some form of stakeholder deliberation. We observed a significant degree of diversity in methodological strategies and inconsistent adherence to existing guidance on the selection of methods and on who was involved; based on the variation we observed in timeframes, combination, function, application and reporting of methods and strategies, there is no apparent gold standard or standardised approach to competency framework development.
Conclusions: We observed significant variation within the conduct and reporting of the competency framework development process. While some variation can be expected given the differences across and within professions, our results suggest there is some difficulty in determining whether methods were fit-for-purpose, and therefore in making determinations regarding the appropriateness of the development process. This uncertainty may unwillingly create and legitimise uncertain or artificial outcomes. There is a need for improved guidance in the process for developing and reporting competency frameworks. | 10.1007/s10459-019-09946-w | medrxiv |
10.1101/19003467 | Patient Benefit and Risk in Anticancer Drug Development: A Systematic Review of the Ixabepilone Trial Portfolio | Carlisle, B. G.; Mattina, J.; Zheng, T.; Kimmelman, J. | Jonathan Kimmelman | McGill University | 2019-08-01 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc | medical ethics | https://www.medrxiv.org/content/early/2019/08/01/19003467.source.xml | OBJECTIVE: To describe the patient burden and benefit, and the dynamics of trial success in the development of ixabepilone--a drug that was approved in the US but not in Europe.
DATA SOURCES: Trials were captured by searching Embase and MEDLINE on July 27, 2015.
STUDY SELECTION: Inclusion: 1) primary trial reports, 2) interventional trials, 3) human subjects, 4) phase 1 to phase 3, 5) trials of ixabepilone in monotherapy or combination therapy of 6) pre-licensure cancer indications. Exclusion: 1) secondary reports, 2) interim results, 3) meta-analyses, 4) retrospective/observational studies, 5) laboratory analyses (ex vivo tissues), 6) reviews, 7) letters, editorials, guidelines, interviews, abstract-only and poster presentations.
DATA EXTRACTION AND SYNTHESIS: Data were independently double-extracted and differences between coders were reconciled by discussion.
MAIN OUTCOMES AND MEASURES: We measured risk using the number of drug-related adverse events that were grade 3 or higher, benefit by objective response rate and trial outcomes by whether studies met their primary endpoint with acceptable safety.
RESULTS: We identified 39 publications of ixabepilone monotherapy and 23 primary publications of combination therapy, representing 5615 patients and 1598 patient-years of involvement over 11 years and involving 17 different malignancies. In total, 830 patients receiving ixabepilone experienced objective tumour response (16%, 95% CI 12.5%-20.1%), and 74 died from drug-related toxicities (2.2%, 95% CI 1.6%-2.9%). Responding indications and combinations were identified very quickly; thereafter, the search for additional responding indications or combinations did not lead to labelling additions. A total of 11 "uninformative" trials were found, representing 27% of studies testing efficacy, 208 grade 3-4 events and 226 patient-years of involvement (21% and 26% of the portfolio total, respectively). After the European Medicines Agency rejected ixabepilone for licensing, all further trial activity involving ixabepilone was pursued outside of Europe.
DISCUSSION: Risk/benefit for patients who enrolled in trials of non-approved indications of ixabepilone did not improve over the course of the drug's development. Clinical value was discovered very quickly; however, a large fraction of trials were uninformative. | null | medrxiv |
10.1101/19003491 | Driving Me Crazy: The effects of stress on the driving abilities of paramedic students. | Hines Duncliffe, T.; D'Angelo, B.; Brock, M.; Fraser, C.; Lamarra, J.; Austin, N.; Pusateri, M.; Batt, A. M. | Alan M Batt | Fanshawe College | 2019-08-01 | 1 | PUBLISHAHEADOFPRINT | cc_by | occupational and environmental health | https://www.medrxiv.org/content/early/2019/08/01/19003491.source.xml | Background: Previous research has suggested that stress may have a negative effect on the clinical performance of paramedics. In addition, stress has been demonstrated to have a negative impact on the driving abilities of the general population, increasing the number of driving errors. However, to date no studies have explored stress and its potential impact on non-clinical performance of paramedics, particularly their driving abilities.
Methods: Paramedic students underwent emergency driving assessment in a driving simulator before and after exposure to a stressful medical scenario. Number and type of errors were documented before and after by both driving simulator software and observation by two observers from the research team. The NASA Task Load Index (TLX) was utilised to record self-reported stress levels.
Results: 36 students participated in the study. Following exposure to a stressful medical scenario, paramedic students demonstrated no increase in overall error rate, but demonstrated an increase in three critical driving errors, namely failure to wear a seatbelt (3 baseline v 10 post stress), failing to stop for red lights or stop signs (7 v 35), and losing control of the vehicle (2 v 11). Self-reported stress levels also increased after the clinical scenario, particularly in the area of mental (cognitive) demand.
Conclusion: Paramedics are routinely exposed to acute stress in their everyday work, and this stress could affect their non-clinical performance. The critical errors committed by participants in this study closely matched those considered to be contributory factors in many ambulance collisions. These results stimulate the need for further research into the effects of stress on non-clinical performance in general, and highlight the potential need to consider additional driver training and stress management education in order to mitigate the frequency and severity of driving errors.
Key points: Paramedics are exposed to stressful clinical scenarios during the course of their work
Many critical and serious clinical calls require transport to hospital
Ambulance crashes occur regularly and pose a significant risk to the safety and wellbeing of both patients and paramedics
This simulated clinical scenario followed by a simulated driving scenario has highlighted that stress appears to affect driving abilities in paramedic students
The findings of this study, although conducted in paramedic students in simulated environments, highlight the need to further investigate the effects of stress on driving abilities among paramedics
| null | medrxiv |
10.1101/19003491 | Driving Me Crazy: The effects of stress on the driving abilities of paramedic students - a pilot study. | Hines Duncliffe, T.; D'Angelo, B.; Brock, M.; Fraser, C.; Lamarra, J.; Austin, N.; Pusateri, M.; Batt, A. M. | Alan M Batt | Fanshawe College | 2019-10-10 | 2 | PUBLISHAHEADOFPRINT | cc_by | occupational and environmental health | https://www.medrxiv.org/content/early/2019/10/10/19003491.source.xml | Background: Previous research has suggested that stress may have a negative effect on the clinical performance of paramedics. In addition, stress has been demonstrated to have a negative impact on the driving abilities of the general population, increasing the number of driving errors. However, to date no studies have explored stress and its potential impact on non-clinical performance of paramedics, particularly their driving abilities.
Methods: Paramedic students underwent emergency driving assessment in a driving simulator before and after exposure to a stressful medical scenario. Number and type of errors were documented before and after by both driving simulator software and observation by two observers from the research team. The NASA Task Load Index (TLX) was utilised to record self-reported stress levels.
Results: 36 students participated in the study. Following exposure to a stressful medical scenario, paramedic students demonstrated no increase in overall error rate, but demonstrated an increase in three critical driving errors, namely failure to wear a seatbelt (3 baseline v 10 post stress), failing to stop for red lights or stop signs (7 v 35), and losing control of the vehicle (2 v 11). Self-reported stress levels also increased after the clinical scenario, particularly in the area of mental (cognitive) demand.
Conclusion: Paramedics are routinely exposed to acute stress in their everyday work, and this stress could affect their non-clinical performance. The critical errors committed by participants in this study closely matched those considered to be contributory factors in many ambulance collisions. These results stimulate the need for further research into the effects of stress on non-clinical performance in general, and highlight the potential need to consider additional driver training and stress management education in order to mitigate the frequency and severity of driving errors.
Key points: Paramedics are exposed to stressful clinical scenarios during the course of their work
Many critical and serious clinical calls require transport to hospital
Ambulance crashes occur regularly and pose a significant risk to the safety and wellbeing of both patients and paramedics
This simulated clinical scenario followed by a simulated driving scenario has highlighted that stress appears to affect driving abilities in paramedic students
The findings of this study, although conducted in paramedic students in simulated environments, highlight the need to further investigate the effects of stress on driving abilities among paramedics
| null | medrxiv |
10.1101/19003491 | The effects of stress on the driving abilities of paramedic students - a pilot, simulator-based study. | Hines Duncliffe, T.; D'Angelo, B.; Brock, M.; Fraser, C.; Lamarra, J.; Austin, N.; Pusateri, M.; Batt, A. M. | Alan M Batt | Fanshawe College | 2019-11-15 | 3 | PUBLISHAHEADOFPRINT | cc_by | occupational and environmental health | https://www.medrxiv.org/content/early/2019/11/15/19003491.source.xml | Background: Previous research has suggested that stress may have a negative effect on the clinical performance of paramedics. In addition, stress has been demonstrated to have a negative impact on the driving abilities of the general population, increasing the number of driving errors. However, to date no studies have explored stress and its potential impact on non-clinical performance of paramedics, particularly their driving abilities.
Methods: Paramedic students underwent emergency driving assessment in a driving simulator before and after exposure to a stressful medical scenario. Number and type of errors were documented before and after by both driving simulator software and observation by two observers from the research team. The NASA Task Load Index (TLX) was utilised to record self-reported stress levels.
Results: 36 students participated in the study. Following exposure to a stressful medical scenario, paramedic students demonstrated no increase in overall error rate, but demonstrated an increase in three critical driving errors, namely failure to wear a seatbelt (3 baseline v 10 post stress), failing to stop for red lights or stop signs (7 v 35), and losing control of the vehicle (2 v 11). Self-reported stress levels also increased after the clinical scenario, particularly in the area of mental (cognitive) demand.
Conclusion: Paramedics are routinely exposed to acute stress in their everyday work, and this stress could affect their non-clinical performance. The critical errors committed by participants in this study closely matched those considered to be contributory factors in many ambulance collisions. These results stimulate the need for further research into the effects of stress on non-clinical performance in general, and highlight the potential need to consider additional driver training and stress management education in order to mitigate the frequency and severity of driving errors.
Key points: Paramedics are exposed to stressful clinical scenarios during the course of their work
Many critical and serious clinical calls require transport to hospital
Ambulance crashes occur regularly and pose a significant risk to the safety and wellbeing of both patients and paramedics
This simulated clinical scenario followed by a simulated driving scenario has highlighted that stress appears to affect driving abilities in paramedic students
The findings of this study, although conducted in paramedic students in simulated environments, highlight the need to further investigate the effects of stress on driving abilities among paramedics
| null | medrxiv |
10.1101/19003202 | Changing trends in proportional incidence and 5-year net survival of screened and nonscreened invasive breast cancers among women in England | Wu, H.; Wong, K.; Lu, S.-E.; Broggio, J.; Zhang, L. | Lanjing Zhang | Princeton Medical Center/Rutgers University | 2019-08-01 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | primary care research | https://www.medrxiv.org/content/early/2019/08/01/19003202.source.xml | BackgroundUptake of breast cancer screening has been decreasing in England since 2007, and may increase proportional incidence of nonscreened cancers. However, recent trends in proportional incidence and net-survivals of screened and nonscreened breast cancers are unclear.
MethodsWe extracted population-based proportional incidence and age-standardized 5-year net-survivals from Public Health England, for English women with invasive breast cancer diagnosed during 1995-2011 (linked to death certificates, followed through 2016). Piecewise log-linear models with change-point/joinpoint were used to estimate temporal trends. We conducted a quasi-experimental study to test the hypothesis that the trend-change year of proportional incidence coincided with that of 5-year net-survival.
ResultsAmong 254,063 women in England with invasive breast cancer diagnosed during 1995-2011, there was downward-to-upward trend-change in proportional incidence of nonscreened breast cancers (annual percent change[APC]=5.6 after 2007 versus APC=-3.5 before 2007, P<0.001) in diagnosis-year 2007, when steeper upward-trend in age-standardized 5-year net survival started (APC=5.7 after 2007/2008 versus APC=0.3 before 2007/2008, P<0.001). Net-survival difference of screened versus nonscreened cancers also significantly narrowed (18% in 2007/2008 versus 5% in 2011). Similar associations were found in all strata of race, cancer stage, grade and histology, except in Black patients or patients with stage I, stage III, or grade I cancer.
ConclusionsThe downward-to-upward trend-change in proportional incidence of nonscreened breast cancers is associated with steeper upward-trend in age-standardized 5-year net survival among English women in recent years. Survival benefits of breast cancer screening appear decreasing in recent years. The data support reduction of breast cancer screening in some patients. | 10.14218/jctp.2022.00003 | medrxiv |
10.1101/19003335 | Smoking, DNA methylation and lung function: a Mendelian randomization analysis to investigate causal relationships | Jamieson, E.; Korogolou-Linden, R.; Wootton, R.; Guyatt, A.; Battram, T.; Burrows, K.; Gaunt, T.; Tobin, M.; Munafo, M.; Davey Smith, G.; Tilling, K.; Relton, C.; Richardson, T.; Richmond, R. | Rebecca Richmond | University of Bristol | 2019-08-02 | 1 | PUBLISHAHEADOFPRINT | cc_by | genetic and genomic medicine | https://www.medrxiv.org/content/early/2019/08/02/19003335.source.xml | Whether smoking-associated DNA methylation has a causal effect on lung function has not been thoroughly evaluated. We investigated the causal effects of 474 smoking-associated CpGs on forced expiratory volume in one second (FEV1) in two-sample Mendelian randomization (MR) using methylation quantitative trait loci and genome-wide association data for FEV1. We found evidence of a possible causal effect for DNA methylation on FEV1 at 18 CpGs (p<1.2x10^-4). Replication analysis supported a causal effect at three CpGs (cg21201401 (ZGPAT), cg19758448 (PGAP3) and cg12616487 (AHNAK)) (p<0.0028). DNA methylation did not clearly mediate the effect of smoking on FEV1, although DNA methylation at some sites may influence lung function via effects on smoking. Using multiple-trait colocalization, we found evidence of shared causal variants between lung function, gene expression and DNA methylation. Findings highlight potential therapeutic targets for improving lung function and possibly smoking cessation, although large, tissue-specific datasets are required to confirm these results. | 10.1016/j.ajhg.2020.01.015 | medrxiv |
10.1101/19003541 | Effectiveness of four oral antifungal drugs (fluconazole, griseofulvin, itraconazole, terbinafine) in current epidemic of altered dermatophytosis in India: A randomized pragmatic trial | Singh, S.; Chandra, U.; Verma, P.; Anchan, V. N.; Tilak, R. | Sanjay Singh | Institute of Medical Sciences, Banaras Hindu University, Varanasi, India | 2019-08-02 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | dermatology | https://www.medrxiv.org/content/early/2019/08/02/19003541.source.xml | Background: Dermatophyte infections have undergone unprecedented changes in India in the recent past. Clinical trials comparing effectiveness of 4 main oral antifungal drugs are not available. We tested effectiveness of oral fluconazole, griseofulvin, itraconazole and terbinafine in chronic and chronic-relapsing tinea corporis, tinea cruris and tinea faciei.
Methods: Two hundred microscopy confirmed patients were allocated to 4 groups, fluconazole (5mg/kg/day), griseofulvin (10 mg/kg/day), itraconazole (5mg/kg/day), and terbinafine (7.5mg/kg/day), by concealed block randomization and treated for 8 weeks or cure. Effectiveness was calculated based on intention to treat analysis.
Results: At 4 weeks, 4, 1, 2, and 4 patients were cured with fluconazole, griseofulvin, itraconazole and terbinafine, respectively (P=0.417). At 8 weeks, 21 (42%), 7 (14%), 33 (66%) and 14 (28%) patients were cured, respectively (P=0.000); itraconazole was superior to fluconazole, griseofulvin and terbinafine (P≤0.016). Relapse rates after 4 and 8 weeks of cure in different groups were similar. Numbers-needed-to-treat (NNT) (versus griseofulvin), calculated based on cure rates at 8 weeks, for itraconazole, fluconazole, and terbinafine were 2, 4 and 8, respectively.
Conclusion: In view of cure rates and NNT, itraconazole is the most effective drug, followed by fluconazole (daily), terbinafine and then griseofulvin, in chronic and chronic-relapsing dermatophytosis in India.
One Sentence Summary: Effectiveness of all four antifungals has declined, with itraconazole being the most effective currently in dermatophytosis in India. | 10.1111/bjd.19146 | medrxiv |
10.1101/19003558 | A phenome-wide analysis of short- and long-run disease incidence following Recurrent Pregnancy Loss using data from a 39-year period | Westergaard, D.; Nielsen, A. P.; Mortensen, L. H.; Nielsen, H. S.; Brunak, S. | Søren Brunak | University of Copenhagen | 2019-08-02 | 1 | PUBLISHAHEADOFPRINT | cc_no | obstetrics and gynecology | https://www.medrxiv.org/content/early/2019/08/02/19003558.source.xml | BackgroundPregnancy loss is one of the most frequent pregnancy complications. It is unclear how recurrent pregnancy loss (RPL) impacts disease risk later in life and if later disease risk is different in women with or without a live birth prior to RPL (primary vs. secondary RPL). We sought to investigate if women have an increased risk of disease following RPL, and if there was a difference between primary and secondary RPL.
MethodsUsing population-wide health care registry databases from Denmark we identified a cohort of 1,370,896 women between 12 and 40 years in the period January 1, 1977, to October 5, 2016 who had been pregnant. Each woman was followed on average for 15.8 years. Of these, 10,691 (0.77%) women fulfilled the criteria for RPL (50.0% had primary RPL). Relative Risk Ratios (RR) were calculated in a phenome-wide manner for diagnoses with a cumulative incidence proportion >0.1% in women with RPL. Diagnoses related to assessment and diagnosis of RPL and those appearing later in life were separated using a Mixture Model.
ResultsIn the full cohort of pregnant women, 0.77% (10,691) fulfilled the criteria for RPL (50.0% primary RPL). Compared to women without RPL, primary RPL increased the risk of subsequent cardiovascular disorders, including atherosclerosis (RR=2.45, 1.65-3.51 95% Uncertainty Interval (UI)), cerebral infarction (RR=1.87, 1.43-2.4 95% UI), heart failure (RR=1.97, 1.44-2.63 95% UI), and pulmonary embolism (RR=1.82, 1.32-2.46 95% UI). Women with secondary RPL had an increased risk of obstetric complications, e.g. placenta previa (RR=3.76, 2.9-4.8 95% UI), premature rupture of membrane (RR=2.55, 2.21-2.91 95% UI), intrapartum hemorrhage (RR=2.8, 1.77-4.31 95% UI), gestational hypertension (RR=2.2, 1.67-2.87 95% UI), and puerperal sepsis (RR=2.54, 1.8-3.5 95% UI). We also noticed associations to autoimmune, respiratory, gastro-intestinal and mental disorders in both subtypes.
ConclusionOur findings show that RPL is a risk factor for a spectrum of disorders. This can in part be due to increased screening following RPL, but it also suggests that RPL may directly influence or share etiology with a number of diseases later in life. Research into the pathophysiology of both pregnancy loss and later diseases merits further investigation. | 10.1161/JAHA.119.015069 | medrxiv |
10.1101/19003665 | Eating behavior trajectories in the first ten years of life and their relationship with BMI | Herle, M.; De Stavola, B.; Huebel, C.; Santos Ferreira, D.; Abdulkadir, M.; Yilmaz, Z.; Loos, R.; Bryant-Waugh, R.; Bulik, C.; Micali, N. | Nadia Micali | University College London | 2019-08-02 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | pediatrics | https://www.medrxiv.org/content/early/2019/08/02/19003665.source.xml | BackgroundChild eating behaviors are highly heterogeneous and their longitudinal impact on childhood weight is unclear. The objective of this study was to characterize eating behaviors during the first ten years of life and evaluate associations with BMI at age 11 years.
MethodData were parental reports of eating behaviors from 15 months to age 10 years (n=12,048) and standardized body mass index (zBMI) at age 11 years (n=4884) from the Avon Longitudinal Study of Parents and Children. Latent class growth analysis was used to derive latent classes of over-, under-, and fussy eating. Linear regression models for zBMI at 11 years on each set of classes were fitted to assess associations with eating behavior trajectories.
Results: We identified four classes of overeating; "low stable" (70%), "low transient" (15%), "late increasing" (11%), and "early increasing" (6%). The "early increasing" class was associated with higher zBMI (boys: β=0.83, 95%CI:0.65, 1.02; girls: β=1.1; 0.92, 1.28) compared to "low stable". Six classes were found for undereating; "low stable" (25%), "low transient" (37%), "low decreasing" (21%), "high transient" (11%), "high decreasing" (4%), and "high stable" (2%). The latter was associated with lower zBMI (boys: β=-0.79; -1.15, -0.42; girls: β=-0.76; -1.06, -0.45). Six classes were found for fussy eating; "low stable" (23%), "low transient" (15%), "low increasing" (28%), "high decreasing" (14%), "low increasing" (13%), "high stable" (8%). The "high stable" class was associated with lower zBMI (boys: β=-0.49; -0.68, -0.30; girls: β=-0.35; -0.52, -0.18).
ConclusionsEarly increasing overeating during childhood is associated with higher zBMI at age 11. High persistent levels of undereating and fussy eating are associated with lower zBMI. Longitudinal trajectories of eating behaviors may help identify children potentially at risk of adverse weight outcomes. | 10.1038/s41366-020-0581-z | medrxiv |
10.1101/19003665 | Eating behavior trajectories in the first ten years of life and their relationship with BMI | Herle, M.; De Stavola, B.; Huebel, C.; Santos Ferreira, D.; Abdulkadir, M.; Yilmaz, Z.; Loos, R.; Bryant-Waugh, R.; Bulik, C.; Micali, N. | Nadia Micali | University College London | 2020-01-09 | 2 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | pediatrics | https://www.medrxiv.org/content/early/2020/01/09/19003665.source.xml | BackgroundChild eating behaviors are highly heterogeneous and their longitudinal impact on childhood weight is unclear. The objective of this study was to characterize eating behaviors during the first ten years of life and evaluate associations with BMI at age 11 years.
MethodData were parental reports of eating behaviors from 15 months to age 10 years (n=12,048) and standardized body mass index (zBMI) at age 11 years (n=4884) from the Avon Longitudinal Study of Parents and Children. Latent class growth analysis was used to derive latent classes of over-, under-, and fussy eating. Linear regression models for zBMI at 11 years on each set of classes were fitted to assess associations with eating behavior trajectories.
Results: We identified four classes of overeating; "low stable" (70%), "low transient" (15%), "late increasing" (11%), and "early increasing" (6%). The "early increasing" class was associated with higher zBMI (boys: β=0.83, 95%CI:0.65, 1.02; girls: β=1.1; 0.92, 1.28) compared to "low stable". Six classes were found for undereating; "low stable" (25%), "low transient" (37%), "low decreasing" (21%), "high transient" (11%), "high decreasing" (4%), and "high stable" (2%). The latter was associated with lower zBMI (boys: β=-0.79; -1.15, -0.42; girls: β=-0.76; -1.06, -0.45). Six classes were found for fussy eating; "low stable" (23%), "low transient" (15%), "low increasing" (28%), "high decreasing" (14%), "low increasing" (13%), "high stable" (8%). The "high stable" class was associated with lower zBMI (boys: β=-0.49; -0.68, -0.30; girls: β=-0.35; -0.52, -0.18).
ConclusionsEarly increasing overeating during childhood is associated with higher zBMI at age 11. High persistent levels of undereating and fussy eating are associated with lower zBMI. Longitudinal trajectories of eating behaviors may help identify children potentially at risk of adverse weight outcomes. | 10.1038/s41366-020-0581-z | medrxiv |
10.1101/19003798 | Characteristics of Patients that Substitute Medical Cannabis for Alcohol | Hayat, A.; Piper, B. J. | Brian James Piper | Geisinger Commonwealth School of Medicine | 2019-08-02 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | pharmacology and therapeutics | https://www.medrxiv.org/content/early/2019/08/02/19003798.source.xml | AimsA substitution effect occurs when patients substitute Medical Cannabis (MC) for another drug. Over three-quarters (76.7%) of New England dispensary members reported reducing their use of opioids and two-fifths (42.0%) decreased their use of alcohol after starting MC (Piper et al. 2017). The objective of this exploratory study was to identify any factors which differentiate alcohol substituters from those that do not modify their alcohol use after starting MC (non-substituters).
MethodsAmong dispensary patients (N=1,477), over two-thirds with chronic pain, that completed an online survey, 7.4% indicated that they regularly consumed alcohol. Comparisons were made to identify any demographic or health history characteristics which differentiated alcohol substituters (N=47) from non-substituters (N=65). Respondents selected from among a list of 37 diseases and health conditions (e.g. diabetes, sleep disorders) and the total number was calculated.
Results: Substituters and non-substituters were indistinguishable in terms of sex, age, or prior drug history. Substituters were significantly more likely to be employed (68.1%) than non-substituters (51.1%). Substituters also reported having significantly more health conditions and diseases (3.3 ± 2.0) than non-substituters (2.4 ± 1.4).
ConclusionsThis small study offers some insights into the profile of patients whose self-reported alcohol intake decreased following initiation of MC. Alcohol substituters had more other health conditions but also were more likely to be employed which may indicate that they fit a social drinker profile. Additional prospective or controlled research into the alcohol substitution effect following MC with a sample with more advanced alcohol misuse may be warranted.
Short summaryA substitution effect with medical cannabis replacing prescription opioids has been reported but less is known for alcohol. This study evaluated characteristics which might differentiate alcohol substituters (N=47) from non-substituters (N=65) among dispensary members. Substituters were significantly more likely to be employed and have more health conditions than non-substituters. | 10.1080/02791072.2019.1694199 | medrxiv |
10.1101/19003749 | Role of miRNAs induced by oxidized low-density lipoproteins in coronary artery disease: the REGICOR Study. | Degano, I. R.; Subirana, I.; Garcia-Mateo, N.; Cidad, P.; Munoz-Aguayo, D.; Puigdecanet, E.; Nonell, L.; Vila, J.; Camps, A.; Crepaldi, F.; de Gonzalo-Calvo, D.; Llorente-Cortes, V.; Perez-Garcia, M. T.; Elosua, R.; Fito, M.; Marrugat, J. | Jaume Marrugat | Hospital del Mar Medical Research Institute, Barcelona, Spain | 2019-08-02 | 1 | PUBLISHAHEADOFPRINT | cc_no | cardiovascular medicine | https://www.medrxiv.org/content/early/2019/08/02/19003749.source.xml | AimsCurrent risk prediction tools are not accurate enough to identify most individuals at high coronary risk. On the other hand, oxidized low-density lipoproteins (ox-LDLs) and miRNAs are actively involved in atherosclerosis. Our aim was to examine the association of ox-LDL-induced miRNAs with coronary artery disease (CAD), and to assess their predictive capacity of future CAD.
Methods and results: Human endothelial and vascular smooth muscle cells were treated with oxidized or native LDLs (nLDL), and their miRNA expression was measured with the miRNA 4.0 array, and analyzed with moderated t-tests. Differentially expressed miRNAs and others known to be associated with CAD, were examined in serum samples of 500 acute myocardial infarction (AMI) patients and 500 healthy controls, and baseline serum of 117 incident CAD cases and c 485 randomly-selected cohort participants (case-cohort). Both were developed within the REGICOR AMI Registry and population cohorts from Girona. miRNA expression in serum was measured with custom OpenArray plates, and analyzed with fold change (age and sex-paired case-control) and survival models (case-cohort). Improvement in discrimination and reclassification by miRNAs was assessed. Twenty-one miRNAs were up- or down-regulated with ox-LDL in cell cultures. One of them (hsa-miR-122-5p, fold change=4.85) was upregulated in AMI cases. Of the 28 known CAD-associated miRNAs, 11 were upregulated in AMI cases, and 1 (hsa-miR-143-3p, hazard ratio=0.56 [0.38-0.82]) was associated with CAD incidence and improved reclassification.
Conclusion: We identified 2 novel miRNAs associated with ox-LDLs (hsa-miR-193b-5p and hsa-miR-1229-5p), and 1 miRNA that improved reclassification of healthy individuals (hsa-miR-143-3p). | 10.3390/jcm9051402 | medrxiv |
10.1101/19003822 | Meat intake and cancer risk: prospective analyses in UK Biobank | Knuppel, A.; Papier, K.; Fensom, G. K.; Appleby, P. N.; Schmidt, J. A.; Tong, T. Y. N.; Travis, R. C.; Key, T. J.; Perez-Cornago, A. | Anika Knuppel | Cancer Epidemiology Unit, Nuffield Department of Population Health, University of Oxford | 2019-08-02 | 1 | PUBLISHAHEADOFPRINT | cc_no | epidemiology | https://www.medrxiv.org/content/early/2019/08/02/19003822.source.xml | BackgroundRed and processed meat has been consistently associated with risk for colorectal cancer, but evidence for other cancer sites is limited and few studies have examined the association between poultry intake and cancer risk. We examined associations between total meat, red meat, processed meat and poultry intake and incidence for 20 common cancer sites.
Methods and FindingsWe analysed data from 475,023 participants (54% women) in UK Biobank. Participants were aged 37-73 years and cancer free at baseline. Information on meat consumption was based on a touchscreen questionnaire completed at baseline covering type and frequency of meat intake. Diet intake was re-measured a minimum of three times in a subsample (15%) using a web-based 24h dietary recall questionnaire. Multivariable-adjusted Cox proportional hazards models were used to determine the association between baseline meat intake and cancer incidence. Trends in risk across baseline meat intake categories were calculated by assigning a mean value to each category using estimates from the re-measured meat intakes. During a mean follow-up of 6.9 years, 28,955 participants were diagnosed with a malignant cancer. Total, red and processed meat intakes were each positively associated with risk of colorectal cancer (e.g. hazard ratio (HR) per 70 g/day higher intake of red and processed meat combined 1.31, 95%-confidence interval (CI) 1.14-1.52).
Red meat intake was positively associated with breast cancer (HR per 50 g/day higher intake 1.12, 1.01-1.24) and prostate cancer (1.15, 1.03-1.29). Poultry intake was positively associated with risk for cancers of the lymphatic and hematopoietic tissues (HR per 30g/day higher intake 1.16, 1.03-1.32). Only the associations with colorectal cancer were robust to Bonferroni correction for multiple comparisons. Study limitations include unrepresentativeness of the study sample for the UK population, low case numbers for less common cancers and the possibility of residual confounding.
ConclusionsHigher intakes of red and processed meat were associated with a higher risk of colorectal cancer. The observed positive associations of red meat consumption with breast and prostate cancer, and poultry intake with cancers of the lymphatic and hematopoietic tissues, require further investigation. | 10.1093/ije/dyaa142 | medrxiv |
10.1101/19002998 | Validation of Serum Neurofilaments as Prognostic & Potential Pharmacodynamic Biomarkers for ALS | Benatar, M.; Zhang, L.; Wang, L.; Granit, V.; Statland, J.; Barohn, R. J.; Swenson, A.; Ravits, J.; Jackson, C.; Burns, T. M.; Trivedi, J.; Pioro, E. P.; Caress, J.; Katz, J.; McCauley, J. L.; Rademakers, R.; Malaspina, A.; Ostrow, L. W.; Wuu, J.; CReATe Consortium, | Michael Benatar | University of Miami | 2019-08-03 | 1 | PUBLISHAHEADOFPRINT | cc_by_nd | neurology | https://www.medrxiv.org/content/early/2019/08/03/19002998.source.xml | ObjectiveIdentify preferred neurofilament assays, and clinically validate serum NfL and pNfH as prognostic and potential pharmacodynamic biomarkers relevant to ALS therapy development.
MethodsProspective, multi-center, longitudinal observational study of patients with ALS (n=229), primary lateral sclerosis (PLS, n=20) and progressive muscular atrophy (PMA, n=11). Biological specimens were collected, processed and stored according to strict standard operating procedures (SOPs) 1. Neurofilament assays were performed in a blinded manner by independent contract research organizations (CROs).
ResultsFor serum NfL and pNfH measured using the Simoa assay, missing data (i.e. both technical replicates below the lower limit of detection (LLD)) was not encountered. For the Iron Horse and Euroimmun pNfH assays, such missingness was encountered in ~4% and ~10% of serum samples respectively. Mean coefficients of variation (CVs) for pNfH in serum and CSF were ~4-5% and ~2-3% respectively in all assays. Baseline NfL concentration, but not pNfH, predicted the future ALSFRS-R slope and survival.
Incorporation of baseline serum NfL into mixed effects models of ALSFRS-R slopes yields an estimated sample size saving of ~8%. Depending on the method used to estimate effect size, use of serum NfL (and perhaps pNfH) as pharmacodynamic biomarkers, instead of the ALSFRS-R slope, yields significantly larger sample size savings.
ConclusionsSerum NfL may be considered a clinically validated prognostic biomarker for ALS. Serum NfL (and perhaps pNfH), quantified using the Simoa assay, have potential utility as pharmacodynamic biomarkers of treatment effect. | 10.1212/WNL.0000000000009559 | medrxiv |
10.1101/19002998 | Validation of Serum Neurofilaments as Prognostic & Potential Pharmacodynamic Biomarkers for ALS | Benatar, M.; Zhang, L.; Wang, L.; Granit, V.; Statland, J.; Barohn, R. J.; Swenson, A.; Ravits, J.; Jackson, C.; Burns, T. M.; Trivedi, J.; Pioro, E. P.; Caress, J.; Katz, J.; McCauley, J. L.; Rademakers, R.; Malaspina, A.; Ostrow, L. W.; Wuu, J.; CReATe Consortium, | Michael Benatar | University of Miami | 2019-10-10 | 2 | PUBLISHAHEADOFPRINT | cc_by_nd | neurology | https://www.medrxiv.org/content/early/2019/10/10/19002998.source.xml | ObjectiveIdentify preferred neurofilament assays, and clinically validate serum NfL and pNfH as prognostic and potential pharmacodynamic biomarkers relevant to ALS therapy development.
MethodsProspective, multi-center, longitudinal observational study of patients with ALS (n=229), primary lateral sclerosis (PLS, n=20) and progressive muscular atrophy (PMA, n=11). Biological specimens were collected, processed and stored according to strict standard operating procedures (SOPs) 1. Neurofilament assays were performed in a blinded manner by independent contract research organizations (CROs).
ResultsFor serum NfL and pNfH measured using the Simoa assay, missing data (i.e. both technical replicates below the lower limit of detection (LLD)) was not encountered. For the Iron Horse and Euroimmun pNfH assays, such missingness was encountered in ~4% and ~10% of serum samples respectively. Mean coefficients of variation (CVs) for pNfH in serum and CSF were ~4-5% and ~2-3% respectively in all assays. Baseline NfL concentration, but not pNfH, predicted the future ALSFRS-R slope and survival.
Incorporation of baseline serum NfL into mixed effects models of ALSFRS-R slopes yields an estimated sample size saving of ~8%. Depending on the method used to estimate effect size, use of serum NfL (and perhaps pNfH) as pharmacodynamic biomarkers, instead of the ALSFRS-R slope, yields significantly larger sample size savings.
ConclusionsSerum NfL may be considered a clinically validated prognostic biomarker for ALS. Serum NfL (and perhaps pNfH), quantified using the Simoa assay, have potential utility as pharmacodynamic biomarkers of treatment effect. | 10.1212/WNL.0000000000009559 | medrxiv |
10.1101/19003012 | Improvements in the incidence and survival of cancer and cardiovascular but not infectious disease have driven recent mortality improvements in Scotland: nationwide cohort study of linked hospital admission and death records 2001-2016 | Timmers, P. R. H. J.; Kerssens, J. J.; Minton, J. W.; Grant, I.; Wilson, J. F.; Campbell, H.; Fischbacher, C. M.; Joshi, P. K. | Paul R. H. J. Timmers | Centre for Global Health Research, Usher Institute of Population Health Sciences and Informatics, University of Edinburgh, Edinburgh, UK | 2019-08-05 | 1 | PUBLISHAHEADOFPRINT | cc_by | public and global health | https://www.medrxiv.org/content/early/2019/08/05/19003012.source.xml | ObjectivesTo identify the causes and future trends underpinning improvements in life expectancy in Scotland and quantify the relative contributions of disease incidence and survival.
DesignPopulation-based study.
SettingLinked secondary care and mortality records across Scotland.
Participants1,967,130 individuals born between 1905 and 1965, and resident in Scotland throughout 2001-2016.
Main outcome measuresHospital admission rates and survival in the five years following admission for 28 diseases, stratified by sex and socioeconomic status.
ResultsThe five hospital admission diagnoses associated with the greatest burden of death subsequent to admission were "Influenza and pneumonia", "Symptoms and signs involving the circulatory and respiratory systems", "Malignant neoplasm of respiratory and intrathoracic organs", "Symptoms and signs involving the digestive system and abdomen", and "General symptoms and signs". Using disease trends, we modelled a mean mortality hazard ratio of 0.737 (95% CI 0.730-0.745) across decades of birth, equivalent to a life extension of ~3 years per decade. This improvement was 61% (30%-93%) accounted for by improvements in disease survival after hospitalisation (principally cancer) with the remainder accounted for by a fall in hospitalisation incidence (principally heart disease and cancer). In contrast, deteriorations in the incidence and survival of infectious diseases reduced mortality improvements by 9% (~3.3 months per decade). Overall, health-driven mortality improvements were slightly greater for men than women (due to greater falls in disease incidence), and generally similar across socioeconomic deciles. We project mortality improvements will continue over the next decade but will slow down by 21% because much of the progress in disease survival has already been achieved.
ConclusionMorbidity improvements broadly explain observed improvements in overall mortality, with progress on the prevention and treatment of heart disease and cancer making the most significant contributions. The gaps between men and womens morbidity and mortality are closing, but the gap between socioeconomic groups is not. A slowing trend in improvements in morbidity may explain the stalling in improvements of period life expectancies observed in recent studies in the UK. However, our modelled slowing of improvements could be offset if we achieve even faster improvements in the major diseases contributing to the burden of death, or if we improve prevention and survival of diseases which have deteriorated recently, such as infectious disease, in the future.
Summary box
What is already known on this topic
- Long term improvements in Scottish mortality have slowed down recently, while life expectancy inequalities between socioeconomic classes are increasing.
- Deaths attributed to ischaemic heart disease and stroke in Scotland have declined in the last two decades.
What this study adds
- Gains in life expectancy can largely be attributed to improvements in cancer survival and falls in incidence of cancer and cardiovascular disease.
- The hospitalisation rate and survival of several infectious diseases have deteriorated, and for urinary infections, this decline has been more rapid in more socioeconomically deprived classes.
- Improvements in morbidity are projected to slow down, with much progress in survival of heart disease and cancer already achieved, and align with the recently observed slow-down in mortality improvements. | 10.1136/bmjopen-2019-034299 | medrxiv |
10.1101/19001214 | Mental Health in UK Biobank Revised | Davis, K. A. S.; Coleman, J. R. I.; Adams, M.; Allen, N.; Breen, G.; Cullen, B.; Dickens, C. M.; Fox, E.; Graham, N.; Holliday, J.; Howard, L. M.; John, A.; Lee, W.; McCabe, R.; McIntosh, A. M.; Pearsall, R.; Smith, D. J.; Sudlow, C.; Ward, J.; Zammit, S.; Hotopf, M. | Matthew Hotopf | KCL Institute of Psychiatry Psychology and Neuroscience and UK Biobank | 2019-08-05 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/08/05/19001214.source.xml | This paper corrects and updates a paper published in BJPsych Open 2018 "Mental Health in UK Biobank" (https://doi.org/10.1192/bjo.2018.12) that was voluntarily retracted following the finding of errors in the coding of the variable for alcohol use disorder. Notably, the percentage of participants reaching threshold for alcohol use disorder on the Alcohol Use Disorder Identification Tool increased from 7% to 21%.
BackgroundUK Biobank is a well-characterised cohort of over 500,000 participants that offers unique opportunities to investigate multiple diseases and risk factors. An online mental health questionnaire completed by UK Biobank participants expands the potential for research into mental disorders.
MethodsAn expert working group designed the questionnaire, using established measures where possible, and consulting with a service user group regarding acceptability. Operational criteria were agreed for defining likely disorder and risk states, including lifetime depression, mania/hypomania, generalised anxiety disorder, unusual experiences and self-harm, and current post-traumatic stress and alcohol use disorders.
Results157,366 completed online questionnaires were available by August 2017. Comparison of self-reported diagnosed mental disorder with a contemporary study shows a similar prevalence, despite respondents being of higher average socioeconomic status. Lifetime depression was the most common finding in 24% of participants (37,434), with current alcohol use disorder criteria met by 21% (32,602), while other criteria were met by less than 8% of the participants. There was extensive comorbidity among the syndromes. Mental disorders were associated with a high neuroticism score, adverse life events and long-term illness; addiction and bipolar affective disorder in particular were associated with measures of deprivation.
ConclusionsThe questionnaire represents a very large mental health survey in itself, and the results presented here show high face validity, although caution is needed due to selection bias. Built into UK Biobank, these data intersect with other health data to offer unparalleled potential for crosscutting biomedical research involving mental health. | 10.1192/bjo.2019.100 | medrxiv |
10.1101/19003483 | Investigating the attitudes of Canadian paramedic students towards homelessness. | Cochrane, A.; Pithia, P.; Laird, E.; Mifflin, K.; Sonley-Long, V.; Batt, A. M. | Alan M Batt | Fanshawe College | 2019-08-06 | 1 | PUBLISHAHEADOFPRINT | cc_by_nd | public and global health | https://www.medrxiv.org/content/early/2019/08/06/19003483.source.xml | IntroductionWhen paramedics are dispatched, it is expected that every patient receives the same level of care regardless of variable factors. Homelessness is a growing social issue across Canada that is particularly prevalent in urban areas. The quality of healthcare delivered to individuals experiencing homelessness may be influenced by negative attitudes held by healthcare professionals. There is an absence of literature quantifying the perspectives of paramedics towards homelessness; therefore, the focus of this study was to identify the attitudes of paramedic students towards homelessness and to continue the conversation in regards to the evolving educational needs of paramedic students.
MethodsThis study employed a longitudinal design of a convenience sample of first year paramedic students in a college program in Ontario, Canada. The Health Professionals Attitude Towards the Homeless Inventory (HPATHI) was distributed to participants before and after placement and clinical exposure. The questionnaire includes 19 statements which participants respond to on a Likert scale. Mean scores were calculated, and statements were categorized into attitudes, interest, and confidence. Data were collected post-placement on interactions with persons experiencing homelessness.
ResultsA total of 52 first year paramedic students completed the HPATHI pre-placement and 47 completed the questionnaire post-placement. Mean scores for attitudes (pre 3.64, SD 0.49; post 3.85, SD 0.38, p=0.032), interest (pre 3.91, SD 0.40; post 3.84, SD 0.39,p=0.51) and confidence (pre 4.02, SD 0.50; post 3.71, SD 0.67, p=0.004) were largely positive, but there was a demonstrated decreasing trend in confidence with, and interest in, working with those experiencing homelessness. Participants reported an average of 60 hours of placement, during which 15 participants (32%) reported interactions with people experiencing homelessness.
ConclusionFirst year paramedic students demonstrate overall positive attitudes towards those experiencing homelessness, and the mean score for attitudes improved over the surveys. However, there were demonstrable decreases in confidence and interest over time, which may be related to the type and frequency of interactions during clinical placement. Paramedic education programs may benefit from the inclusion of focused education on homelessness, specific clinical experiences, and education related to social determinants of health. | null | medrxiv |
10.1101/19004069 | Enhanced mindfulness based stress reduction (MBSR+) in episodic migraine: a randomized clinical trial with MRI outcomes | Seminowicz, D. A.; Burrowes, S.; Kearson, A.; Zhang, J.; Krimmel, S.; Samawi, L.; Furman, A.; Keaser, M.; Gould, N.; Magyari, T.; White, L.; Goloubeva, O.; Goyal, M.; Peterlin, L.; Haythornthwaite, J. | David A Seminowicz | University of Maryland Baltimore | 2019-08-06 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | pain medicine | https://www.medrxiv.org/content/early/2019/08/06/19004069.source.xml | We aimed to evaluate the efficacy of an enhanced mindfulness based stress reduction (MBSR+) versus stress management for headache (SMH). We performed a randomized, assessor-blind, clinical trial of 98 adults with episodic migraine recruited at a single academic center comparing MBSR+ (n=50) to SMH (n=48). MBSR+ and SMH were delivered weekly by group for 8 weeks, then bi-weekly for another 8 weeks. The primary clinical outcome was reduction in headache days from baseline to 20 weeks. MRI outcomes included activity of left dorsolateral prefrontal cortex (DLPFC) and cognitive task network during cognitive challenge, resting state connectivity of right dorsal anterior insula (daINS) to DLPFC and cognitive task network, and gray matter volume of DLPFC, daINS, and anterior midcingulate. Secondary outcomes were headache-related disability, pain severity, response to treatment, migraine days, and MRI whole-brain analyses. Reduction in headache days from baseline to 20 weeks was greater for MBSR+ (7.8 [95%CI, 6.9-8.8] to 4.6 [95%CI, 3.7-5.6]) than for SMH (7.7 [95%CI 6.7-8.7] to 6.0 [95%CI, 4.9-7.0]) (P=0.04). 52% of the MBSR+ group showed a response to treatment (50% reduction in headache days) compared with 23% in the SMH group (P=0.004). Reduction in headache-related disability was greater for MBSR+ (59.6 [95%CI, 57.9-61.3] to 54.6 [95%CI, 52.9-56.4]) than SMH (59.6 [95%CI, 57.7-61.5] to 57.5 [95%CI, 55.5-59.4]) (P=0.02). There were no differences in clinical outcomes at 52 weeks or MRI outcomes at 20 weeks, although changes related to cognitive networks with MBSR+ were observed. MBSR+ is an effective treatment option for episodic migraine. | 10.1097/j.pain.0000000000001860 | medrxiv |
10.1101/19004069 | Enhanced mindfulness based stress reduction (MBSR+) in episodic migraine: a randomized clinical trial with MRI outcomes | Seminowicz, D. A.; Burrowes, S.; Kearson, A.; Zhang, J.; Krimmel, S.; Samawi, L.; Furman, A.; Keaser, M.; Gould, N.; Magyari, T.; White, L.; Goloubeva, O.; Goyal, M.; Peterlin, L.; Haythornthwaite, J. | David A Seminowicz | University of Maryland Baltimore | 2020-02-07 | 2 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | pain medicine | https://www.medrxiv.org/content/early/2020/02/07/19004069.source.xml | We aimed to evaluate the efficacy of an enhanced mindfulness based stress reduction (MBSR+) versus stress management for headache (SMH). We performed a randomized, assessor-blind, clinical trial of 98 adults with episodic migraine recruited at a single academic center comparing MBSR+ (n=50) to SMH (n=48). MBSR+ and SMH were delivered weekly by group for 8 weeks, then bi-weekly for another 8 weeks. The primary clinical outcome was reduction in headache days from baseline to 20 weeks. MRI outcomes included activity of left dorsolateral prefrontal cortex (DLPFC) and cognitive task network during cognitive challenge, resting state connectivity of right dorsal anterior insula (daINS) to DLPFC and cognitive task network, and gray matter volume of DLPFC, daINS, and anterior midcingulate. Secondary outcomes were headache-related disability, pain severity, response to treatment, migraine days, and MRI whole-brain analyses. Reduction in headache days from baseline to 20 weeks was greater for MBSR+ (7.8 [95%CI, 6.9-8.8] to 4.6 [95%CI, 3.7-5.6]) than for SMH (7.7 [95%CI 6.7-8.7] to 6.0 [95%CI, 4.9-7.0]) (P=0.04). 52% of the MBSR+ group showed a response to treatment (50% reduction in headache days) compared with 23% in the SMH group (P=0.004). Reduction in headache-related disability was greater for MBSR+ (59.6 [95%CI, 57.9-61.3] to 54.6 [95%CI, 52.9-56.4]) than SMH (59.6 [95%CI, 57.7-61.5] to 57.5 [95%CI, 55.5-59.4]) (P=0.02). There were no differences in clinical outcomes at 52 weeks or MRI outcomes at 20 weeks, although changes related to cognitive networks with MBSR+ were observed. MBSR+ is an effective treatment option for episodic migraine. | 10.1097/j.pain.0000000000001860 | medrxiv |
10.1101/19004234 | Time to death and its Predictors among Adult TB/HIV Co-Infected Patients in Mizan Tepi University Teaching Hospital, South West Ethiopia | Wondimu, W.; Kabeta, T.; Dube, L. | Wondimagegn Wondimu | Mizan Tepi University | 2019-08-07 | 1 | PUBLISHAHEADOFPRINT | cc_no | hiv aids | https://www.medrxiv.org/content/early/2019/08/07/19004234.source.xml | BackgroundTuberculosis (TB) and Human Immuno Deficiency Virus (HIV) co-infection represents a complex pathogenic scenario with synergistic effect and leads to about 300,000 HIV-associated TB deaths in the world in 2017. Despite this burden of death, time to death and its predictors among TB-HIV co-infected patient was not adequately studied; and the existing evidences are inconsistent. Therefore, this study was aimed to determine time to death and identify its predictors among adult TB/HIV co-infected patients.
MethodA retrospective cohort study was conducted by reviewing registers of 364 randomly selected TB/HIV co-infected patients enrolled in health care from July 2, 2007 to July 1, 2017 at Mizan Tepi University Teaching Hospital. The hospital is located in Bench Maji Zone, South West Ethiopia. Data were collected from March 1 through 31, 2018, entered into EpiData 3.1 and exported to SPSS version 21. Each patient was followed from the date of TB treatment initiation until death, loss to follow-up or treatment completion; events other than death were considered censored. After checking the proportional hazards assumption, Cox regression was used to identify the predictors. In bivariable analyses, a P-value ≤ 0.25 was used to identify candidate variables for multivariable analysis. The 95% CI of the hazard ratio (HR) with respective P-value <0.05 was used to declare significance in the final model.
ResultAll the 364 patients were followed for 1,654 person months. There were 83 (22.8%) deaths, and most, 38 (45.8%), occurred within the first two months of anti-TB treatment initiation. The overall incidence rate and median survival time were 5.02 per 100 person months (95% CI: 4.05, 6.22) and 10 months respectively. Significantly better survival was observed among patients with CD4 ≥ 200 cells/mm3 (P<0.001), who had a history of cotrimoxazole preventive therapy (CPT) use (P<0.001), who disclosed their HIV status (P<0.001) and with working functional status (P<0.001). Not using CPT (adjusted hazard ratio [AHR] =1.72; P=0.023), bedridden functional status (AHR=2.55; P=0.007), not disclosing HIV status (AHR=4.03; P<0.001) and CD4 < 200 cells/mm3 (AHR=6.05; P<0.001) were predictors of time to death among TB/HIV co-infected patients.
ConclusionThe median survival time was 10 months and poor survival was associated with low CD4 count, not using CPT, not disclosing HIV status and having bedridden functional status. Close monitoring of bedridden and low CD4 count patients, prompt CPT initiation and encouraging HIV status disclosure are recommended. | 10.2147/HIV.S242756 | medrxiv |
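The TB/HIV cohort record above reports a person-time incidence rate and Kaplan-Meier-style survival estimates derived from Cox-screened predictors. The sketch below is a minimal, hypothetical illustration of how such quantities are typically computed in Python with the lifelines package, run on simulated data; it is not the authors' analysis code, and the numbers it prints are arbitrary.

```python
# Minimal illustration (not the study's code): person-time incidence rate and a
# Kaplan-Meier survival curve for a TB/HIV-style cohort. Data are synthetic.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(7)
n = 364
follow_up = rng.uniform(1, 12, size=n)   # months each patient was under observation
died = rng.random(n) < 0.23              # roughly the 22.8% deaths reported above

rate = died.sum() / follow_up.sum() * 100  # events per 100 person-months
print(f"{died.sum()} deaths over {follow_up.sum():.0f} person-months "
      f"= {rate:.2f} per 100 person-months")

kmf = KaplanMeierFitter()
kmf.fit(durations=follow_up, event_observed=died)
# Estimated probability of surviving past 2, 6 and 12 months of anti-TB treatment.
print(kmf.survival_function_at_times([2, 6, 12]))
```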
10.1101/19004309 | Self-blaming emotions in major depression: a randomised pilot trial comparing fMRI neurofeedback training with self-guided psychological strategies (NeuroMooD) | Jaeckle, T.; Williams, S. C. R.; Barker, G. J.; Basilio, R.; Carr, E.; Goldsmith, K.; Colasanti, A.; Giampietro, V.; Cleare, A.; Young, A. H.; Moll, J.; Zahn, R. | Roland Zahn | King\'s College London | 2019-08-09 | 1 | PUBLISHAHEADOFPRINT | cc_no | psychiatry and clinical psychology | https://www.medrxiv.org/content/early/2019/08/09/19004309.source.xml | BackgroundOvergeneralised self-blame and worthlessness are key symptoms of major depressive disorder (MDD) and were previously associated with self-blame-selective changes in connectivity between right superior anterior temporal lobe (rSATL) and subgenual frontal areas. In a previous study, remitted MDD patients successfully modulated guilt-selective rSATL-subgenual cingulate connectivity using real-time functional magnetic resonance imaging (rtfMRI) neurofeedback training, thereby increasing their self-esteem. The feasibility and potential of using this approach in symptomatic MDD were unknown.
MethodsThis single-blind pre-registered randomised controlled pilot trial tested the clinical potential of a novel self-guided psychological intervention with and without additional rSATL-posterior subgenual cortex (SC) rtfMRI neurofeedback, targeting self-blaming emotions in insufficiently recovered people with MDD and early treatment-resistance (n=43, n=35 completers). Following a diagnostic baseline assessment, patients completed three self-guided sessions to rebalance self-blaming biases and a post-treatment assessment. The fMRI neurofeedback software FRIEND was used to measure rSATL-posterior SC connectivity, while the BDI-II was administered to assess depressive symptom severity as a primary outcome measure.
ResultsBoth interventions were demonstrated to be safe and beneficial, resulting in a mean reduction of MDD symptom severity by 46% and response rates of more than 55%, with no group difference. Secondary analyses, however, revealed a differential response on our primary outcome measure between MDD patients with and without DSM-5 defined anxious distress. Stratifying by anxious distress features was investigated, because this was found to be the most common subtype in our sample. MDD patients without anxious distress showed a higher response to rtfMRI neurofeedback training compared to the psychological intervention, with the opposite pattern found in anxious MDD. We explored potentially confounding clinical differences between subgroups and found that anxious MDD patients were much more likely to experience anger towards others as measured on our psychopathological interview which might play a role in their poorer response to neurofeedback. In keeping with the hypothesis that self-worth plays a key role in MDD, improvement on our primary outcome measure was correlated with increases in self-esteem after the intervention and this correlated with the frequency with which participants employed the strategies to tackle self-blame outside of the treatment sessions.
ConclusionsThese findings suggest that self-blame-selective rtfMRI neurofeedback training may be superior to a solely psychological intervention in non-anxious MDD, although further confirmatory studies are needed. The self-guided psychological intervention showed a surprisingly high clinical potential in the anxious MDD group, which needs further confirmation against treatment-as-usual. Future studies need to investigate whether self-blame-selective rSATL-SC connectivity changes are irrelevant in anxious MDD, which could explain their better response to the psychological intervention without interfering neurofeedback.
https://doi.org/10.1186/ISRCTN10526888 | 10.1017/s0033291721004797 | medrxiv |
10.1101/19003251 | The Effects of Metronome Frequency Differentially Affects Gait on a Treadmill and Overground in People with Parkinson's Disease | Wygand, M.; Chawla, G.; Browner, N.; Lewek, M. | Michael Lewek | UNC-Chapel Hill | 2019-08-09 | 1 | PUBLISHAHEADOFPRINT | cc_no | rehabilitation medicine and physical therapy | https://www.medrxiv.org/content/early/2019/08/09/19003251.source.xml | ObjectiveTo determine the effect of different metronome cue frequencies on spatiotemporal gait parameters when walking overground compared to walking on a treadmill in people with Parkinsons disease
DesignRepeated-measures, within-subject design
SettingResearch laboratory
ParticipantsTwenty-one people with Parkinsons disease (Hoehn & Yahr stage 1-3)
InterventionsParticipants walked overground and on a treadmill with and without metronome cues of 85%, 100%, and 115% of their baseline cadence for one minute each.
Main Outcome MeasuresGait speed, step length, and cadence
ResultsAn interaction effect between cue frequency and walking environment revealed that participants took longer steps during the 85% condition on the treadmill only. When walking overground, metronome cues of 85% and 115% of baseline cadence yielded decreases and increases, respectively, in both cadence and gait speed with no concomitant change in step length.
ConclusionsThese data suggest that people with PD are able to alter spatiotemporal gait parameters immediately when provided the appropriate metronome cue and walking environment. We propose to target shortened step lengths by stepping to the beat of slow frequency auditory cues while walking on a treadmill, whereas the use of fast frequency cues during overground walking can facilitate faster walking speeds. | 10.1016/j.gaitpost.2020.04.003 | medrxiv |
10.1101/19002535 | Comparison of student perception and exam validity, reliability and items difficulty: cross-sectional study. | Rezigalla, A. A.; Eleragi, A. M. E.; Elkhalifa, M. I.; Mohammed, A. M. A. | Assad Ali Rezigalla | Department of Basic medical sciences, College of Medicine, University of Bisha, Bisha 61922, P.O. Box 551, Saudi Arabia. | 2019-08-09 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | medical education | https://www.medrxiv.org/content/early/2019/08/09/19002535.source.xml | IntroductionStudent perception of an exam is a reflection of their feelings towards the exam items, while item analysis is a statistical analysis of students responses to exam items. The study was formulated to compare the students perception of the results of item analysis.
Material and methodsThis is a cross-sectional study conducted in the college of medicine from January to April 2019. The study uses a structured questionnaire and standardized item analysis of a student exam. Participants are students registered for semester two level year (2018-2019). Exclusion criteria included all students who refused to participate in the study or did not complete the questionnaire.
ResultThe response rate of the questionnaire was 88.9% (40/45). Students considered the exam as easy (70.4%). The average difficulty index of the exam is acceptable. KR-20 of the exam was 0.906. A significant correlation was reported between student perceptions towards exam difficulty and standard exam difficulty.
DiscussionStudent perceptions support the evidence of exam validity. Students can estimate exam difficulty. | 10.18502/sjms.v15i2.5503 | medrxiv |
10.1101/19002949 | Association between the ACE I/D gene polymorphism and progressive renal failure in autosomal dominant polycystic kidney disease: A meta-analysis | Pabalan, N.; Tharabenjasin, P.; Parcharoen, Y.; Tasanarong, A. | Noel Pabalan | Thammasat University | 2019-08-09 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | nephrology | https://www.medrxiv.org/content/early/2019/08/09/19002949.source.xml | ObjectiveThe angiotensin converting enzyme insertion/deletion (ACE I/D) gene polymorphism is involved in a wide range of clinical outcomes. This makes ACE I/D an important genetic marker. Updating the genetic profile of ACE I/D and raising the evidence for its role in renal disease is therefore needed. Reported associations of ACE I/D with progressive renal failure (PRF) in autosomal dominant polycystic kidney disease (ADPKD) have been inconsistent, prompting a meta-analysis to obtain more precise estimates.
MethodsMulti-database search yielded 18 articles for inclusion in the meta-analysis. Risks (odds ratios [ORs] and 95% confidence intervals) were estimated by comparing the ACE genotypes (heterozygote ID, homozygotes DD and II). Heterogeneous (random-effects) pooled associations were subjected to outlier treatment which yielded fixed-effects outcomes and split the findings into pre- (PRO) and post- (PSO) outlier status. Subgroup analysis was based on ethnicity (Asian/Caucasian) and minor allele frequency (maf). The ≥ 0.50 maf subgroup indicates higher frequency of the variant II genotype over that of the common DD genotype, otherwise, the subgroup is considered < 0.50 maf. Stability of the associative effects was assessed with sensitivity treatment. Temporal trend of association was examined with cumulative meta-analysis.
ResultsIn the PSO analysis, overall effects were null (ORs 0.99-1.02) but not in the subgroups (Asian and ≥ 0.50 maf), where in presence of the D allele (DD/ID) and the I allele (II), increased (ORs 1.63-5.62) and reduced (OR 0.22) risks were observed, respectively. Of these pooled effects, the Asian and ≥ 0.50 maf homozygous DD genotypes had high ORs (5.01-5.63) indicating elevated magnitude of effects that were highly significant (Pa < 10⁻⁵) and homogeneous (I² = 0%), in addition to their robustness. In contrast, the Caucasian and < 0.50 maf subgroup effects were: (i) non-heterogeneous (fixed-effects) at the outset, which did not require outlier treatment and (ii) non-significant (ORs 0.91-1.10, Pa = 0.15-0.79). Cumulative meta-analysis revealed increased precision of effects over time.
ConclusionsPRF in ADPKD impacted the Asian and ≥ 0.50 maf subgroups, where DD homozygote carriers were up to 6-fold more susceptible. These high-magnitude effects were highly significant, homogeneous and robust, indicating strong evidence of association. | null | medrxiv |
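The ADPKD record above pools per-study odds ratios and reports heterogeneity (I²) under fixed- and random-effects models. As a hedged illustration of the core arithmetic, the snippet below sketches a fixed-effect (inverse-variance) pooling of log-ORs with Cochran's Q and I²; every study-level OR and standard error in it is invented for illustration and does not reproduce the meta-analysis data.

```python
# Sketch of fixed-effect (inverse-variance) pooling of per-study odds ratios,
# with Cochran's Q and the I² heterogeneity statistic. Inputs are hypothetical.
import numpy as np

or_i = np.array([5.2, 4.1, 6.3, 5.6])           # hypothetical per-study odds ratios
se_log_or = np.array([0.45, 0.50, 0.60, 0.40])  # hypothetical SEs of log(OR)

log_or = np.log(or_i)
w = 1.0 / se_log_or**2                           # inverse-variance weights
pooled_log_or = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

ci_low, ci_high = np.exp(pooled_log_or + np.array([-1.96, 1.96]) * pooled_se)
q = np.sum(w * (log_or - pooled_log_or) ** 2)    # Cochran's Q
i2 = max(0.0, (q - (len(or_i) - 1)) / q) * 100   # I² heterogeneity (%)

print(f"Pooled OR = {np.exp(pooled_log_or):.2f} "
      f"(95% CI {ci_low:.2f}-{ci_high:.2f}), I² = {i2:.0f}%")
```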
10.1101/19001032 | Identifying Ashkenazi Jewish BRCA1/2 founder variants in individuals who do not self-report Jewish ancestry | Tennen, R. I.; Laskey, S. B.; Koelsch, B. L.; McIntyre, M. H.; The 23andMe Health Team, ; Tung, J. Y. | Joyce Y Tung | 23andMe | 2019-08-09 | 1 | PUBLISHAHEADOFPRINT | cc_by | genetic and genomic medicine | https://www.medrxiv.org/content/early/2019/08/09/19001032.source.xml | Current guidelines recommend BRCA1 and BRCA2 genetic testing for individuals with a personal or family history of certain cancers. Three BRCA1/2 founder variants -- 185delAG (c.68_69delAG), 5382insC (c.5266dupC), and 6174delT (c.5946delT) -- are common in the Ashkenazi Jewish population. We characterized a cohort of more than 2,800 research participants in the 23andMe database who carry one or more of the three Ashkenazi Jewish founder variants, evaluating two characteristics that are typically used to recommend individuals for BRCA testing: self-reported Jewish ancestry and family history of breast, ovarian, prostate, or pancreatic cancer. Of the 1,967 carriers who provided self-reported ancestry information, 21% did not self-report any Jewish ancestry; of these individuals, more than half (62%) do have detectable Ashkenazi Jewish genetic ancestry. In addition, of the 343 carriers who provided both ancestry and family history information, 44% did not have a first-degree family history of a BRCA-related cancer and, in the absence of a personal history of cancer, would therefore be unlikely to qualify for clinical genetic testing. These findings provide support for the growing call for broader access to BRCA genetic testing. | 10.1038/s41598-020-63466-x | medrxiv |
10.1101/19003087 | Host genetic variants near the PAX5 gene locus associate with susceptibility to invasive group A streptococcal disease | Parks, T.; Auckland, K.; Lamagni, T. L.; Mentzer, A. J.; Elliott, K.; Guy, R.; Cartledge, D.; Strakova, L.; O'Connor, D.; Pollard, A. J.; Chapman, S. J.; Thomas, M.; Brodlie, M.; Colot, J.; D'Ortenzio, E.; Baroux, N.; Mirabel, M.; Gilchrist, J. J.; Scott, J. A. G.; Williams, T. N.; Knight, J. C.; Steer, A. C.; Hill, A. V. S.; Sriskandan, S. | Tom Parks | University of Oxford | 2019-08-09 | 1 | PUBLISHAHEADOFPRINT | cc_by | infectious diseases | https://www.medrxiv.org/content/early/2019/08/09/19003087.source.xml | We undertook a genome-wide association study of susceptibility to invasive group A streptococcal (GAS) disease combining data from distinct clinical manifestations and ancestral populations. Amongst other signals, we identified a susceptibility locus located 18kb from PAX5, an essential B-cell gene, which conferred a nearly two-fold increased risk of disease (rs1176842, odds ratio 1.8, 95% confidence intervals 1.5-2.3, P=3.2x10-7). While further studies are needed, this locus could plausibly explain some inter-individual differences in antibody-mediated immunity to GAS, perhaps providing insight into the effects of intravenous immunoglobulin in streptococcal toxic shock. | null | medrxiv |
10.1101/19003376 | Increasing Heroin, Cocaine, and Buprenorphine Arrests Reported to the Maine Diversion Alert Program | Simpson, K. J.; Moran, M. T.; McCall, K. L.; Hebert, J.; Foster, M.; Simoyan, O.; Shah, D. T.; Desrosiers, C.; Nichols, S. D.; Piper, B. J. | Brian James Piper | Geisinger Commonwealth School of Medicine | 2019-08-09 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | addiction medicine | https://www.medrxiv.org/content/early/2019/08/09/19003376.source.xml | BackgroundThe opioid overdose crisis is especially pronounced in Maine. The Diversion Alert Program (DAP) was developed to combat illicit drug use and prescription drug diversion by facilitating communication between law enforcement and healthcare providers with the goal of limiting drug-related harms and criminal behaviors. Our objectives in this report were to analyze 2014-2017 DAP for: 1) trends in drug arrests and, 2) differences in arrests by offense, demographics (sex and age) and by region.
MethodsDrug charges (N = 8,193, 31.3% female, age = 33.1 ± 9.9) reported to the DAP were examined by year, demographics, and location.
ResultsThe most common substances of the 10,064 unique arrests reported were heroin (N = 2,203, 21.9%), crack/cocaine (N = 945, 16.8%), buprenorphine (N = 812, 8.1%), and oxycodone (N = 747, 7.4%). While the overall number of arrests reported to the DAP declined in 2017, the proportion of arrests involving opioids (heroin, buprenorphine, or fentanyl) and stimulants (cocaine/crack cocaine, or methamphetamine), increased (p < .05). Women had significantly increased involvement in arrests involving sedatives and miscellaneous pharmaceuticals (e.g. gabapentin) while men had an elevation in stimulant arrests. Heroin accounted for a lower percentage of arrests among individuals age > 60 (6.6%) relative to young-adults (18-29, 22.3%, p < .0001). Older-adults had significantly more arrests than younger-adults for oxycodone, hydrocodone, and marijuana.
ConclusionHeroin had the most arrests from 2014-2017. Buprenorphine, fentanyl and crack/cocaine arrests increased appreciably suggesting that improved treatment is needed to prevent further nonmedical use and overdoses. The Diversion Alert Program provided a unique data source for research, a harm-reduction tool for health care providers, and an informational resource for law enforcement. | 10.1016/j.forsciint.2019.109924 | medrxiv |
10.1101/19004382 | Elevated blood pressure in the emergency department - a risk factor for incident cardiovascular disease: An EHR-based cohort study | Oras, P.; Habel, H.; Skoglund, P.; Svensson, P. | Pontus Oras | Department of Clinical Science and Education, Sodersjukhuset, Karolinska Institutet, Stockholm, Sweden | 2019-08-10 | 1 | PUBLISHAHEADOFPRINT | cc_no | cardiovascular medicine | https://www.medrxiv.org/content/early/2019/08/10/19004382.source.xml | ObjectivesIn the emergency department (ED), high blood pressure (BP) is commonly observed but mostly used to evaluate patients health in the short-term. We aimed to study whether ED-measured BP is associated with incident atherosclerotic cardiovascular disease (ASCVD), myocardial infarction (MI), or stroke in long-term, and to estimate the number needed to screen (NNS) to prevent ASCVD.
DesignElectronic Health Records (EHR) and national register-based cohort study. The association between BP and incident ASCVD was studied with Cox-regression.
SettingTwo university hospital emergency departments in Sweden.
Data sourcesBP data were obtained from EDs EHR, and outcome information was acquired through the Swedish National Patient Register for all participants.
ParticipantsAll patients ≥18 years old who visited the EDs between 2010 and 2016, with an obtained BP (n=300,193).
Main outcome measuresIncident ASCVD, MI, and stroke during follow-up.
ResultsThe subjects were followed for a median of 42 months. 8,999 incident ASCVD events occurred (MI: 4,847, stroke: 6,661). Both diastolic and systolic BP (SBP) were associated with incident ASCVD, MI, and stroke, with a progressively increased risk for SBP within hypertension grade 1 (HR 1.15, 95% CI 1.06 to 1.24), 2 (HR 1.35, 95% CI 1.25 to 1.47), and 3 (HR 1.63, 95% CI 1.49 to 1.77). The six-year cumulative incidence of ASCVD was 12% for patients with SBP ≥180 mmHg compared to 2% for normal levels. To prevent one ASCVD event during the median follow-up, the NNS was estimated at 151 and the number needed to treat (NNT) at 71.
ConclusionsBP in the ED is associated with incident ASCVD, MI, and stroke. High BP recordings in EDs should not be disregarded as isolated events, but an opportunity to detect and improve treatment of hypertension. ED-measured BP provides an important and under-used tool with great potential to reduce morbidity and mortality associated with hypertension. | 10.1161/hypertensionaha.119.14002 | medrxiv |
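The ED blood-pressure record above derives hazard ratios from Cox regression of BP against incident ASCVD. As a hedged illustration of how such a model is commonly fitted, the sketch below simulates a cohort and fits a Cox proportional hazards model with the lifelines package; the variable names, simulated coefficients and data are all assumptions and do not reproduce the study's pipeline or results.

```python
# Illustrative sketch (not the study's analysis): Cox proportional hazards model
# relating ED-measured systolic blood pressure to time until a cardiovascular event.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 20_000
sbp = rng.normal(135, 22, n).clip(80, 230)   # simulated ED systolic BP (mmHg)
age = rng.normal(55, 15, n).clip(18, 95)     # simulated age (years)

# Simulate event times whose hazard rises with SBP and age (arbitrary coefficients).
baseline_rate = 0.0005
rate = baseline_rate * np.exp(0.015 * (sbp - 135) + 0.03 * (age - 55))
event_time = rng.exponential(1.0 / rate)     # months to event
censor_time = rng.uniform(6, 84, n)          # administrative censoring
observed = event_time <= censor_time
time = np.minimum(event_time, censor_time)

df = pd.DataFrame({"months": time, "ascvd": observed.astype(int),
                   "sbp": sbp, "age": age})
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="ascvd")
cph.print_summary()                           # hazard ratios per unit SBP and age
```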
10.1101/19004242 | Causal mediation analysis of Subclinical Hypothyroidism in children with obesity and Non-Alcoholic Fatty Liver Disease | Nichols, P.; Pan, Y.; May, B.; Pavlicova, M.; Mencin, A.; Thaker, V. V. | Vidhu V Thaker | Columbia University Irving Medical Center | 2019-08-10 | 1 | PUBLISHAHEADOFPRINT | cc_no | pediatrics | https://www.medrxiv.org/content/early/2019/08/10/19004242.source.xml | ContextNonalcoholic Fatty Liver Disease (NAFLD) is a common co-morbidity of obesity and a leading cause of liver disease in children. Subclinical hypothyroidism (SH) occurs with obesity and may contribute to dysmetabolic state that predisposes to NAFLD.
ObjectiveTo assess the relationship between SH and NAFLD in children with biopsy-proven NAFLD compared to controls.
Design and MethodsIn this retrospective study of cases from a registry of children with biopsy-proven NAFLD and age-matched controls, the association of SH with NAFLD and other cardiometabolic parameters was assessed followed by causal mediation analyses under the counter-factual framework.
ResultsSixty-six cases and 4067 age-matched controls were included in the study. Children with NAFLD were more likely to be male (74.6 vs 39.4%, p < 0.001), have higher modified BMI-z scores (2.3 ± 1.6 vs 1 ± 1.6, p < 0.001), and have abnormal metabolic parameters (TSH, ALT, HDL-C, non-HDL-C, LDL-C, and TG). Multivariate analyses controlling for age, sex and severity of obesity showed a significant association between the 4th quartile of TSH and NAFLD. Causal mediation analysis demonstrates that TSH mediates 44% of the effect of modified BMI-z score on NAFLD. This comprises 16.2% (OR = 1.1, p < 0.001) caused by the indirect effect of TSH and its interaction with modified BMI-z, and 26.5% (OR = 1.1, p = 0.01) as an autonomous effect of TSH on NAFLD regardless of obesity.
ConclusionsThe association of SH and biopsy-proven NAFLD is demonstrated in a predominately Latino population. Further, a causal mediation analysis implicates an effect of TSH on NAFLD, independent of obesity. | 10.1371/journal.pone.0234985 | medrxiv |
10.1101/19004242 | Causal mediation analysis of Subclinical Hypothyroidism in children with obesity and Non-Alcoholic Fatty Liver Disease | Nichols, P.; Pan, Y.; May, B.; Pavlicova, M.; Rausch, J.; Mencin, A.; Thaker, V. V. | Vidhu V Thaker | Columbia University Irving Medical Center | 2019-11-26 | 2 | PUBLISHAHEADOFPRINT | cc_no | pediatrics | https://www.medrxiv.org/content/early/2019/11/26/19004242.source.xml | ContextNonalcoholic Fatty Liver Disease (NAFLD) is a common co-morbidity of obesity and a leading cause of liver disease in children. Subclinical hypothyroidism (SH) occurs with obesity and may contribute to dysmetabolic state that predisposes to NAFLD.
ObjectiveTo assess the relationship between SH and NAFLD in children with biopsy-proven NAFLD compared to controls.
Design and MethodsIn this retrospective study of cases from a registry of children with biopsy-proven NAFLD and age-matched controls, the association of SH with NAFLD and other cardiometabolic parameters was assessed followed by causal mediation analyses under the counter-factual framework.
ResultsSixty-six cases and 4067 age-matched controls were included in the study. Children with NAFLD were more likely to be male (74.6 vs 39.4%, p < 0.001), have higher modified BMI-z scores (2.3 ± 1.6 vs 1 ± 1.6, p < 0.001), and have abnormal metabolic parameters (TSH, ALT, HDL-C, non-HDL-C, LDL-C, and TG). Multivariate analyses controlling for age, sex and severity of obesity showed a significant association between the 4th quartile of TSH and NAFLD. Causal mediation analysis demonstrates that TSH mediates 44% of the effect of modified BMI-z score on NAFLD. This comprises 16.2% (OR = 1.1, p < 0.001) caused by the indirect effect of TSH and its interaction with modified BMI-z, and 26.5% (OR = 1.1, p = 0.01) as an autonomous effect of TSH on NAFLD regardless of obesity.
ConclusionsThe association of SH and biopsy-proven NAFLD is demonstrated in a predominately Latino population. Further, a causal mediation analysis implicates an effect of TSH on NAFLD, independent of obesity. | 10.1371/journal.pone.0234985 | medrxiv |
10.1101/19004432 | Identifying barriers and facilitators to physical activity for people with scleroderma: A nominal group technique study | Harb, S.; Cumin, J.; Rice, D. B.; Pelaez, S.; Hudson, M.; Bartlett, S. J.; Roren, A.; Furst, D. E.; Frech, T. M.; Nguyen, C.; Nielson, W. R.; Thombs, B. D.; Shrier, I.; Patient Advisory Team, S.-P. | Ian Shrier | Lady Davis Institute for Medical Research, Jewish General Hospital | 2019-08-11 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | rheumatology | https://www.medrxiv.org/content/early/2019/08/11/19004432.source.xml | PurposeTo identify physical activity barriers and facilitators experienced by people with systemic sclerosis (SSc; scleroderma).
Materials and MethodsWe conducted nominal group technique sessions with SSc patients who shared barriers to physical activities, barrier-specific facilitators, and general facilitators. Participants rated importance of barriers and likelihood of using facilitators from 0-10, and indicated whether they had tried facilitators. Barriers and facilitators across sessions were subsequently merged to eliminate overlap; edited based on feedback from investigators, patient advisors, and clinicians; and categorized.
ResultsWe conducted nine sessions (n=41 total participants) and initially generated 181 barriers, 457 barrier-specific facilitators, and 20 general facilitators. The number of consolidated barriers (barrier-specific facilitators in parentheses) for each category was: 14 (61) for health and medical; 4 (23) for social and personal; 1 (3) for time, work, and lifestyle; and 1 (4) for environmental. There were 12 consolidated general facilitators. The consolidated items with ≥ 1/3 of participants' ratings ≥ 8 were: 15 barriers, 69 barrier-specific facilitators, and 9 general facilitators.
ConclusionsPeople with SSc reported many barriers related to health and medical aspects of SSc and several barriers in other categories. They reported facilitators to remain physically active despite the barriers.
Implications for Rehabilitation
- People with scleroderma experience difficulty being physically active due to the diverse and often severe manifestations of the disease, including involvement of the skin, musculoskeletal system, and internal organs.
- In addition to regular care of scleroderma-related symptoms, patients overcome many exercise challenges by selecting physical activities that are comfortable for them, adjusting the intensity and duration of activities, adapting activities, and using adapted equipment or other materials to reduce discomfort.
- Rehabilitation professionals should help people with scleroderma to tailor activity options to their capacity and needs when providing care and advice to promote physical activity. | 10.1080/09638288.2020.1742391 | medrxiv |
10.1101/19003194 | Insomnia, excessive daytime sleepiness, anxiety, depression and socioeconomic status among customer service employees in Canada | Etindele Sosso, F. | FA Etindele Sosso | Center for Advanced Research in Sleep Medicine | 2019-08-12 | 1 | PUBLISHAHEADOFPRINT | cc_by | occupational and environmental health | https://www.medrxiv.org/content/early/2019/08/12/19003194.source.xml | ObjectivesThe present study alerts on the potential effect of working full time in a call center as a risk factor for neuropsychiatric illnesses. It is the first study investigating deeply presence of anxiety, depression, insomnia and excessive daytime sleepiness among a large population of customer service employees. It has 3 specifics goals which were (1) document presence of sleep disorders among customer service advisors (2) document presence of anxiety and depression in this population (3) determine the influence of the socioeconomic status, duration in position and full time or part-time shift on the diseases above.
FindingsIt was found that the majority of people working in customer service are undergraduate students or hold a secondary/high school degree. They worked full time, were single and reported at least two of the neuropsychiatric disorders assessed in the present study. Among customer service advisors enrolled in this study, all neuropsychiatric disorders investigated were present and significantly higher for those working full time. Perceived socioeconomic status (pSES) was similar for full time and part time workers, with a mean score of 4.8 on the MacArthur scale of subjective social status. Results revealed that duration in position was an excellent predictor of insomnia, sleepiness and anxiety (respectively R² = 91.83%, R² = 81.23% and R² = 87.46%) but a moderate predictor of depression (R² = 69.14%). The pSES was a moderate predictor of sleep disorders (R² = 62.04% for insomnia and R² = 53.62% for sleepiness) but had a strong association with anxiety and depression (R² = 82.95% for anxiety and R² = 89.77% for depression). It was found that insomnia and anxiety are more prevalent among immigrants and international students compared to Canadians, while depression was similarly higher for Canadians and immigrants compared to international students. Sleepiness showed the same trend in all three subgroups.
ConclusionCustomer service employees are exposed to continuous stimulation of their cognitive functions in addition to various stressors which can progressively and silently damage the nervous system. Investigations of the mental and physical health of customer service advisors are worthy of interest, and understanding how their work, rotating shifts and socioeconomic status influence their resilience and performance at work may aid comprehension of similar health issues emerging in populations with similar occupations. | 10.5935/1984-0063.20190133 | medrxiv |
10.1101/19004333 | Immunoglobulin E as a biomarker for the overlap of atopic asthma and chronic obstructive pulmonary disease | Hersh, C. P.; Zacharia, S.; Prakash Arivu Chelvan, R.; Hayden, L. P.; Mirtar, A.; Zarei, S.; Putcha, N.; the COPDGene Investigators, | Craig P Hersh | Brigham and Women's Hospital | 2019-08-12 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | respiratory medicine | https://www.medrxiv.org/content/early/2019/08/12/19004333.source.xml | Asthma-COPD overlap (ACO) is a common clinical syndrome, yet there is no single objective definition. We hypothesized that Immunoglobulin E measurements could be used to refine the definition of ACO. In baseline plasma samples from 2870 subjects in the COPDGene Study, we measured total IgE levels and specific IgE levels to six common allergens. Compared to usual COPD, subjects with ACO had higher total IgE levels (median 67.0 vs 42.2 IU/ml) and more frequently had at least one positive specific IgE (43.5 vs 24.5%). We previously used a strict definition of ACO in subjects with COPD, based on self-report of a doctor's diagnosis of asthma before the age of 40. This strict ACO definition was refined by the presence of atopy, determined by total IgE >100 IU/ml or at least one positive specific IgE, as was a broader definition of ACO based on any asthma history. Subjects with all three ACO definitions were younger (mean age 60.0-61.3), were more commonly African American (36.8-44.2%), had a higher exacerbation frequency (1.0-1.2 in the past year), and had more airway wall thickening on quantitative analysis of chest CT scans. Among subjects with clinical ACO, 37-46% did not have atopy; these subjects had more emphysema on chest CT scan. Based on associations with exacerbations and CT airway disease, IgE did not clearly improve the clinical definition of ACO. However, IgE measurements could be used to subdivide subjects with atopic and non-atopic ACO, who might have different biologic mechanisms and potential treatments. | 10.15326/jcopdf.7.1.2019.0138 | medrxiv |
10.1101/19001727 | Transcriptional survey of peripheral blood links lower oxygen saturation during sleep with reduced expressions of CD1D and RAB20 that is reversed by CPAP therapy | Sofer, T.; Li, R.; Joehanes, R.; Lin, H.; Gower, A. C.; Wang, H.; Kurniansyah, N.; Cade, B. E.; Li, J.; Williams, S.; Mehra, R.; Patel, S. R.; Quan, S. F.; Liu, Y.; Rotter, J. I.; Rich, S. S.; Spira, A.; Levy, D.; Gharib, S. A.; Redline, S.; Gottlieb, D. J. | Tamar Sofer | Harvard Medical School, Brigham and Women\'s Hospital | 2019-08-12 | 1 | PUBLISHAHEADOFPRINT | cc_no | genetic and genomic medicine | https://www.medrxiv.org/content/early/2019/08/12/19001727.source.xml | Sleep Disordered Breathing (SDB) is associated with a wide range of physiological changes due, in part, to the influence of hypoxemia during sleep. We studied gene expression in peripheral blood mononuclear cells in association with three measures of SDB: Apnea Hypopnea Index (AHI); average oxyhemoglobin saturation (avgO2) during sleep; and minimum oxyhemoglobin saturation (minO2) during sleep. We performed discovery association analysis in two community-based studies, the Framingham Offspring Study (FOS; N=571) and the Multi-Ethnic Study of Atherosclerosis (MESA; N = 580). An association with false discovery rate (FDR) q < 0.05 in one study was considered "replicated" if a p < 0.05 was observed in the other study. Those genes that replicated across MESA and FOS, or with FDR q < 0.05 in meta-analysis, were used for analysis of gene expression in the blood of 15 participants from the Heart Biomarkers In Apnea Treatment (HeartBEAT) trial. HeartBEAT participants had moderate or severe obstructive sleep apnea (OSA) and were studied pre- and post-treatment (three months) with continuous positive airway pressure (CPAP). We also performed Gene Set Enrichment Analysis (GSEA) on all traits and cohort analyses. Twenty-two genes were associated with SDB traits in both MESA and FOS. Of these, lower expression of CD1D and RAB20 was associated with lower avgO2 in MESA and FOS. CPAP treatment increased the expression of these genes in HeartBEAT participants. Immunity and inflammation pathways were up-regulated in subjects with lower avgO2; i.e., in those with a more severe SDB phenotype (MESA), whereas immuno-inflammatory processes were down-regulated in response to CPAP treatment (HeartBEAT).
One Sentence SummaryWe studied the association of gene expression in blood with obstructive sleep apnea traits, including oxygen saturation during sleep, and identified mechanisms that are reversed by treatment with Continuous Positive Airway Pressure. | 10.1016/j.ebiom.2020.102803 | medrxiv |
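The sleep-disordered-breathing transcriptome record above declares discovery hits at a false discovery rate threshold (FDR q < 0.05). The snippet below is a minimal, self-contained sketch of the Benjamini-Hochberg adjustment, one standard way such q-values are obtained; the p-values in the example are arbitrary and the function name is illustrative only.

```python
# Minimal Benjamini-Hochberg sketch for an FDR q < 0.05 threshold.
import numpy as np

def bh_qvalues(pvals):
    """Return Benjamini-Hochberg adjusted p-values (q-values)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)        # p * m / rank
    # Enforce monotonicity from the largest p-value downward, then cap at 1.
    q_sorted = np.minimum.accumulate(ranked[::-1])[::-1].clip(max=1.0)
    q = np.empty(n)
    q[order] = q_sorted
    return q

pvals = [0.0004, 0.03, 0.002, 0.2, 0.8, 0.011]          # arbitrary example p-values
q = bh_qvalues(pvals)
print([f"{v:.4f}" for v in q])
print("Significant at FDR q < 0.05:", (q < 0.05).sum(), "of", len(pvals))
```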
10.1101/19003780 | A flexible formula for incorporating distributive concerns into cost-effectiveness analyses: priority weights | Haaland, O. A.; Lindemark, F.; Johansson, K. A. | Øystein Ariansen Haaland | Bergen center for ethics and priority setting (BCEPS), Department of global public health and primary care, University of Bergen, Norway | 2019-08-12 | 1 | PUBLISHAHEADOFPRINT | cc_by_nd | health economics | https://www.medrxiv.org/content/early/2019/08/12/19003780.source.xml | Cost effectiveness analyses (CEAs) are widely used to evaluate the opportunity cost of health care investments. However, few functions that take equity concerns into account are available for such CEA methods, and these concerns are therefore at risk of being disregarded. Among the functions that have been developed, most focus on the distribution of health gains, as opposed to the distribution of lifetime health. This is despite the fact that there are good reasons to give higher priority to individuals and groups with a low health adjusted life expectancy. Also, an even distribution of health gains may imply an uneven distribution of lifetime health.
We develop a systematic and explicit approach that allows for the inclusion of lifetime health concerns in CEAs, by creating a new priority weight function, PW = α + (t - γ)·C·e^(-β·(t - γ)), where t is the health measure. PW has several desirable properties. First, it is continuous and smooth, ensuring that people with similar health characteristics are treated alike. Second, it is flexible regarding shape and outcome measure, so that a broad range of values may be modelled.
Third, the coefficients have distinct roles. This allows for the easy manipulation of the PW's shape. In order to demonstrate how PW may be applied, we used data from a previous study and estimated the coefficients of PW based on two approaches.
The first considers the mean weights assigned by the respondents:
PWmean = -0.42 + (t + 22.2)·0.27·exp(-0.031·(t + 22.2)).
The second considers the median weights:
PWmedian = 0.79 + (t + 8.85)·0.17·exp(-0.053·(t + 8.85)).
This illustrates that our framework allows for the estimation of PWs based on empirical data. | 10.1371/journal.pone.0223866 | medrxiv |
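Read together with the fitted versions above, the general formula has the form PW(t) = a + (t - γ)·C·exp(-β·(t - γ)); the symbol of the leading constant did not survive extraction, so "a" is used as a placeholder here. A minimal Python sketch of the function and of the two fitted weights reported in the abstract (γ is negative in both fits, which is why they appear as (t + ...)):

```python
import numpy as np

def priority_weight(t, a, gamma, C, beta):
    """PW(t) = a + (t - gamma) * C * exp(-beta * (t - gamma)), with t the health measure."""
    return a + (t - gamma) * C * np.exp(-beta * (t - gamma))

# Fitted weights quoted in the abstract (mean- and median-based respondent weights).
pw_mean   = lambda t: priority_weight(t, a=-0.42, gamma=-22.2, C=0.27, beta=0.031)
pw_median = lambda t: priority_weight(t, a=0.79,  gamma=-8.85, C=0.17, beta=0.053)

print(pw_mean(30.0), pw_median(30.0))  # example: priority weights at a health measure of 30
```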
10.1101/19003533 | Examining Gender Differences in Opioid, Benzodiazepine, and Antibiotic Prescribing | Hamamsy, T.; Tamang, S.; Lembke, A. | Tymor Hamamsy | New York University | 2019-08-12 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | health informatics | https://www.medrxiv.org/content/early/2019/08/12/19003533.source.xml | While gender differences have been explored across several areas of medicine, our study is the first to present a systematic comparison of drug prescribing behavior of male and female providers, including opioid, benzodiazepine, and antibiotic prescribing. Our work is of particular relevance to the current opioid crisis and other iatrogenic harms related to injudicious prescribing. Our objective is to explore prescribing differences between male and female providers across medical specialties and for different prescription drug categories in Medicare Part D. To this end, we performed a descriptive, retrospective study of 1.13 million medical providers who made drug claims to Medicare Part D in 2016, analyzing by gender, specialty, and drug category. We found that male providers across diverse specialties prescribe significantly more medications, including opioids, benzodiazepines, and antibiotics than female providers by volume, cost, and per patient. These observed gender differences in prescribing, while agnostic to the quality of care provided, nonetheless inform the design of prevention strategies that seek to reduce iatrogenic harms related to prescribing. | null | medrxiv |
10.1101/19003871 | Does the early nutritional environment and in utero HIV exposure, in the absence of infant infection, impact infant development? | White, M.; Duffley, E.; Feucht, U. D.; Rossouw, T.; Connor, K. L. | Kristin L Connor | Department of Health Sciences, Carleton University | 2019-08-12 | 1 | PUBLISHAHEADOFPRINT | cc_no | hiv aids | https://www.medrxiv.org/content/early/2019/08/12/19003871.source.xml | Malnutrition and infectious disease often coexist in socially inequitable contexts. Malnutrition in the perinatal period adversely affects offspring development and lifelong non-communicable disease risk. Less is known about the effects of infectious disease exposure during critical windows of development and health, and links between in utero HIV-exposure in the absence of neonatal infection, perinatal nutritional environments, and infant development are poorly defined. In a pilot feasibility study at Kalafong Hospital, Pretoria, South Africa, we aimed to better understand relationships between maternal HIV infection and the early nutritional environment of in utero HIV exposed uninfected (HEU) infants. We also undertook exploratory analyses to investigate relationships between food insecurity and infant development. Mother-infant dyads were recruited after delivery and followed until 12 weeks postpartum. Household food insecurity, nutrient intakes and dietary diversity scores did not differ between mothers living with or without HIV. Maternal reports of food insecurity were associated with lower maternal nutrient intakes 12 weeks postpartum, and in infants, higher brain-to-body weight ratio at birth and 12 weeks of age, and attainment of fewer large movement and play activities milestones at 12 weeks of age, irrespective of maternal HIV status. Reports of worry about food runout were associated with increased risk of stunting for HEU, but not unexposed, uninfected infants. Our findings suggest that food insecurity, in a vulnerable population, adversely affects maternal nutritional status and infant development. In utero exposure to HIV may further perpetuate these effects, which has implications for early child development and lifelong human capital. | null | medrxiv |
10.1101/19003871 | How do maternal HIV infection and the early nutritional environment influence the development of infants exposed to HIV in utero? | White, M.; Duffley, E.; Feucht, U. D.; Rossouw, T.; Connor, K. L. | Kristin L Connor | Department of Health Sciences, Carleton University | 2019-09-14 | 2 | PUBLISHAHEADOFPRINT | cc_no | hiv aids | https://www.medrxiv.org/content/early/2019/09/14/19003871.source.xml | Malnutrition and infectious disease often coexist in socially inequitable contexts. Malnutrition in the perinatal period adversely affects offspring development and lifelong non-communicable disease risk. Less is known about the effects of infectious disease exposure during critical windows of development and health, and links between in utero HIV-exposure in the absence of neonatal infection, perinatal nutritional environments, and infant development are poorly defined. In a pilot feasibility study at Kalafong Hospital, Pretoria, South Africa, we aimed to better understand relationships between maternal HIV infection and the early nutritional environment of in utero HIV exposed uninfected (HEU) infants. We also undertook exploratory analyses to investigate relationships between food insecurity and infant development. Mother-infant dyads were recruited after delivery and followed until 12 weeks postpartum. Household food insecurity, nutrient intakes and dietary diversity scores did not differ between mothers living with or without HIV. Maternal reports of food insecurity were associated with lower maternal nutrient intakes 12 weeks postpartum, and in infants, higher brain-to-body weight ratio at birth and 12 weeks of age, and attainment of fewer large movement and play activities milestones at 12 weeks of age, irrespective of maternal HIV status. Reports of worry about food runout were associated with increased risk of stunting for HEU, but not unexposed, uninfected infants. Our findings suggest that food insecurity, in a vulnerable population, adversely affects maternal nutritional status and infant development. In utero exposure to HIV may further perpetuate these effects, which has implications for early child development and lifelong human capital. | null | medrxiv |
10.1101/19003616 | Depletion-of-susceptibles bias in influenza vaccine waning studies: how to ensure robust results | Lipsitch, M.; Goldstein, E. M.; Ray, G. T.; Fireman, B. | Marc Lipsitch | Harvard Chan School of Public Health | 2019-08-12 | 1 | PUBLISHAHEADOFPRINT | cc_by | epidemiology | https://www.medrxiv.org/content/early/2019/08/12/19003616.source.xml | Vaccine effectiveness (VE) studies are subject to biases due to depletion of at-risk persons or of highly susceptible persons at different rates from different groups (depletion-of-susceptibles bias), a problem that can also lead to biased estimates of waning effectiveness, including spurious inference of waning when none exists. An alternative study design to identify waning is to study only vaccinated persons, and compare for each day the incidence in persons with earlier or later dates of vaccination. Prior studies suggested under what conditions this alternative would yield correct estimates of waning. Here we define the depletion-of-susceptibles process formally and show mathematically that for influenza vaccine waning studies, a randomized trial or corresponding observational study that compares incidence at a specific calendar time among individuals vaccinated at different times before the influenza season begins will not be vulnerable to depletion-of-susceptibles bias in its inference of waning under the null hypothesis that none exists, and will - if waning does actually occur - underestimate the extent of waning. Such a design is thus robust in the sense that a finding of waning in that inference framework reflects actual waning of vaccine-induced immunity. We recommend such a design for future studies of waning, whether observational or randomized. | 10.1017/S0950268819001961 | medrxiv |
10.1101/19003657 | Impact of pediatric influenza vaccination on antibiotic resistance in England and Wales | Chae, C.; Davies, N. G.; Jit, M.; Atkins, K. E. | Nicholas G Davies | London School of Hygiene and Tropical Medicine | 2019-08-12 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/08/12/19003657.source.xml | Vaccines against viral infections have been proposed to reduce antibiotic prescribing and thereby help control resistant bacterial infections. However, by combining published data sources, we predict that pediatric live attenuated influenza vaccination in England and Wales will not have a major impact upon antibiotic consumption or health burdens of resistance. | 10.3201/eid2601.191110 | medrxiv |
10.1101/19003772 | Associations of childcare type, age at start, and intensity with body mass index trajectories from 10 to 42 years of age in the 1970 British Cohort Study | Costa, S.; Bann, D.; Benjamin-Neelon, S. E.; Adams, J.; Johnson, W. | Silvia Costa | Loughborough University | 2019-08-12 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/08/12/19003772.source.xml | Background: Attending childcare is related to greater childhood obesity risk, but there are few long-term follow-up studies. We aimed to examine the associations of childcare type, duration, and intensity with BMI trajectories from ages 10-42 years.
Methods: The sample comprised 8234 individuals in the 1970 British Cohort Study, who had data on childcare attendance (no, yes), type (formal, informal), duration (4-5, 3-3.99, 0-2.99 years old when started), and intensity (1, 2, 3, 4-5 days/week) reported at age five years and 32563 BMI observations. Multilevel linear spline models were used to estimate the association of each exposure with the sample-average BMI trajectory, with covariate adjustment. A combined duration and intensity exposure was also examined.
Results: Childcare attendance and type were not strongly related to BMI trajectories. Among participants who attended childcare 1-2 days a week, those who started when 3-3.99 years old had a 0.197 (-0.004, 0.399) kg/m2 higher BMI at age 10 years than those who started when 4-5 years old, and those who started when 0-2.99 years old had a 0.289 (0.049, 0.529) kg/m2 higher BMI. A similar dose-response pattern for intensity was observed when holding duration constant. By age 42 years, individuals who started childcare at age 0-2.99 years and attended 3-5 days/week had a 1.356 kg/m2 (0.637, 2.075) higher BMI than individuals who started at age 4-5 years and attended 1-2 days/week.
Conclusions: Children who start childcare earlier and/or attend more frequently may have greater long-term obesity risk. | 10.1111/ijpo.12644 | medrxiv |
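Multilevel linear spline models of the kind described above can be fitted with standard mixed-model software. A minimal sketch using statsmodels, assuming a hypothetical long-format file with one row per BMI measurement; the file name, column names, and knot ages are illustrative choices, not the study's specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bmi_long.csv")  # hypothetical columns: id, age, bmi, intensity, start_age

# Piecewise-linear ("hinge") terms so the BMI slope can change at each knot age.
for k in (16, 23, 30, 36):
    df[f"age_k{k}"] = np.clip(df["age"] - k, 0, None)

model = smf.mixedlm(
    "bmi ~ age + age_k16 + age_k23 + age_k30 + age_k36 + C(intensity) + C(start_age)",
    data=df,
    groups=df["id"],  # random intercept for each participant's repeated measurements
)
print(model.fit().summary())
```

Interacting the exposure indicators with the spline terms would allow the exposure to alter the shape of the trajectory rather than only its level, which is closer to the age-specific contrasts reported in the abstract.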
10.1101/19003061 | Prevalence of diabetic foot ulcer and its association with duration of illness and residence in Ethiopia: a systematic review and meta-analysis. | Mulugeta, H.; Wagnew, F.; Zeleke, H.; Tesfaye, B.; Amha, H.; Leshargie, C. T.; Biresaw, H.; Dessie, G.; Belay, Y. A.; Habtewold, T. D. | Henok Mulugeta | Debre Markos University | 2019-08-13 | 1 | PUBLISHAHEADOFPRINT | cc_no | endocrinology | https://www.medrxiv.org/content/early/2019/08/13/19003061.source.xml | Background: Diabetic foot ulcer (DFU), a devastating complication of diabetes mellitus, is a major public health problem and one of the leading reasons for hospital admission, amputations, and even death among diabetic patients in Ethiopia. Despite its catastrophic health consequences, the national burden of diabetic foot ulcer remains unknown in Ethiopia. Hence, the objective of this systematic review and meta-analysis was to estimate the national prevalence of diabetic foot ulcer and investigate its association with duration of illness and patient residence among diabetic patients.
Methods: We searched the PubMed, Google Scholar, Cochrane Library, CINAHL, EMBASE, and PsycINFO databases for studies of diabetic foot ulcer prevalence published from inception up to June 30, 2019. The quality of each article was assessed using a modified version of the Newcastle-Ottawa Scale for cross-sectional studies. All statistical analyses were done using STATA version 14 software for Windows, and meta-analysis was carried out using a random-effects method. The pooled national prevalence of diabetic foot ulcers was presented using a forest plot.
Results: A total of 10 studies with 3,029 diabetic patients were included. The pooled national prevalence of diabetic foot ulcers among Ethiopian diabetic patients was 11.27% (95% CI 7.22, 15.31%, I2=94.6). Duration of illness (OR: 3.91, 95%CI 2.03, 7.52, I2=63.4%) and patient residence (OR: 3.40, 95%CI 2.09, 5.54, I2=0.0%) were significantly associated with diabetic foot ulcer.
Conclusion: In Ethiopia, at least one out of ten diabetic patients had a diabetic foot ulcer. Healthcare policymakers (FMoH) need to improve the standard of diabetic care and should design effective preventive strategies to improve health care delivery for people with diabetes and reduce the risk of foot ulceration. | null | medrxiv |
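The pooled prevalence and I² quoted above come from a random-effects meta-analysis. A minimal DerSimonian-Laird sketch in Python; the per-study proportions and sample sizes below are made up for illustration, since the abstract does not report them:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling; returns the pooled estimate, its SE, tau^2, and I^2 (%)."""
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)
    dfree = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - dfree) / c)
    w_star = 1.0 / (variances + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = 100.0 * max(0.0, (q - dfree) / q) if q > 0 else 0.0
    return pooled, se, tau2, i2

# Illustrative study-level prevalences (proportions) and sample sizes, not the review's data.
p = np.array([0.08, 0.13, 0.10, 0.15, 0.07])
n = np.array([250, 300, 280, 320, 400])
print(dersimonian_laird(p, p * (1 - p) / n))
```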
10.1101/19004705 | Generalisability of Results from UK Biobank: Comparison With a Pooling of 18 Cohort Studies | Batty, G. D.; Gale, C.; Kivimaki, M.; Deary, I.; Bell, S. | George David Batty | University College London | 2019-08-13 | 1 | PUBLISHAHEADOFPRINT | cc_no | epidemiology | https://www.medrxiv.org/content/early/2019/08/13/19004705.source.xml | Background: The UK Biobank cohort study, whose primary goal is understanding disease aetiology, has become a much-utilised and influential scientific resource. The low response to the original survey (5.5%) has, however, led to debate as to the generalisability of its findings. We therefore compared risk factor-disease estimations in UK Biobank with those from 18 nationally representative studies with conventional response rates.
Methods: We used individual-level baseline data from UK Biobank (N=502,655) and a pooling of data from the Health Surveys for England (HSE) and the Scottish Health Surveys (SHS), comprising 18 studies and 89,895 individuals (mean response rate 68%). Both study populations were aged 40-69 years at study induction and linked to national cause-specific mortality registries.
Findings: Despite a typically more favourable risk factor profile and lower mortality rates in UK Biobank participants relative to the HSE-SHS consortium, risk factor-endpoint associations were directionally consistent between studies, albeit with some heterogeneity in magnitude. For instance, for cardiovascular disease mortality, the age- and sex-adjusted hazard ratio (95% confidence interval) for ever having smoked cigarettes (versus never) was 2.04 (1.87, 2.24) in UK Biobank and 1.99 (1.78, 2.23) in HSE-SHS, yielding a ratio of hazard ratios close to unity (1.02; 0.88, 1.19; p-value 0.76). For hypertension (versus none), corresponding results were again in the same direction but with a lower effect size in UK Biobank (1.89; 1.69, 2.11) than in HSE-SHS (2.56; 2.20, 2.98), producing a ratio of hazard ratios below unity (0.74; 0.62, 0.89; p-value 0.001). A similar pattern of observations was made for risk factors (smoking, obesity, educational attainment, and physical stature) in relation to different cancer presentations and suicide, whereby the ratios of hazard ratios ranged from 0.57 (0.40, 0.81) to 1.07 (0.42, 2.74).
Interpretation: Despite a low response rate, aetiological findings from UK Biobank appear to be generalisable to England and Scotland. | null | medrxiv |
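The "ratio of hazard ratios" comparisons quoted above can be reproduced approximately from the published estimates and 95% confidence intervals alone, assuming the two cohorts are independent. A sketch using the smoking/cardiovascular-mortality figures from the abstract:

```python
import numpy as np
from scipy.stats import norm

def ratio_of_hazard_ratios(hr1, ci1, hr2, ci2):
    """Compare two independent hazard ratios; SEs are back-calculated from their 95% CIs."""
    z975 = norm.ppf(0.975)
    se1 = (np.log(ci1[1]) - np.log(ci1[0])) / (2 * z975)
    se2 = (np.log(ci2[1]) - np.log(ci2[0])) / (2 * z975)
    diff = np.log(hr1) - np.log(hr2)
    se = np.sqrt(se1 ** 2 + se2 ** 2)
    lo, hi = np.exp(diff - z975 * se), np.exp(diff + z975 * se)
    p = 2 * (1 - norm.cdf(abs(diff) / se))
    return np.exp(diff), (lo, hi), p

# UK Biobank 2.04 (1.87, 2.24) vs HSE-SHS 1.99 (1.78, 2.23) for ever-smoking and CVD mortality:
print(ratio_of_hazard_ratios(2.04, (1.87, 2.24), 1.99, (1.78, 2.23)))
# roughly 1.02 (0.89, 1.18), p ~ 0.74, close to the 1.02 (0.88, 1.19), p = 0.76 reported above
```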
10.1101/19004317 | Efficient prediction of vitamin B deficiencies via machine-learning using routine blood test results in patients with intense psychiatric episode | Tamune, H.; Ukita, J.; Hamamoto, Y.; Tanaka, H.; Narushima, K.; Yamamoto, N. | Hidetaka Tamune | Department of Neuropsychiatry, Tokyo Metropolitan Tama Medical Center, Tokyo, Japan | 2019-08-13 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | psychiatry and clinical psychology | https://www.medrxiv.org/content/early/2019/08/13/19004317.source.xml | Background: Vitamin B deficiency is common worldwide and may lead to psychiatric symptoms; however, the epidemiology of vitamin B deficiency in patients with an intense psychiatric episode has rarely been examined. Moreover, vitamin deficiency testing is costly and time-consuming, which hampers efforts to effectively rule out vitamin deficiency-induced psychiatric symptoms. In this study, we aimed to clarify the epidemiology of these deficiencies and efficiently predict them using machine-learning models from patient characteristics and routine blood test results that can be obtained within one hour.
Methods: Over a 2-year period, we reviewed 497 consecutive patients deemed to be at imminent risk of seriously harming themselves or others. Machine-learning models were trained to predict each deficiency from age, sex, and 29 routine blood test results.
Results: We found that 112 (22.5%), 80 (16.1%), and 72 (14.5%) patients had vitamin B1, vitamin B12, and folate (vitamin B9) deficiency, respectively. The machine-learning models also generalized well to unseen data; areas under the receiver operating characteristic curves for the validation dataset (i.e. the dataset not used for training the models) were 0.716, 0.599, and 0.796, respectively. The Gini importances from these models provided further evidence of a relationship between these vitamins and the complete blood count, while also indicating a hitherto rarely considered, potential association between these vitamins and alkaline phosphatase (ALP) or thyroid stimulating hormone (TSH).
Discussion: This study demonstrates that machine-learning can efficiently predict some vitamin deficiencies in patients with active psychiatric symptoms, based on the largest cohort to date of patients with an intense psychiatric episode. The prediction method may expedite risk stratification and clinical decision-making regarding whether replacement therapy should be prescribed. Further research should validate the method's external generalizability in other clinical settings and clarify whether interventions based on it can improve patient care and cost-effectiveness. | 10.3389/fpsyt.2019.01029 | medrxiv |
10.1101/19004317 | Efficient prediction of vitamin B deficiencies via machine-learning using routine blood test results in patients with intense psychiatric episode | Tamune, H.; Ukita, J.; Hamamoto, Y.; Tanaka, H.; Narushima, K.; Yamamoto, N. | Hidetaka Tamune | Department of Neuropsychiatry, Tokyo Metropolitan Tama Medical Center, Tokyo, Japan | 2019-11-13 | 2 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | psychiatry and clinical psychology | https://www.medrxiv.org/content/early/2019/11/13/19004317.source.xml | Background: Vitamin B deficiency is common worldwide and may lead to psychiatric symptoms; however, the epidemiology of vitamin B deficiency in patients with an intense psychiatric episode has rarely been examined. Moreover, vitamin deficiency testing is costly and time-consuming, which hampers efforts to effectively rule out vitamin deficiency-induced psychiatric symptoms. In this study, we aimed to clarify the epidemiology of these deficiencies and efficiently predict them using machine-learning models from patient characteristics and routine blood test results that can be obtained within one hour.
Methods: Over a 2-year period, we reviewed 497 consecutive patients deemed to be at imminent risk of seriously harming themselves or others. Machine-learning models were trained to predict each deficiency from age, sex, and 29 routine blood test results.
Results: We found that 112 (22.5%), 80 (16.1%), and 72 (14.5%) patients had vitamin B1, vitamin B12, and folate (vitamin B9) deficiency, respectively. The machine-learning models also generalized well to unseen data; areas under the receiver operating characteristic curves for the validation dataset (i.e. the dataset not used for training the models) were 0.716, 0.599, and 0.796, respectively. The Gini importances from these models provided further evidence of a relationship between these vitamins and the complete blood count, while also indicating a hitherto rarely considered, potential association between these vitamins and alkaline phosphatase (ALP) or thyroid stimulating hormone (TSH).
Discussion: This study demonstrates that machine-learning can efficiently predict some vitamin deficiencies in patients with active psychiatric symptoms, based on the largest cohort to date of patients with an intense psychiatric episode. The prediction method may expedite risk stratification and clinical decision-making regarding whether replacement therapy should be prescribed. Further research should validate the method's external generalizability in other clinical settings and clarify whether interventions based on it can improve patient care and cost-effectiveness. | 10.3389/fpsyt.2019.01029 | medrxiv |
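The prediction task above maps onto a standard tabular machine-learning workflow. A minimal sketch, assuming a hypothetical CSV of age, sex, the 29 routine blood tests, and a binary deficiency label; the random forest is chosen here only because the abstract reports Gini importances, and is not a claim about the authors' exact model:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("routine_labs.csv")    # hypothetical file
y = df["b1_deficient"]                  # hypothetical binary label column
X = pd.get_dummies(df.drop(columns=["b1_deficient"]), drop_first=True)  # encode sex, etc.

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

print("validation AUC:", roc_auc_score(y_va, clf.predict_proba(X_va)[:, 1]))
gini = pd.Series(clf.feature_importances_, index=X.columns).sort_values(ascending=False)
print(gini.head(10))  # which routine tests carry the most signal for the deficiency
```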
10.1101/19003426 | Efficacy of a spatial repellent for control of malaria in Indonesia: a cluster-randomized controlled trial. | Syafruddin, D.; Asih, P. B.; Rozi, I. E.; Permana, D. H.; Hidayati, A. P. N.; Syahrani, L.; Zubaidah, S.; Sidik, D.; Bangs, M. J.; Bogh, C.; Liu, F.; Eugenio, E. C.; Hendrickson, J.; Burton, T. A.; Baird, J. K.; Collins, F. H.; Grieco, J. P.; Lobo, N. F.; Achee, N. L. | Nicole L Achee | Department of Biological Sciences, Eck Institute for Global Health, University of Notre Dame, USA | 2019-08-13 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | public and global health | https://www.medrxiv.org/content/early/2019/08/13/19003426.source.xml | A cluster randomized, double-blinded, placebo-controlled trial was conducted to estimate protective efficacy of a spatial repellent against malaria infection at Sumba, Indonesia. Following radical cure in 1,341 children aged ≥6 months to ≤5 years in 24 clusters, households were given transfluthrin or placebo passive emanators (devices designed to release vaporized chemical). Monthly blood screening and biweekly human-landing mosquito catches were performed during a 10-month baseline (June 2015 to March 2016) and a 24-month intervention period (April 2016 to April 2018). Screening detected 164 first-time infections and an accumulative total of 459 infections in 667 subjects in placebo-control households; and 134 first-time and 253 accumulative total infections among 665 subjects in active intervention households. The 24-cluster protective effects of 27.7% and 31.3%, for time to first-event and overall (total new) infections, respectively, were not statistically significant. Purportedly, this was due in part to zero to low incidence in some clusters, undermining the ability to detect a protective effect. Subgroup analysis of 19 clusters where at least one infection occurred during baseline showed 33.3% (p-value = 0.083) and 40.9% (p-value = 0.0236, statistically significant at the 1-sided 5% significance level) protective effects against first infection and overall infections, respectively. Among 12 moderate- to high-risk clusters, a statistically significant decrease in infection with the intervention was detected (60% protective efficacy). Primary entomological analysis of impact was inconclusive. While this study suggests spatial repellents prevent malaria, additional evidence is required to demonstrate the product class provides an operationally feasible and effective means of reducing malaria transmission. | 10.4269/ajtmh.19-0554 | medrxiv |
10.1101/19003426 | Efficacy of a spatial repellent for control of malaria in Indonesia: a cluster-randomized controlled trial. | Syafruddin, D.; Asih, P. B.; Rozi, I. E.; Permana, D. H.; Hidayati, A. P. N.; Syahrani, L.; Zubaidah, S.; Sidik, D.; Bangs, M. J.; Bogh, C.; Liu, F.; Eugenio, E. C.; Hendrickson, J.; Burton, T. A.; Baird, J. K.; Collins, F. H.; Grieco, J. P.; Lobo, N. F.; Achee, N. L. | Nicole L Achee | Department of Biological Sciences, Eck Institute for Global Health, University of Notre Dame, USA | 2019-08-31 | 2 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | public and global health | https://www.medrxiv.org/content/early/2019/08/31/19003426.source.xml | A cluster randomized, double-blinded, placebo-controlled trial was conducted to estimate protective efficacy of a spatial repellent against malaria infection at Sumba, Indonesia. Following radical cure in 1,341 children aged ≥6 months to ≤5 years in 24 clusters, households were given transfluthrin or placebo passive emanators (devices designed to release vaporized chemical). Monthly blood screening and biweekly human-landing mosquito catches were performed during a 10-month baseline (June 2015 to March 2016) and a 24-month intervention period (April 2016 to April 2018). Screening detected 164 first-time infections and an accumulative total of 459 infections in 667 subjects in placebo-control households; and 134 first-time and 253 accumulative total infections among 665 subjects in active intervention households. The 24-cluster protective effects of 27.7% and 31.3%, for time to first-event and overall (total new) infections, respectively, were not statistically significant. Purportedly, this was due in part to zero to low incidence in some clusters, undermining the ability to detect a protective effect. Subgroup analysis of 19 clusters where at least one infection occurred during baseline showed 33.3% (p-value = 0.083) and 40.9% (p-value = 0.0236, statistically significant at the 1-sided 5% significance level) protective effects against first infection and overall infections, respectively. Among 12 moderate- to high-risk clusters, a statistically significant decrease in infection with the intervention was detected (60% protective efficacy). Primary entomological analysis of impact was inconclusive. While this study suggests spatial repellents prevent malaria, additional evidence is required to demonstrate the product class provides an operationally feasible and effective means of reducing malaria transmission. | 10.4269/ajtmh.19-0554 | medrxiv |
10.1101/19003426 | Efficacy of a spatial repellent for control of malaria in Indonesia: a cluster-randomized controlled trial. | Syafruddin, D.; Asih, P. B.; Rozi, I. E.; Permana, D. H.; Hidayati, A. P. N.; Syahrani, L.; Zubaidah, S.; Sidik, D.; Bangs, M. J.; Bogh, C.; Liu, F.; Eugenio, E. C.; Hendrickson, J.; Burton, T. A.; Baird, J. K.; Collins, F. H.; Grieco, J. P.; Lobo, N. F.; Achee, N. L. | Nicole L Achee | Department of Biological Sciences, Eck Institute for Global Health, University of Notre Dame, USA | 2019-11-05 | 3 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | public and global health | https://www.medrxiv.org/content/early/2019/11/05/19003426.source.xml | A cluster randomized, double-blinded, placebo-controlled trial was conducted to estimate protective efficacy of a spatial repellent against malaria infection at Sumba, Indonesia. Following radical cure in 1,341 children aged ≥6 months to ≤5 years in 24 clusters, households were given transfluthrin or placebo passive emanators (devices designed to release vaporized chemical). Monthly blood screening and biweekly human-landing mosquito catches were performed during a 10-month baseline (June 2015 to March 2016) and a 24-month intervention period (April 2016 to April 2018). Screening detected 164 first-time infections and an accumulative total of 459 infections in 667 subjects in placebo-control households; and 134 first-time and 253 accumulative total infections among 665 subjects in active intervention households. The 24-cluster protective effects of 27.7% and 31.3%, for time to first-event and overall (total new) infections, respectively, were not statistically significant. Purportedly, this was due in part to zero to low incidence in some clusters, undermining the ability to detect a protective effect. Subgroup analysis of 19 clusters where at least one infection occurred during baseline showed 33.3% (p-value = 0.083) and 40.9% (p-value = 0.0236, statistically significant at the 1-sided 5% significance level) protective effects against first infection and overall infections, respectively. Among 12 moderate- to high-risk clusters, a statistically significant decrease in infection with the intervention was detected (60% protective efficacy). Primary entomological analysis of impact was inconclusive. While this study suggests spatial repellents prevent malaria, additional evidence is required to demonstrate the product class provides an operationally feasible and effective means of reducing malaria transmission. | 10.4269/ajtmh.19-0554 | medrxiv |
10.1101/19003426 | Efficacy of a spatial repellent for control of malaria in Indonesia: a cluster-randomized controlled trial. | Syafruddin, D.; Asih, P. B.; Rozi, I. E.; Permana, D. H.; Hidayati, A. P. N.; Syahrani, L.; Zubaidah, S.; Sidik, D.; Bangs, M. J.; Bogh, C.; Liu, F.; Eugenio, E. C.; Hendrickson, J.; Burton, T. A.; Baird, J. K.; Collins, F. H.; Grieco, J. P.; Lobo, N. F.; Achee, N. L. | Nicole L Achee | Department of Biological Sciences, Eck Institute for Global Health, University of Notre Dame, USA | 2020-03-29 | 4 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | public and global health | https://www.medrxiv.org/content/early/2020/03/29/19003426.source.xml | A cluster randomized, double-blinded, placebo-controlled trial was conducted to estimate protective efficacy of a spatial repellent against malaria infection at Sumba, Indonesia. Following radical cure in 1,341 children aged ≥6 months to ≤5 years in 24 clusters, households were given transfluthrin or placebo passive emanators (devices designed to release vaporized chemical). Monthly blood screening and biweekly human-landing mosquito catches were performed during a 10-month baseline (June 2015 to March 2016) and a 24-month intervention period (April 2016 to April 2018). Screening detected 164 first-time infections and an accumulative total of 459 infections in 667 subjects in placebo-control households; and 134 first-time and 253 accumulative total infections among 665 subjects in active intervention households. The 24-cluster protective effects of 27.7% and 31.3%, for time to first-event and overall (total new) infections, respectively, were not statistically significant. Purportedly, this was due in part to zero to low incidence in some clusters, undermining the ability to detect a protective effect. Subgroup analysis of 19 clusters where at least one infection occurred during baseline showed 33.3% (p-value = 0.083) and 40.9% (p-value = 0.0236, statistically significant at the 1-sided 5% significance level) protective effects against first infection and overall infections, respectively. Among 12 moderate- to high-risk clusters, a statistically significant decrease in infection with the intervention was detected (60% protective efficacy). Primary entomological analysis of impact was inconclusive. While this study suggests spatial repellents prevent malaria, additional evidence is required to demonstrate the product class provides an operationally feasible and effective means of reducing malaria transmission. | 10.4269/ajtmh.19-0554 | medrxiv |
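For orientation only: protective efficacy in a trial of this kind is one minus the ratio of infection rates between arms. The crude calculation below uses the raw counts quoted in the abstract and deliberately ignores clustering, person-time, and baseline incidence, so it will not reproduce the adjusted 27.7%/31.3% estimates reported by the investigators:

```python
# Crude, unadjusted sketch using the accumulative infection counts quoted in the abstract.
placebo_infections, placebo_subjects = 459, 667
active_infections, active_subjects = 253, 665

rate_ratio = (active_infections / active_subjects) / (placebo_infections / placebo_subjects)
protective_efficacy = 1 - rate_ratio
print(f"crude protective efficacy: {protective_efficacy:.1%}")  # well above the adjusted estimates
```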
10.1101/19004259 | Applications of qualitative grounded theory methodology to investigate hearing loss: Protocol for a qualitative systematic review | Ali, Y. H. K.; Wright, N.; Charnock, D.; Henshaw, H.; Ferguson, M. A.; Hoare, D. J. | Yasmin H.K. Ali | National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, University of Nottingham | 2019-08-13 | 1 | PUBLISHAHEADOFPRINT | cc_no | otolaryngology | https://www.medrxiv.org/content/early/2019/08/13/19004259.source.xml | Introduction: Hearing loss is a chronic condition affecting 11 million individuals in the UK. People with hearing loss regularly experience difficulties interacting in everyday conversations. These difficulties in communication can result in a person with hearing loss withdrawing from social situations and becoming isolated. While hearing loss research has largely deployed quantitative methods to investigate various aspects of the condition, qualitative research is becoming more widespread. Grounded theory is a specific qualitative methodology that has been used to establish novel theories on the experiences of living with hearing loss.
Method and analysis: The aim of this systematic review is to establish how grounded theory has been applied to investigate the psychosocial aspects of hearing loss. Methods are reported according to the Preferred Reporting Items for Systematic Reviews and Meta-analysis Protocols (PRISMA-P) 2015 checklist. Studies included in this review will have applied grounded theory methodology. For a study to be included, it can apply grounded theory as an overarching methodology, or have grounded theory methodology embedded amongst other methodologies. These studies can be in the form of retrospective or prospective studies, before and after comparison studies, RCTs, non-RCTs, cohort studies, prospective observational studies, case-control studies, cross-sectional studies, longitudinal studies, and mixed method studies. Purely quantitative studies, studies that have not applied grounded theory methodology, articles reporting expert opinions, case reports, practice guidelines, case series, conference abstracts, and book chapters will be excluded. Studies included will have adult participants (≥18 years) who are either people with an acquired hearing loss, their family and friends (communication partners), or audiologists. The quality of application of grounded theory in each study will be assessed using the Guideline for Reporting and Evaluating Grounded Theory Research Studies (GUREGT).
Ethics and dissemination: As only secondary data will be used in this systematic review, ethical approval is not required. No other ethical issues are foreseen. The International Prospective Register of Systematic Reviews (http://www.crd.york.ac.uk/PROSPERO) holds the registration record of this systematic review. Findings will be disseminated via peer reviewed publications and at relevant academic conferences. Findings may also be published in relevant professional and third sector newsletters and magazines as appropriate. Data will inform future research and guideline development.
PROSPERO registration number: CRD42019134197
Strengths and limitations of this study:
- This systematic review is the first to provide a comprehensive critique of the use of grounded theory to investigate hearing loss.
- The search strategy was formed in collaboration with an information specialist at the University of Nottingham.
- The PRISMA-P guidelines have directed the considerations and layout of this protocol.
- Because experiences and articulations of hearing loss are influenced by age, only adult (≥18 years) participants (people with hearing loss, communication partners, audiologists) will be considered.
- The search will not include grey literature.
- The studies included will only have samples of individuals with hearing loss, rather than full deafness.
| 10.1136/bmjopen-2019-033537 | medrxiv |
10.1101/19004598 | Accuracy of Medical Billing Data Against the Electronic Health Record in the Measurement of Colorectal Cancer Screening Rates | Rudrapatna, V. A.; Glicksberg, B. S.; Avila, P.; Harding-Theobald, E.; Wang, C.; Butte, A. J. | Atul J Butte | UCSF | 2019-08-13 | 1 | PUBLISHAHEADOFPRINT | cc_no | health informatics | https://www.medrxiv.org/content/early/2019/08/13/19004598.source.xml | Objective: Administrative healthcare data are an attractive source for secondary analysis because of their potential to answer population-health questions. Although these datasets have known susceptibilities to biases, the degree to which these biases can distort measurements like cancer screening rates is not widely appreciated, nor are their causes and possible solutions.
Methods: Using a billing code database derived from our institution's electronic health records (EHR), we estimated the colorectal cancer screening rate of average-risk patients aged 50-74 seen in primary care or gastroenterology clinic in 2016-2017. 200 records (150 unscreened, 50 screened) were sampled to quantify the accuracy against manual review.
Results: Out of 4,611 patients, an analysis of billing data suggested a 61% screening rate. Manual review revealed a positive predictive value of 96% (86-100%), negative predictive value of 21% (15-29%), and a corrected screening rate of 85% (81-90%). Most false negatives occurred due to exams performed outside the scope of the database - both within and outside of our institution - but 21% of false negatives fell within the database's scope. False positives occurred due to incomplete exams and inadequate bowel preparation. Reasons for screening failure include ordered but incomplete exams (48%), lack of or incorrect documentation by primary care (29%) including incorrect screening intervals (13%), and patients declining screening (13%).
Conclusions: Although analytics on administrative data are commonly validated by comparison to independent datasets, comparing our naive estimate to the CDC estimate (~60%) would have been misleading. Therefore, regular data audits using the complete EHR are critical to improve screening rates and measure improvement.
Study Highlights
WHAT IS KNOWN
- Medical billing data might be useful for measuring colon cancer screening rates but are bias-prone and difficult to validate.
- The degree to which these biases may skew the results of simple population-level analytics is not widely appreciated, nor are their causes and possible solutions.
WHAT IS NEW HERE
- Billing data from the health record do not accurately capture unscreened patients. Some reasons were predictable (screening outside the system or prior to software implementation) but others were not.
- The common practice of external validation would have been falsely reassuring for these data. The naive estimate of screening rates matches the CDC estimate (61%); the true rate was 85%.
- Periodic data audits using the full EHR are critical to continue to improve screening rates and monitor improvements accurately and at scale.
| 10.1136/bmjoq-2019-000856 | medrxiv |
10.1101/19004598 | Accuracy of Medical Billing Data Against the Electronic Health Record in the Measurement of Colorectal Cancer Screening Rates | Rudrapatna, V. A.; Glicksberg, B. S.; Avila, P.; Harding-Theobald, E.; Wang, C.; Butte, A. J. | Atul J Butte | UCSF | 2019-08-22 | 2 | PUBLISHAHEADOFPRINT | cc_no | health informatics | https://www.medrxiv.org/content/early/2019/08/22/19004598.source.xml | Objective: Administrative healthcare data are an attractive source for secondary analysis because of their potential to answer population-health questions. Although these datasets have known susceptibilities to biases, the degree to which these biases can distort measurements like cancer screening rates is not widely appreciated, nor are their causes and possible solutions.
Methods: Using a billing code database derived from our institution's electronic health records (EHR), we estimated the colorectal cancer screening rate of average-risk patients aged 50-74 seen in primary care or gastroenterology clinic in 2016-2017. 200 records (150 unscreened, 50 screened) were sampled to quantify the accuracy against manual review.
Results: Out of 4,611 patients, an analysis of billing data suggested a 61% screening rate. Manual review revealed a positive predictive value of 96% (86-100%), negative predictive value of 21% (15-29%), and a corrected screening rate of 85% (81-90%). Most false negatives occurred due to exams performed outside the scope of the database - both within and outside of our institution - but 21% of false negatives fell within the database's scope. False positives occurred due to incomplete exams and inadequate bowel preparation. Reasons for screening failure include ordered but incomplete exams (48%), lack of or incorrect documentation by primary care (29%) including incorrect screening intervals (13%), and patients declining screening (13%).
Conclusions: Although analytics on administrative data are commonly validated by comparison to independent datasets, comparing our naive estimate to the CDC estimate (~60%) would have been misleading. Therefore, regular data audits using the complete EHR are critical to improve screening rates and measure improvement.
Study Highlights
WHAT IS KNOWN
- Medical billing data might be useful for measuring colon cancer screening rates but are bias-prone and difficult to validate.
- The degree to which these biases may skew the results of simple population-level analytics is not widely appreciated, nor are their causes and possible solutions.
WHAT IS NEW HERE
- Billing data from the health record do not accurately capture unscreened patients. Some reasons were predictable (screening outside the system or prior to software implementation) but others were not.
- The common practice of external validation would have been falsely reassuring for these data. The naive estimate of screening rates matches the CDC estimate (61%); the true rate was 85%.
- Periodic data audits using the full EHR are critical to continue to improve screening rates and monitor improvements accurately and at scale.
| 10.1136/bmjoq-2019-000856 | medrxiv |
10.1101/19004390 | Prospective Trial Registration and Publication Rates of Randomized Clinical Trials in Digital Health: A Cross Sectional Analysis of Global Trial Registries | Al-Durra, M.; Nolan, R. P.; Seto, E.; Cafazzo, J. | Mustafa Al-Durra | Centre for Global eHealth Innovation, Techna Institute, University Health Network | 2019-08-13 | 1 | PUBLISHAHEADOFPRINT | cc_no | health informatics | https://www.medrxiv.org/content/early/2019/08/13/19004390.source.xml | Registration of clinical trials was introduced to mitigate the risk of publication and selective reporting bias in the realm of clinical research. The prevalence of publication and selective reporting bias in trial results has been evidenced through scientific research. This bias may compromise the ethical and methodological conduct in the design, implementation and dissemination of evidence-based healthcare interventions. Principal investigators of digital health trials may be overwhelmed with challenges that are unique to digital health research, such as the usability of the intervention under test, and participant recruitment and retention challenges, which may contribute to non-publication rates and affect prospective trial registration. Our primary research objective was to examine the prevalence of prospective registration and publication rates in digital health trials. We included 417 trials that enrolled participants in 2012 and were registered in any of the seventeen WHO registries. The prospective registration and publication rates were 38.4% and 65.5%, respectively. We identified a statistically significant (P<.001) "selective registration bias": 95.7% of trials published within a year after registration had been registered retrospectively. We reported a statistically significant relationship (P=.003) between prospective registration and funding sources, with industry-funded trials having the lowest compliance with prospective registration, at 14.3%. The lowest non-publication rates were in the Middle East (26.7%) and Europe (28%), and the highest were in Asia (56.5%) and the U.S. (42.5%). We found statistically significant differences (P<.001) between trial location and funding sources, with the highest percentage of industry-funded trials in Asia (17.3%) and the U.S. (3.3%). | 10.1177/20552076221090034 | medrxiv |
10.1101/19004713 | Childhood coordination and survival up to six decades later: extended follow-up of participants in the National Child Development Study | Batty, G. D.; Deary, I.; Hamer, M.; Ritchie, S.; Bann, D. | George David Batty | University College London | 2019-08-13 | 1 | PUBLISHAHEADOFPRINT | cc_no | epidemiology | https://www.medrxiv.org/content/early/2019/08/13/19004713.source.xml | Background: Poorer performance on standard tests of motor coordination in children has emerging links with sedentary behaviour, obesity, and functional capacity in later life. These observations are suggestive of an as-yet untested association of coordination with health outcomes.
Objective: To examine the association of performance on a series of psychomotor coordination tests in childhood with mortality up to six decades later.
Design, Setting, and Participants: The National Child Development Study (1958 birth cohort study) is a prospective cohort study based on a nationally representative sample of births from England, Scotland and Wales. A total of 17,415 individuals had their gross and fine motor psychomotor coordination assessed using nine tests at 11 and 16 years of age.
Main outcome and measure: All-cause mortality as ascertained from a vital status registry and survey records.
Results: Mortality surveillance between 7 and 58 years of age in an analytical sample of 17,336 men and women yielded 1,090 deaths. After adjustment for sex, higher scores on seven of the nine childhood coordination tests were associated with a lower risk of mortality in a stepwise manner. After further statistical control for early life socioeconomic, health, cognitive, and developmental factors, relations at conventional levels of statistical significance remained for three tests: ball catching at age 11 (hazard ratio; 95% confidence interval for 0-8 versus 10 catches: 1.56; 1.21, 2.01), match-picking at age 11 (>50 seconds versus 0-36: 1.33; 1.03, 1.70), and hopping at age 16 years (very unsteady versus very steady: 1.29; 1.02, 1.64).
Conclusion and Relevance: The apparent predictive utility of early life psychomotor coordination requires replication.
Key points
Question: What is the association of performance on a series of psychomotor coordination tests in childhood with mortality up to six decades later?
Findings: After taking into account multiple confounding factors, lower performance on three gross and fine motor skills tests in childhood was associated with shorter survival over six decades.
Meaning: These findings require replication in other contexts and using complementary observational approaches. | 10.1001/jamanetworkopen.2020.4031 | medrxiv |
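Hazard ratios of the kind reported above come from Cox proportional-hazards regression. A minimal sketch with the lifelines package, assuming a hypothetical analysis file with follow-up time, a death indicator, one coordination test score, and a few covariates (all column names are illustrative):

```python
import pandas as pd
from lifelines import CoxPHFitter

# hypothetical columns: followup_years, died, ball_catching_11, sex, childhood_ses, cognition
df = pd.read_csv("ncds_analysis.csv")

cph = CoxPHFitter()
cph.fit(
    df[["followup_years", "died", "ball_catching_11", "sex", "childhood_ses", "cognition"]],
    duration_col="followup_years",
    event_col="died",
)
cph.print_summary()  # the exp(coef) column gives covariate-adjusted hazard ratios with 95% CIs
```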
10.1101/19003897 | PathFlowAI: A Convenient High-Throughput Workflow for Preprocessing, Deep Learning Analytics and Interpretation in Digital Pathology | Levy, J.; Salas, L. A.; Christensen, B. C.; Sriharan, A.; Vaickus, L. J. | Joshua Levy | Program in Quantitative Biomedical Sciences, Geisel School of Medicine at Dartmouth, Lebanon, NH 03756 ; Department of Epidemiology, Geisel School of Medicine a | 2019-08-13 | 1 | PUBLISHAHEADOFPRINT | cc_no | pathology | https://www.medrxiv.org/content/early/2019/08/13/19003897.source.xml | The diagnosis of disease often requires analysis of a biopsy. Many diagnoses depend not only on the presence of certain features but on their location within the tissue. Recently, a number of deep learning diagnostic aids have been developed to classify digitized biopsy slides. Clinical workflows often involve processing of more than 500 slides per day. But, clinical use of deep learning diagnostic aids would require a preprocessing workflow that is cost-effective, flexible, scalable, rapid, interpretable, and transparent. Here, we present such a workflow, optimized using Dask and mixed precision training via APEX, capable of handling any patch-level or slide level classification and prediction problem. The workflow uses a flexible and fast preprocessing and deep learning analytics pipeline, incorporates model interpretation and has a highly storage-efficient audit trail. We demonstrate the utility of this package on the analysis of a prototypical anatomic pathology specimen, liver biopsies for evaluation of hepatitis from a prospective cohort. The preliminary data indicate that PathFlowAI may become a cost-effective and time-efficient tool for clinical use of Artificial Intelligence (AI) algorithms. | 10.1142/9789811215636_0036 | medrxiv |
10.1101/19003897 | PathFlowAI: A High-Throughput Workflow for Preprocessing, Deep Learning and Interpretation in Digital Pathology | Levy, J.; Salas, L. A.; Christensen, B. C.; Sriharan, A.; Vaickus, L. J. | Joshua Levy | Program in Quantitative Biomedical Sciences, Geisel School of Medicine at Dartmouth, Lebanon, NH 03756 ; Department of Epidemiology, Geisel School of Medicine a | 2019-10-25 | 2 | PUBLISHAHEADOFPRINT | cc_no | pathology | https://www.medrxiv.org/content/early/2019/10/25/19003897.source.xml | The diagnosis of disease often requires analysis of a biopsy. Many diagnoses depend not only on the presence of certain features but on their location within the tissue. Recently, a number of deep learning diagnostic aids have been developed to classify digitized biopsy slides. Clinical workflows often involve processing of more than 500 slides per day. But, clinical use of deep learning diagnostic aids would require a preprocessing workflow that is cost-effective, flexible, scalable, rapid, interpretable, and transparent. Here, we present such a workflow, optimized using Dask and mixed precision training via APEX, capable of handling any patch-level or slide level classification and prediction problem. The workflow uses a flexible and fast preprocessing and deep learning analytics pipeline, incorporates model interpretation and has a highly storage-efficient audit trail. We demonstrate the utility of this package on the analysis of a prototypical anatomic pathology specimen, liver biopsies for evaluation of hepatitis from a prospective cohort. The preliminary data indicate that PathFlowAI may become a cost-effective and time-efficient tool for clinical use of Artificial Intelligence (AI) algorithms. | 10.1142/9789811215636_0036 | medrxiv |
10.1101/19004010 | A systematic review of medical and clinical research landscapes and quality in Malaysia and Indonesia: the review protocol | Chew, B. H.; Lim, P. Y.; Lee, S. W. H.; Devaraj, N. K.; Ismail, A. H.; Shamsuddin, N. H.; Jahn Kassim, P. S.; Abdul Rashid, A.; Fernandez, A.; Muhamad Zakuan, N.; Teoh, S. H.; Abdullah, A. R.; Ali, H.; Abdul Manap, A. H.; Mohamad, F.; Widyahening, I. S. | Boon How Chew | Department of Family Medicine, Faculty of Medicine and Health Science, Universiti Putra Malaysia, Malaysia | 2019-08-13 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/08/13/19004010.source.xml | Background: Research landscapes and quality may change in many ways, and increasing amounts of research waste have been reported. Efforts to improve research performance will need good data on the profiles and performance of past research.
Purpose: To describe the characteristics and quality of clinical and biomedical research in Malaysia and Indonesia.
Methods: A search will be conducted in PubMed, EMBASE, CINAHL and PsycINFO to identify published clinical and biomedical research from 1962 to 2017 from Malaysia and/or Indonesia.
An additional search will also be conducted in MyMedR (for the Malaysian team only). Studies found will be independently screened by a team of reviewers, relevant information will be extracted, and the quality of articles will be assessed. As part of quality control, another reviewer will independently assess 10-20% of the articles extracted. In Phase 1, the profiles of the published research will be reported descriptively. In Phase 2, a research quality screening tool will be validated to assess research quality based on three major domains: relevance, credibility of the methods, and usefulness of the results. Associations between the research characteristics and quality will be analysed. The independent effect of each determinant will be quantified in multivariable regression analysis. Longitudinal trends of the research profiles and of the health conditions studied in different settings will be explored. Depending on the availability of resources, this review project may proceed according to the different clinical and biomedical disciplines in sequence.
Discussion: Results of this study will serve as the baseline data for future evaluation and for within-country and between-country comparison. This review may also provide stakeholders with informative results on the evolution of research conduct and performance to date. The longitudinal and prospective trends of the research profiles and quality could suggest improvement initiatives. Additionally, identifying health conditions or areas in different settings that are over- or under-studied may help future prioritization of research initiatives and resources. | null | medrxiv |
10.1101/19003301 | An Efficient and Robust Approach to Detect Auditory Evoked Responses using Adaptive Averaging | Wang, H.; Ding, X.; Li, B.; Wang, X.; Huang, Z.; Hua, Y.; Wu, H. | Yunfeng Hua | Ear Institute, Shanghai Jiao Tong University School of Medicine | 2019-08-13 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | otolaryngology | https://www.medrxiv.org/content/early/2019/08/13/19003301.source.xml | Auditory brainstem response (ABR) serves as an objective indication of auditory perception at a given sound level and is nowadays widely used in hearing function assessment. Despite efforts for automation over decades, hearing threshold determination by machine algorithm remains unreliable and thereby still relies on visual identification by trained personnel. Here, we described a procedure for automatic threshold determination that can be used in both animal and human ABR tests. The method terminates level averaging of ABR recordings upon detection of a time-locked waveform through cross-correlation analysis. The threshold level was then indicated by a dramatic increase in the sweep numbers required to produce "qualified" level averaging. A good match was obtained between the algorithm outcome and the human readouts. Moreover, the method varies the level averaging based on the cross-correlation, thereby adapting to the signal-to-noise ratio of single sweep recordings. These features empower a robust and fully automated ABR test. | 10.1016/j.isci.2021.103285 | medrxiv |
10.1101/19003301 | An Efficient and Robust Approach to Detect Auditory Evoked Brainstem Responses using Adaptive Averaging | Wang, H.; Li, B.; Ding, X.; Wang, X.; Huang, Z.; Wu, H.; Hua, Y. | Yunfeng Hua | Ear Institute, Shanghai Jiao Tong University School of Medicine | 2020-01-03 | 2 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | otolaryngology | https://www.medrxiv.org/content/early/2020/01/03/19003301.source.xml | Auditory brainstem response (ABR) serves as an objective indication of auditory perception at a given sound level and is nowadays widely used in hearing function assessment. Despite efforts for automation over decades, hearing threshold determination by machine algorithm remains unreliable and thereby still relies on visual identification by trained personnel. Here, we described a procedure for automatic threshold determination that can be used in both animal and human ABR tests. The method terminates level averaging of ABR recordings upon detection of a time-locked waveform through cross-correlation analysis. The threshold level was then indicated by a dramatic increase in the sweep numbers required to produce "qualified" level averaging. A good match was obtained between the algorithm outcome and the human readouts. Moreover, the method varies the level averaging based on the cross-correlation, thereby adapting to the signal-to-noise ratio of single sweep recordings. These features empower a robust and fully automated ABR test. | 10.1016/j.isci.2021.103285 | medrxiv |
10.1101/19003301 | An Efficient and Robust Approach to Detect Auditory Evoked Brainstem Responses by Progressive Sub-average Cross-covariance Analysis | Wang, H.; Li, B.; Ding, X.; Wang, X.; Huang, Z.; Hua, Y.; Wu, H. | Yunfeng Hua | Ear Institute, Shanghai Jiao Tong University School of Medicine | 2020-04-14 | 3 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | otolaryngology | https://www.medrxiv.org/content/early/2020/04/14/19003301.source.xml | Auditory brainstem response (ABR) serves as an objective indication of auditory perception at a given sound level and is nowadays widely used in hearing function assessment. Despite efforts for automation over decades, hearing threshold determination by machine algorithm remains unreliable and thereby still relies on visual identification by trained personnel. Here, we described a procedure for automatic threshold determination that can be used in both animal and human ABR tests. The method terminates level averaging of ABR recordings upon detection of a time-locked waveform through cross-correlation analysis. The threshold level was then indicated by a dramatic increase in the sweep numbers required to produce "qualified" level averaging. A good match was obtained between the algorithm outcome and the human readouts. Moreover, the method varies the level averaging based on the cross-correlation, thereby adapting to the signal-to-noise ratio of single sweep recordings. These features empower a robust and fully automated ABR test. | 10.1016/j.isci.2021.103285 | medrxiv |
10.1101/19003301 | Automated Threshold Determination of Auditory Evoked Brainstem Responses by Cross-correlation Analysis with Varying Sweep Number | Wang, H.; Li, B.; Ding, X.; Wang, X.; Huang, Z.; Hua, Y.; Song, L.; Wu, H. | Yunfeng Hua | Ear Institute, Shanghai Jiao Tong University School of Medicine | 2020-07-02 | 4 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | otolaryngology | https://www.medrxiv.org/content/early/2020/07/02/19003301.source.xml | Auditory brainstem response (ABR) serves as an objective indication of auditory perception at a given sound level and is nowadays widely used in hearing function assessment. Despite efforts for automation over decades, hearing threshold determination by machine algorithm remains unreliable and thereby still relies on visual identification by trained personnel. Here, we described a procedure for automatic threshold determination that can be used in both animal and human ABR tests. The method terminates level averaging of ABR recordings upon detection of a time-locked waveform through cross-correlation analysis. The threshold level was then indicated by a dramatic increase in the sweep numbers required to produce "qualified" level averaging. A good match was obtained between the algorithm outcome and the human readouts. Moreover, the method varies the level averaging based on the cross-correlation, thereby adapting to the signal-to-noise ratio of single sweep recordings. These features empower a robust and fully automated ABR test. | 10.1016/j.isci.2021.103285 | medrxiv |
10.1101/19003301 | Real-time Hearing Threshold Determination of Auditory Brainstem Responses by Cross-correlation Analysis | Wang, H.; Li, B.; Lu, Y.; Han, K.; Sheng, H.; Zhou, J.; Qi, Y.; Wang, X.; Huang, Z.; Song, L.; Hua, Y. | Yunfeng Hua | Ear Institute, Shanghai Jiao Tong University School of Medicine | 2021-05-13 | 5 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | otolaryngology | https://www.medrxiv.org/content/early/2021/05/13/19003301.source.xml | Auditory brainstem response (ABR) serves as an objective indication of auditory perception at a given sound level and is nowadays widely used in hearing function assessment. Despite efforts for automation over decades, hearing threshold determination by machine algorithm remains unreliable and thereby still relies on visual identification by trained personnel. Here, we described a procedure for automatic threshold determination that can be used in both animal and human ABR tests. The method terminates level averaging of ABR recordings upon detection of a time-locked waveform through cross-correlation analysis. The threshold level was then indicated by a dramatic increase in the sweep numbers required to produce "qualified" level averaging. A good match was obtained between the algorithm outcome and the human readouts. Moreover, the method varies the level averaging based on the cross-correlation, thereby adapting to the signal-to-noise ratio of single sweep recordings. These features empower a robust and fully automated ABR test. | 10.1016/j.isci.2021.103285 | medrxiv |
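The adaptive-averaging idea described in the ABR records above (stop accumulating sweeps once a time-locked waveform is detected by cross-correlation, then read the hearing threshold off a jump in the required sweep count) can be illustrated with a minimal sketch. This is not the authors' code: the correlation threshold, batch size, and function name are assumptions, and zero-lag correlation between two interleaved sub-averages stands in for the paper's cross-correlation criterion.

```python
import numpy as np

def sweeps_needed(sweeps, r_stop=0.7, batch=50, max_sweeps=4000):
    """Accumulate sweeps until two interleaved sub-averages correlate above r_stop.

    sweeps: array of shape (n_sweeps, n_samples) holding single-level ABR recordings.
    Returns the number of sweeps used; hitting max_sweeps means no stable waveform emerged.
    """
    limit = min(len(sweeps), max_sweeps)
    for n in range(batch, limit + 1, batch):
        odd = sweeps[1:n:2].mean(axis=0)      # sub-average of odd-numbered sweeps
        even = sweeps[0:n:2].mean(axis=0)     # sub-average of even-numbered sweeps
        r = np.corrcoef(odd, even)[0, 1]      # zero-lag correlation between sub-averages
        if r >= r_stop:                       # time-locked waveform detected
            return n
    return max_sweeps

# The hearing threshold is then read off the stimulus level at which the required
# sweep count jumps sharply between neighbouring levels.
```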
10.1101/19004184 | Lower germline mutation rates in young adults predict longer lives and longer reproductive lifespans | Cawthon, R. M.; Meeks, H. D.; Sasani, T. A.; Smith, K. R.; Kerber, R. A.; O'Brien, E.; Quinlan, A. R.; Jorde, L. B. | Richard M Cawthon | University of Utah, Dept. of Human Genetics | 2019-08-13 | 1 | PUBLISHAHEADOFPRINT | cc_no | epidemiology | https://www.medrxiv.org/content/early/2019/08/13/19004184.source.xml | BACKGROUNDAnalysis of sequenced genomes from large three-generation families allows de novo mutations identified in Generation II individuals to be attributed to each of their parents germlines in Generation I. Because germline mutations increase with age, we hypothesized that they directly limit the duration of childbearing in women, and if correlated with mutation accumulation in somatic tissues, also reflect systemic aging in both sexes. Here we test whether the germline mutation rates of Generation I individuals when they were young adults predict their remaining survival, as well as the womens reproductive lifespans.
METHODSGermline autosomal mutation counts in 122 Generation I individuals (61 women, 61 men) from 41 three-generation Utah CEPH families were converted to germline mutation rates by normalizing each subjects number of mutations to the callable portion of their genome. Age at death, cause of death, all-site cancer incidence, and reproductive histories were provided by the Utah Population Database, Cancer Registry, and Utah Genetic Reference Project. Fertility analyses were restricted to the 53 women whose age at last birth (ALB) was at least 30 years, the approximate age when the decline in female fertility begins. Cox proportional hazard regression models were used to test the association of age-adjusted mutation rates (AAMRs) with aging-related outcomes. Linear regression analysis was used to estimate the age when adult germline mutation accumulation rates are established.
FINDINGSQuartiles of increasing AAMRs were associated with increasing all-cause mortality rates in both sexes combined (test for trend, p=0.009); subjects in the top quartile of AAMRs experienced more than twice the mortality of bottom quartile subjects (hazard ratio [HR], 2.07; 95% confidence interval [CI], 1.21-3.56; p=0.008; median survival difference = 4.7 years). Women with higher AAMRs had significantly fewer live births and a younger ALB. The analyses also indicate that adult germline mutation accumulation rates are established in adolescence, and that later menarche in women may delay mutation accumulation.
INTERPRETATIONParental-age-adjusted germline mutation rates in healthy young adults may provide a measure of both reproductive and systemic aging. Puberty may induce the establishment of adult mutation accumulation rates, just when DNA repair genes expression levels are known to begin their lifelong decline.
FUNDINGNIH R01AG038797 and R21AG054962 (to R.M.C.); University of Utah Program in Personalized Health (to H.D.M.); NIH T32GM007464 (to T.A.S.); NIH R01AG022095 (to K.R.S.); NIH R01HG006693, R01HG009141, and R01GM124355 (to A.R.Q.); NIH GM118335 and GM059290 (to L.B.J.); NIH P30CA2014 (to the Utah Population Database, a.k.a. the UPDB); National Center for Research Resources Public Health Services grant M01RR00064 (to the Huntsman General Clinical Research Center, University of Utah); National Center for Advancing Translational Sciences NIH grant UL1TR002538 (to the University of Utahs Center for Clinical and Translational Science); Howard Hughes Medical Institute funding (to Ray White); gifts from the W.M. Keck Foundation (to Stephen M. Prescott and M.F.L.) and from the George S. and Delores Dore Eccles Foundation (to the University of Utah) that supported the Utah Genetic Reference Project (UGRP). Sequencing of the CEPH samples was funded by the Utah Genome Project, the George S. and Dolores Dore Eccles Foundation, and the H.A. and Edna Benning Foundation. We thank the Pedigree and Population Resource of the Huntsman Cancer Institute, University of Utah (funded in part by the Huntsman Cancer Foundation) for its role in the ongoing collection, maintenance and support of the UPDB. | 10.1038/s41598-020-66867-0 | medrxiv |
10.1101/19004184 | Lower germline mutation rates in young adults predict longer lives and longer reproductive lifespans | Cawthon, R. M.; Meeks, H. D.; Sasani, T. A.; Smith, K. R.; Kerber, R. A.; O'Brien, E.; Quinlan, A. R.; Jorde, L. B. | Richard M Cawthon | University of Utah, Dept. of Human Genetics | 2019-11-05 | 2 | PUBLISHAHEADOFPRINT | cc_no | epidemiology | https://www.medrxiv.org/content/early/2019/11/05/19004184.source.xml | BACKGROUNDAnalysis of sequenced genomes from large three-generation families allows de novo mutations identified in Generation II individuals to be attributed to each of their parents germlines in Generation I. Because germline mutations increase with age, we hypothesized that they directly limit the duration of childbearing in women, and if correlated with mutation accumulation in somatic tissues, also reflect systemic aging in both sexes. Here we test whether the germline mutation rates of Generation I individuals when they were young adults predict their remaining survival, as well as the womens reproductive lifespans.
METHODSGermline autosomal mutation counts in 122 Generation I individuals (61 women, 61 men) from 41 three-generation Utah CEPH families were converted to germline mutation rates by normalizing each subjects number of mutations to the callable portion of their genome. Age at death, cause of death, all-site cancer incidence, and reproductive histories were provided by the Utah Population Database, Cancer Registry, and Utah Genetic Reference Project. Fertility analyses were restricted to the 53 women whose age at last birth (ALB) was at least 30 years, the approximate age when the decline in female fertility begins. Cox proportional hazard regression models were used to test the association of age-adjusted mutation rates (AAMRs) with aging-related outcomes. Linear regression analysis was used to estimate the age when adult germline mutation accumulation rates are established.
FINDINGSQuartiles of increasing AAMRs were associated with increasing all-cause mortality rates in both sexes combined (test for trend, p=0.009); subjects in the top quartile of AAMRs experienced more than twice the mortality of bottom quartile subjects (hazard ratio [HR], 2.07; 95% confidence interval [CI], 1.21-3.56; p=0.008; median survival difference = 4.7 years). Women with higher AAMRs had significantly fewer live births and a younger ALB. The analyses also indicate that adult germline mutation accumulation rates are established in adolescence, and that later menarche in women may delay mutation accumulation.
INTERPRETATIONParental-age-adjusted germline mutation rates in healthy young adults may provide a measure of both reproductive and systemic aging. Puberty may induce the establishment of adult mutation accumulation rates, just when DNA repair genes expression levels are known to begin their lifelong decline.
FUNDINGNIH R01AG038797 and R21AG054962 (to R.M.C.); University of Utah Program in Personalized Health (to H.D.M.); NIH T32GM007464 (to T.A.S.); NIH R01AG022095 (to K.R.S.); NIH R01HG006693, R01HG009141, and R01GM124355 (to A.R.Q.); NIH GM118335 and GM059290 (to L.B.J.); NIH P30CA2014 (to the Utah Population Database, a.k.a. the UPDB); National Center for Research Resources Public Health Services grant M01RR00064 (to the Huntsman General Clinical Research Center, University of Utah); National Center for Advancing Translational Sciences NIH grant UL1TR002538 (to the University of Utahs Center for Clinical and Translational Science); Howard Hughes Medical Institute funding (to Ray White); gifts from the W.M. Keck Foundation (to Stephen M. Prescott and M.F.L.) and from the George S. and Delores Dore Eccles Foundation (to the University of Utah) that supported the Utah Genetic Reference Project (UGRP). Sequencing of the CEPH samples was funded by the Utah Genome Project, the George S. and Dolores Dore Eccles Foundation, and the H.A. and Edna Benning Foundation. We thank the Pedigree and Population Resource of the Huntsman Cancer Institute, University of Utah (funded in part by the Huntsman Cancer Foundation) for its role in the ongoing collection, maintenance and support of the UPDB. | 10.1038/s41598-020-66867-0 | medrxiv |
10.1101/19004184 | Germline mutation rates in young adults predict longevity and reproductive lifespan | Cawthon, R. M.; Meeks, H. D.; Sasani, T. A.; Smith, K. R.; Kerber, R. A.; O'Brien, E.; Baird, L.; Dixon, M. M.; Peiffer, A. P.; Leppert, M. F.; Quinlan, A. R.; Jorde, L. B. | Richard M Cawthon | University of Utah, Dept. of Human Genetics | 2020-01-02 | 3 | PUBLISHAHEADOFPRINT | cc_no | epidemiology | https://www.medrxiv.org/content/early/2020/01/02/19004184.source.xml | BACKGROUNDAnalysis of sequenced genomes from large three-generation families allows de novo mutations identified in Generation II individuals to be attributed to each of their parents germlines in Generation I. Because germline mutations increase with age, we hypothesized that they directly limit the duration of childbearing in women, and if correlated with mutation accumulation in somatic tissues, also reflect systemic aging in both sexes. Here we test whether the germline mutation rates of Generation I individuals when they were young adults predict their remaining survival, as well as the womens reproductive lifespans.
METHODSGermline autosomal mutation counts in 122 Generation I individuals (61 women, 61 men) from 41 three-generation Utah CEPH families were converted to germline mutation rates by normalizing each subjects number of mutations to the callable portion of their genome. Age at death, cause of death, all-site cancer incidence, and reproductive histories were provided by the Utah Population Database, Cancer Registry, and Utah Genetic Reference Project. Fertility analyses were restricted to the 53 women whose age at last birth (ALB) was at least 30 years, the approximate age when the decline in female fertility begins. Cox proportional hazard regression models were used to test the association of age-adjusted mutation rates (AAMRs) with aging-related outcomes. Linear regression analysis was used to estimate the age when adult germline mutation accumulation rates are established.
FINDINGSQuartiles of increasing AAMRs were associated with increasing all-cause mortality rates in both sexes combined (test for trend, p=0.009); subjects in the top quartile of AAMRs experienced more than twice the mortality of bottom quartile subjects (hazard ratio [HR], 2.07; 95% confidence interval [CI], 1.21-3.56; p=0.008; median survival difference = 4.7 years). Women with higher AAMRs had significantly fewer live births and a younger ALB. The analyses also indicate that adult germline mutation accumulation rates are established in adolescence, and that later menarche in women may delay mutation accumulation.
INTERPRETATIONParental-age-adjusted germline mutation rates in healthy young adults may provide a measure of both reproductive and systemic aging. Puberty may induce the establishment of adult mutation accumulation rates, just when DNA repair genes expression levels are known to begin their lifelong decline.
FUNDINGNIH R01AG038797 and R21AG054962 (to R.M.C.); University of Utah Program in Personalized Health (to H.D.M.); NIH T32GM007464 (to T.A.S.); NIH R01AG022095 (to K.R.S.); NIH R01HG006693, R01HG009141, and R01GM124355 (to A.R.Q.); NIH GM118335 and GM059290 (to L.B.J.); NIH P30CA2014 (to the Utah Population Database, a.k.a. the UPDB); National Center for Research Resources Public Health Services grant M01RR00064 (to the Huntsman General Clinical Research Center, University of Utah); National Center for Advancing Translational Sciences NIH grant UL1TR002538 (to the University of Utahs Center for Clinical and Translational Science); Howard Hughes Medical Institute funding (to Ray White); gifts from the W.M. Keck Foundation (to Stephen M. Prescott and M.F.L.) and from the George S. and Delores Dore Eccles Foundation (to the University of Utah) that supported the Utah Genetic Reference Project (UGRP). Sequencing of the CEPH samples was funded by the Utah Genome Project, the George S. and Dolores Dore Eccles Foundation, and the H.A. and Edna Benning Foundation. We thank the Pedigree and Population Resource of the Huntsman Cancer Institute, University of Utah (funded in part by the Huntsman Cancer Foundation) for its role in the ongoing collection, maintenance and support of the UPDB. | 10.1038/s41598-020-66867-0 | medrxiv |
10.1101/19004184 | Germline mutation rates in young adults predict longevity and reproductive lifespan | Cawthon, R. M.; Meeks, H. D.; Sasani, T. A.; Smith, K. R.; Kerber, R. A.; O'Brien, E.; Baird, L.; Dixon, M. M.; Peiffer, A. P.; Leppert, M. F.; Quinlan, A. R.; Jorde, L. B. | Richard M Cawthon | University of Utah, Dept. of Human Genetics | 2020-04-16 | 4 | PUBLISHAHEADOFPRINT | cc_no | epidemiology | https://www.medrxiv.org/content/early/2020/04/16/19004184.source.xml | BACKGROUNDAnalysis of sequenced genomes from large three-generation families allows de novo mutations identified in Generation II individuals to be attributed to each of their parents germlines in Generation I. Because germline mutations increase with age, we hypothesized that they directly limit the duration of childbearing in women, and if correlated with mutation accumulation in somatic tissues, also reflect systemic aging in both sexes. Here we test whether the germline mutation rates of Generation I individuals when they were young adults predict their remaining survival, as well as the womens reproductive lifespans.
METHODSGermline autosomal mutation counts in 122 Generation I individuals (61 women, 61 men) from 41 three-generation Utah CEPH families were converted to germline mutation rates by normalizing each subjects number of mutations to the callable portion of their genome. Age at death, cause of death, all-site cancer incidence, and reproductive histories were provided by the Utah Population Database, Cancer Registry, and Utah Genetic Reference Project. Fertility analyses were restricted to the 53 women whose age at last birth (ALB) was at least 30 years, the approximate age when the decline in female fertility begins. Cox proportional hazard regression models were used to test the association of age-adjusted mutation rates (AAMRs) with aging-related outcomes. Linear regression analysis was used to estimate the age when adult germline mutation accumulation rates are established.
FINDINGSQuartiles of increasing AAMRs were associated with increasing all-cause mortality rates in both sexes combined (test for trend, p=0.009); subjects in the top quartile of AAMRs experienced more than twice the mortality of bottom quartile subjects (hazard ratio [HR], 2.07; 95% confidence interval [CI], 1.21-3.56; p=0.008; median survival difference = 4.7 years). Women with higher AAMRs had significantly fewer live births and a younger ALB. The analyses also indicate that adult germline mutation accumulation rates are established in adolescence, and that later menarche in women may delay mutation accumulation.
INTERPRETATIONParental-age-adjusted germline mutation rates in healthy young adults may provide a measure of both reproductive and systemic aging. Puberty may induce the establishment of adult mutation accumulation rates, just when DNA repair genes expression levels are known to begin their lifelong decline.
FUNDINGNIH R01AG038797 and R21AG054962 (to R.M.C.); University of Utah Program in Personalized Health (to H.D.M.); NIH T32GM007464 (to T.A.S.); NIH R01AG022095 (to K.R.S.); NIH R01HG006693, R01HG009141, and R01GM124355 (to A.R.Q.); NIH GM118335 and GM059290 (to L.B.J.); NIH P30CA2014 (to the Utah Population Database, a.k.a. the UPDB); National Center for Research Resources Public Health Services grant M01RR00064 (to the Huntsman General Clinical Research Center, University of Utah); National Center for Advancing Translational Sciences NIH grant UL1TR002538 (to the University of Utahs Center for Clinical and Translational Science); Howard Hughes Medical Institute funding (to Ray White); gifts from the W.M. Keck Foundation (to Stephen M. Prescott and M.F.L.) and from the George S. and Delores Dore Eccles Foundation (to the University of Utah) that supported the Utah Genetic Reference Project (UGRP). Sequencing of the CEPH samples was funded by the Utah Genome Project, the George S. and Dolores Dore Eccles Foundation, and the H.A. and Edna Benning Foundation. We thank the Pedigree and Population Resource of the Huntsman Cancer Institute, University of Utah (funded in part by the Huntsman Cancer Foundation) for its role in the ongoing collection, maintenance and support of the UPDB. | 10.1038/s41598-020-66867-0 | medrxiv |
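As a rough illustration of the analysis described in the germline-mutation records above (mutation counts normalized to the callable genome, age-adjusted, split into quartiles, and related to survival with Cox regression), here is a minimal sketch. Column names are illustrative assumptions, the external lifelines package supplies the Cox model, and this is not the study's code.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter   # external package providing the Cox model

def aamr(df):
    """Age-adjusted mutation rate: mutations per callable base, residualized on
    parental age at conception. Column names are illustrative."""
    rate = df["mutation_count"] / df["callable_bp"]
    slope, intercept = np.polyfit(df["age_at_conception"], rate, 1)
    return rate - (slope * df["age_at_conception"] + intercept)

def cox_on_aamr_quartiles(df):
    """Cox proportional hazards model of survival on AAMR quartile (1 = lowest)."""
    d = df.assign(aamr_q=pd.qcut(aamr(df), 4, labels=[1, 2, 3, 4]).astype(int))
    cph = CoxPHFitter()
    cph.fit(d[["follow_up_years", "died", "aamr_q"]],
            duration_col="follow_up_years", event_col="died")
    return cph.summary   # hazard ratio per quartile increase (test for trend)
```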
10.1101/19004291 | Health Care Provider Compliance with the HIPAA Right of Individual Access: A Scorecard and Survey | McGraw, D. C.; Fitter, N.; Taylor, L. B. | Deven C McGraw | Ciitizen Corporation | 2019-08-13 | 1 | PUBLISHAHEADOFPRINT | cc_no | health policy | https://www.medrxiv.org/content/early/2019/08/13/19004291.source.xml | BackgroundHistorically, patients have had difficulty obtaining copies of their medical records, notwithstanding the legal right to do so. In 2018, a study of 83 top hospitals found discrepancies between those hospitals published information and telephone survey responses regarding their processes for release of records to patients, indicating noncompliance with the HIPAA right of individual access.
ObjectiveAssess state of compliance with the HIPAA right of access across a broader range of health care providers and in the context of real records requests from patients.
MethodsEvaluate the degree of compliance with the HIPAA right of access 1) through telephone surveys of health care institutions regarding release of records to patients and 2) by scoring the responses of a total of 210 health care providers to actual patient record requests against the HIPAA right of access requirements. (51 of those providers were part of an initial cohort of 51 scored for an earlier version of this paper.)
ResultsBased on the scores of responses of 210 health care providers to record requests and the responses of nearly 3000 healthcare institutions to telephone surveys, more than 50% of health care providers are out of compliance with the HIPAA right of access. The most common failure was refusal to send records to patient or patients designee in the form and format requested by the patient, with 86% of noncompliance due to this factor. The number of phone calls required to obtain records in compliance with HIPAA, and the lack of consistency in provider responses to actual requests, makes the records retrieval process a challenging one for patients.
ConclusionsRecent federal proposals prioritize patient access to medical records through certified electronic health record (EHR) technology, but access by patients to their complete clinical records via EHRs is years away. In the meantime, health care providers need to focus more attention on compliance with the HIPAA right of access, including better training of staff on HIPAA requirements. Greater enforcement of the law will help motivate providers to prioritize this issue. | null | medrxiv |
10.1101/19004291 | Health Care Provider Compliance with the HIPAA Right of Individual Access: A Scorecard and Survey | McGraw, D. C.; Fitter, N.; Taylor, L. B. | Deven C McGraw | Ciitizen Corporation | 2019-11-11 | 2 | PUBLISHAHEADOFPRINT | cc_no | health policy | https://www.medrxiv.org/content/early/2019/11/11/19004291.source.xml | BackgroundHistorically, patients have had difficulty obtaining copies of their medical records, notwithstanding the legal right to do so. In 2018, a study of 83 top hospitals found discrepancies between those hospitals published information and telephone survey responses regarding their processes for release of records to patients, indicating noncompliance with the HIPAA right of individual access.
ObjectiveAssess state of compliance with the HIPAA right of access across a broader range of health care providers and in the context of real records requests from patients.
MethodsEvaluate the degree of compliance with the HIPAA right of access 1) through telephone surveys of health care institutions regarding release of records to patients and 2) by scoring the responses of a total of 210 health care providers to actual patient record requests against the HIPAA right of access requirements. (51 of those providers were part of an initial cohort of 51 scored for an earlier version of this paper.)
ResultsBased on the scores of responses of 210 health care providers to record requests and the responses of nearly 3000 healthcare institutions to telephone surveys, more than 50% of health care providers are out of compliance with the HIPAA right of access. The most common failure was refusal to send records to patient or patients designee in the form and format requested by the patient, with 86% of noncompliance due to this factor. The number of phone calls required to obtain records in compliance with HIPAA, and the lack of consistency in provider responses to actual requests, makes the records retrieval process a challenging one for patients.
ConclusionsRecent federal proposals prioritize patient access to medical records through certified electronic health record (EHR) technology, but access by patients to their complete clinical records via EHRs is years away. In the meantime, health care providers need to focus more attention on compliance with the HIPAA right of access, including better training of staff on HIPAA requirements. Greater enforcement of the law will help motivate providers to prioritize this issue. | null | medrxiv |
10.1101/19004150 | Peripheral Biomarkers of Tobacco-Use Disorder: A Systematic Review | Newton, D. F. | Dwight F Newton | University of Toronto | 2019-08-14 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | addiction medicine | https://www.medrxiv.org/content/early/2019/08/14/19004150.source.xml | IntroductionTobacco use disorder (TUD) is a major worldwide healthcare burden resulting in 7 million deaths annually. TUD has few approved cessation aids, all of which are associated with a high rate of relapse within one year. Biomarkers of TUD severity, treatment response, and risk of relapse have high potential clinical utility to identify ideal responders and guide additional treatment resources.
MethodsA MEDLINE search was performed using the terms biomarkers, dihydroxyacetone phosphate, bilirubin, inositol, cotinine, adrenocorticotropic hormone, cortisol, pituitary-adrenal system, homovanillic acid, dopamine, pro-opiomelanocortin, lipids, lipid metabolism all cross-referenced with tobacco-use disorder.
ResultsThe search yielded 424 results, of which 57 met inclusion criteria. The most commonly studied biomarkers were those related to nicotine metabolism, the hypothalamic-pituitary-adrenal (HPA) axis, and cardiovascular (CVD) risk. Nicotine metabolism was most associated with severity of dependence and treatment response, whereas HPA axis and CVD markers showed less robust associations with dependence and relapse risk.
ConclusionsNicotine-metabolite ratio, cortisol, and atherogenicity markers appear to be the most promising lead biomarkers for further investigation, though the body of literature is still preliminary. Longitudinal, repeated-measures studies are required to determine the directionality of the observed associations and determine true predictive power of these biomarkers. Future studies should also endeavour to study populations with comorbid psychiatric disorders to determine differences in utility of certain biomarkers. | 10.24966/AAD-7276/100026 | medrxiv |
10.1101/19003830 | Assessment of Bias in Estimates of Sexual Network Degree using Prospective Cohort Data | Uong, S.; Rosenberg, E. S.; Goodreau, S. M.; Luisi, N.; Sullivan, P. S.; Jenness, S. M. | Samuel M Jenness | Department of Epidemiology, Emory University | 2019-08-14 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/08/14/19003830.source.xml | BackgroundSexual network degree, a count of ongoing partnerships, plays a critical role in the transmission dynamics of human immunodeficiency virus (HIV) and other sexually transmitted infections (STI). Researchers often quantify degree using self-reported cross-sectional data on the day of survey, which may result in bias because of uncertainty about future sexual activity.
MethodsWe evaluated the bias of a cross-sectional degree measure with a prospective cohort study of men who have sex with men (MSM). At baseline, we asked men whether recent sexual partnerships were ongoing. At follow-up, we confirmed the true ongoing status of those baseline partnerships. With logistic regression, we estimated the partnership-level predictors of baseline measure accuracy. With Poisson regression, we estimated the longitudinally confirmed degree as a function of baseline predicted degree.
ResultsAcross partnership types, the baseline ongoing status measure was 70% accurate, with higher negative predictive value (91%) than positive predictive value (39%). Partnership exclusivity and racial pairing were associated with higher accuracy. Baseline degree generally overestimated confirmed degree. Bias, or number of ongoing partners different than predicted at baseline, was -0.28 overall, ranging from -1.91 to -0.41 for MSM with any ongoing partnerships at baseline. Comparing MSM of the same baseline degree, the level of bias was stronger for black compared to white MSM, and for younger compared to older MSM.
ConclusionsResearch studies may overestimate degree when it is quantified cross-sectionally. Adjustment and structured sensitivity analyses may account for bias in studies of HIV or STI prevention interventions. | 10.1097/EDE.0000000000001151 | medrxiv |
10.1101/19003830 | Assessment of Bias in Estimates of Sexual Network Degree using Prospective Cohort Data | Uong, S.; Rosenberg, E. S.; Goodreau, S. M.; Luisi, N.; Sullivan, P. S.; Jenness, S. M. | Samuel M Jenness | Department of Epidemiology, Emory University | 2019-11-22 | 2 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/11/22/19003830.source.xml | BackgroundSexual network degree, a count of ongoing partnerships, plays a critical role in the transmission dynamics of human immunodeficiency virus (HIV) and other sexually transmitted infections (STI). Researchers often quantify degree using self-reported cross-sectional data on the day of survey, which may result in bias because of uncertainty about future sexual activity.
MethodsWe evaluated the bias of a cross-sectional degree measure with a prospective cohort study of men who have sex with men (MSM). At baseline, we asked men whether recent sexual partnerships were ongoing. At follow-up, we confirmed the true ongoing status of those baseline partnerships. With logistic regression, we estimated the partnership-level predictors of baseline measure accuracy. With Poisson regression, we estimated the longitudinally confirmed degree as a function of baseline predicted degree.
ResultsAcross partnership types, the baseline ongoing status measure was 70% accurate, with higher negative predictive value (91%) than positive predictive value (39%). Partnership exclusivity and racial pairing were associated with higher accuracy. Baseline degree generally overestimated confirmed degree. Bias, or number of ongoing partners different than predicted at baseline, was -0.28 overall, ranging from -1.91 to -0.41 for MSM with any ongoing partnerships at baseline. Comparing MSM of the same baseline degree, the level of bias was stronger for black compared to white MSM, and for younger compared to older MSM.
ConclusionsResearch studies may overestimate degree when it is quantified cross-sectionally. Adjustment and structured sensitivity analyses may account for bias in studies of HIV or STI prevention interventions. | 10.1097/EDE.0000000000001151 | medrxiv |
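A minimal sketch of the two models described in the network-degree records above: partnership-level logistic regression for the accuracy of the baseline "ongoing" report, and person-level Poisson regression of longitudinally confirmed degree on baseline degree. All column names (report correctness, exclusivity, racial pairing, degrees) are illustrative assumptions, not the study's variables.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_degree_models(partnerships, men):
    """partnerships: one row per reported partnership; men: one row per participant.
    Column names are illustrative."""
    # Partnership level: was the baseline ongoing/ended report confirmed at follow-up?
    accuracy_model = smf.logit(
        "baseline_report_correct ~ exclusive + same_race_pair",
        data=partnerships).fit(disp=0)
    # Person level: longitudinally confirmed degree vs. degree reported at baseline.
    degree_model = smf.glm("confirmed_degree ~ baseline_degree", data=men,
                           family=sm.families.Poisson()).fit()
    # Bias: negative values mean the cross-sectional measure overstates degree.
    bias = (men["confirmed_degree"] - men["baseline_degree"]).mean()
    return accuracy_model, degree_model, bias
```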
10.1101/19004267 | A comparison of contemporary versus older studies of aspirin for primary prevention | Moriarty, F.; Ebell, M. H. | Mark H Ebell | University of Georgia | 2019-08-15 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc | cardiovascular medicine | https://www.medrxiv.org/content/early/2019/08/15/19004267.source.xml | ObjectiveThis study compares the benefits and harms of aspirin for primary prevention before and after widespread use of statins and colorectal cancer screening.
MethodsWe compared studies of aspirin for primary prevention that recruited patients from 2005 onward with previous individual patient meta-analyses that recruited patients from 1978 to 2002. Data for contemporary studies were synthesized using random-effects models. We report vascular (major adverse cardiovascular events [MACE], myocardial infarction [MI], stroke), bleeding, cancer, and mortality outcomes.
ResultsThe IPD analyses of older studies included 95,456 patients for CV prevention and 25,270 for cancer mortality, while the four newer studies had 61,604 patients. Relative risks for vascular outcomes for older vs newer studies follow: MACE: 0.89 (95% CI 0.83-0.95) vs 0.93 (0.86-0.99); fatal hemorrhagic stroke: 1.73 (1.11-2.72) vs 1.06 (0.66-1.70); any ischemic stroke: 0.86 (0.74-1.00) vs 0.86 (0.75-0.98); any MI: 0.84 (0.77-0.92) vs 0.88 (0.77-1.00); and non-fatal MI: 0.79 (0.71-0.88) vs 0.94 (0.83-1.08). Cancer death was not significantly decreased in newer studies (RR 1.11, 0.92-1.34). Major hemorrhage was significantly increased for both older and newer studies (RR 1.48, 95% CI 1.25-1.76 vs 1.37, 95% CI 1.24-1.53). There was no effect in either group on all-cause mortality, cardiovascular mortality, fatal stroke, or fatal MI.
ConclusionsIn the modern era characterized by widespread statin use and cancer screening, aspirin does not reduce the risk of non-fatal MI or cancer death. There are no mortality benefits and a significant risk of major hemorrhage. Aspirin should no longer be recommended for primary prevention.
Summary of current evidence and what this study adds
What is already known about this subject?
- The cumulative evidence for aspirin suggests a role in the primary prevention of cardiovascular disease, and in reducing cancer incidence and mortality.
- However, most of the trials of aspirin for primary prevention were set in Europe and the United States and recruited patients prior to the year 2000.
- The benefits and harms of aspirin should be considered separately in studies performed in the eras before and after widespread use of statins and colorectal cancer screening.
What does this study add?
- This study provides the most detailed summary of cardiac, stroke, bleeding, mortality and cancer outcomes to date in the literature.
- In trials of aspirin for primary prevention from 2005 onwards, aspirin reduced major adverse cardiovascular events but significantly increased the risk of bleeding, with no benefit for mortality.
- Unlike older studies, there was no reduction in cancer mortality or non-fatal myocardial infarction.
How does this impact on clinical practice?
- Our study suggests aspirin should not be recommended for primary prevention in the modern era. | 10.1093/fampra/cmz080 | medrxiv |
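The random-effects synthesis described in the aspirin record above can be sketched as a small DerSimonian-Laird pooling of log relative risks; the function name and inputs are illustrative, and this is not the authors' analysis code.

```python
import numpy as np

def pool_relative_risks(rr, ci_low, ci_high):
    """DerSimonian-Laird random-effects pooling of per-trial relative risks."""
    y = np.log(np.asarray(rr, dtype=float))
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)   # SE of log RR from the 95% CI
    w = 1.0 / se**2                                        # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                     # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)                # between-trial variance
    w_re = 1.0 / (se**2 + tau2)                            # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    return np.exp([pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])
```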
10.1101/19003947 | Multidrug therapy with terbinafine and itraconazole is not superior to itraconazole alone in current epidemic of altered dermatophytosis in India: A randomized pragmatic trial | Singh, S.; Anchan, V. N.; Chandra, U.; Raheja, R. | Sanjay Singh | Institute of Medical Sciences, Banaras Hindu University, Varanasi, India | 2019-08-15 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | dermatology | https://www.medrxiv.org/content/early/2019/08/15/19003947.source.xml | BackgroundTreatment responsiveness of dermatophytosis has decreased considerably in recent past in India. We compared effectiveness of oral terbinafine daily (Terb) (active control) versus itraconazole daily (Itra) versus terbinafine plus itraconazole daily (TI) versus terbinafine daily plus itraconazole pulse (TIp) in tinea corporis, tinea cruris and tinea faciei in a pragmatic randomized open trial.
MethodsNinety-two microscopically confirmed patients were allocated to Terb (6 mg/kg/day), Itra (5 mg/kg/day), TI (terbinafine 6 mg/kg/day, itraconazole 5 mg/kg/day), or TIp (terbinafine 6 mg/kg/day, itraconazole 10 mg/kg/day for 1 week in 4 weeks) group by concealed block randomization and treated for 8 weeks or cure.
ResultsCure rates were similar at 4 weeks (P=0.768). At 8 weeks, 5 (21.7%), 18 (78.3%), 16 (69.6%), and 16 (69.6%) patients were cured in Terb, Itra, TI, and TIp groups, respectively. All experimental regimens (Itra, TI, TIp) were more effective than Terb (P[≤]0.0027). All experimental regimens had similar effectiveness (P[≥]0.738). Relapse rates 4 and 8 weeks after cure were similar (P=0.869 and 0.314, respectively). Number-needed-to-treat (NNT) was 2 for Itra, 3 for TI, and 3 for TIp.
ConclusionsOral itraconazole given daily (NNT=2) is the most effective treatment and combining it with terbinafine does not increase effectiveness.
One Sentence SummaryCombination of oral terbinafine and itraconazole is not more effective than itraconazole alone in current epidemic of altered dermatophytosis in India. | null | medrxiv |
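The number-needed-to-treat reported in the dermatophytosis record above follows from simple arithmetic on the 8-week cure proportions it gives (terbinafine 5/23 vs. itraconazole 18/23); a worked sketch:

```python
import math

cure_terb = 5 / 23    # terbinafine daily, 21.7% cured at 8 weeks
cure_itra = 18 / 23   # itraconazole daily, 78.3% cured at 8 weeks
arr = cure_itra - cure_terb              # absolute cure-rate difference, ~0.565
nnt = math.ceil(1 / arr)                 # round up by convention
print(f"ARR = {arr:.3f}, NNT = {nnt}")   # NNT = 2, matching the record above
```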
10.1101/19003715 | Elucidating user behaviours in a digital health surveillance system to correct prevalence estimates | Liu, D.; Mitchell, L.; Cope, R. C.; Carlson, S. J.; Ross, J. V. | Dennis Liu | The University of Adelaide | 2019-08-15 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/08/15/19003715.source.xml | Estimating seasonal influenza prevalence is of undeniable public health importance, but remains challenging with traditional datasets due to cost and timeliness. Digital epidemiology has the potential to address this challenge, but can introduce sampling biases that are distinct from traditional systems. In online participatory health surveillance systems, the voluntary nature of the data generating process must be considered to address potential biases in estimates. Here we examine user behaviours in one such platform, FluTracking, from 2011 to 2017. We build a Bayesian model to estimate probabilities of an individual reporting in each week, given their past reporting behaviour, and to infer the weekly prevalence of influenza-like-illness (ILI) in Australia. We show that a model that corrects for user behaviour can substantially affect ILI estimates. The model examined here elucidates several factors, such as the status of having ILI and consistency of prior reporting, that are strongly associated with the likelihood of participating in online health surveillance systems. This framework could be applied to other digital participatory health systems where participation is inconsistent and sampling bias may be of concern. | 10.1016/j.epidem.2020.100404 | medrxiv |
10.1101/19003715 | Elucidating user behaviours in a digital health surveillance system to correct prevalence estimates | Liu, D.; Mitchell, L.; Cope, R. C.; Carlson, S. J.; Ross, J. V. | Dennis Liu | The University of Adelaide | 2020-02-07 | 2 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2020/02/07/19003715.source.xml | Estimating seasonal influenza prevalence is of undeniable public health importance, but remains challenging with traditional datasets due to cost and timeliness. Digital epidemiology has the potential to address this challenge, but can introduce sampling biases that are distinct from traditional systems. In online participatory health surveillance systems, the voluntary nature of the data generating process must be considered to address potential biases in estimates. Here we examine user behaviours in one such platform, FluTracking, from 2011 to 2017. We build a Bayesian model to estimate probabilities of an individual reporting in each week, given their past reporting behaviour, and to infer the weekly prevalence of influenza-like-illness (ILI) in Australia. We show that a model that corrects for user behaviour can substantially affect ILI estimates. The model examined here elucidates several factors, such as the status of having ILI and consistency of prior reporting, that are strongly associated with the likelihood of participating in online health surveillance systems. This framework could be applied to other digital participatory health systems where participation is inconsistent and sampling bias may be of concern. | 10.1016/j.epidem.2020.100404 | medrxiv |
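The correction described in the FluTracking records above (model each user's probability of reporting from past behaviour, then adjust the weekly ILI prevalence) can be approximated with an inverse-probability-weighting sketch. The paper's model is Bayesian; this frequentist analogue, with illustrative column names, only indicates the general idea and is not the authors' code.

```python
import numpy as np
import statsmodels.formula.api as smf

def corrected_weekly_prevalence(week_df):
    """week_df: one row per registered user for a given week, with illustrative columns
    reported (0/1), ili (0/1, observed only for reporters), reported_last_week,
    and past_report_rate (share of previous weeks with a report)."""
    # Model each user's probability of submitting a report this week.
    p_report = smf.logit("reported ~ reported_last_week + past_report_rate",
                         data=week_df).fit(disp=0).predict(week_df)
    responders = week_df["reported"] == 1
    weights = 1.0 / p_report[responders]   # up-weight users who were unlikely to report
    return np.average(week_df.loc[responders, "ili"], weights=weights)
```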
10.1101/19003814 | Measuring Baseline Health with Individual Health-Adjusted Life Expectancy (iHALE) | Johansson, K. A.; Okland, J.-M.; Skaftun, E. K.; Bukhman, G.; Norheim, O. F.; Coates, M. M.; Haaland, O. A. | Øystein Ariansen Haaland | Bergen Centre for Ethics and Priority Setting (BCEPS), Department of Global Public Health and Primary Care, University of Bergen, Norway | 2019-08-15 | 1 | PUBLISHAHEADOFPRINT | cc_by_nd | medical ethics | https://www.medrxiv.org/content/early/2019/08/15/19003814.source.xml | ObjectivesAt any point of time, a persons baseline health is the number of healthy life years they are expected to experience during the course of their lifetime. In this article we propose an equity-relevant health metric, illness-specific individual Health Adjusted Life Expectancy (iHALE), that facilitates comparison of baseline health for individuals at the onset of different medical conditions, and allows for the assessment of which patient groups are worse off. A method for calculating iHALE is presented, and we use this method to rank four conditions in six countries according to several criteria of "worse off" as a proof of concept.
MethodsiHALE measures baseline health at an individual level for specific conditions, and consists of two components: past health (before disease onset) and future expected health (after disease onset). Four conditions (acute myeloid leukemia (AML), acute lymphoid leukemia (ALL), schizophrenia, and epilepsy) are analysed in six countries (Ethiopia, Haiti, China, Mexico, United States and Japan). Data for all countries and for all diseases in 2017 were obtained from the Global Burden of Disease Study database. In order to assess who are the worse off, we focus on four measures: the proportion of affected individuals who are expected to attain less than 20 healthy life years (T20), the 25th and 75th percentiles of healthy life years for affected individuals (Q1 and Q3, respectively), and the average iHALE across all affected individuals.
ResultsEven in settings where average iHALE is similar for two conditions, other measures may vary. One example is AML (average iHALE=58.7, T20=2.1, Q3-Q1=15.3) and ALL (57.7, T20=4.7, Q3-Q1=21.8) in the US. Many illnesses, such as epilepsy, are associated with higher baseline health in high-income settings (average iHALE in Japan=64.3) than in low-income settings (average iHALE in Ethiopia=36.8).
ConclusioniHALE allows for the estimation of the distribution of baseline health of all individuals in a population. Hence, baseline health can be incorporated as an equity consideration in setting priorities for health interventions. | null | medrxiv |
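A minimal sketch of the "worse off" summary measures named in the iHALE record above (mean iHALE, the share attaining fewer than 20 healthy life years, and the 25th/75th percentiles), applied to an illustrative vector of individual healthy life years:

```python
import numpy as np

def worse_off_measures(ihale_values):
    """ihale_values: healthy life years for each affected individual (illustrative input)."""
    x = np.asarray(ihale_values, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    return {
        "mean_iHALE": x.mean(),
        "T20": np.mean(x < 20),   # share expected to attain fewer than 20 healthy life years
        "Q1": q1,
        "Q3": q3,
        "IQR": q3 - q1,
    }
```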
10.1101/19003848 | Diet-derived fruit and vegetable metabolites suggest sex-specific mechanisms conferring protection against osteoporosis in humans | Mangano, K. M.; Noel, S. E.; Lai, C.-Q.; Christensen, J. J.; Ordovas, J. M.; Dawson-Hughes, B.; Tucker, K. L.; Parnell, L. D. | Kelsey M Mangano | University of Massachusetts, Lowell | 2019-08-15 | 1 | PUBLISHAHEADOFPRINT | cc0 | nutrition | https://www.medrxiv.org/content/early/2019/08/15/19003848.source.xml | The impact of nutrition on the metabolic profile of osteoporosis is incompletely characterized. The objective of this cross-sectional study was to disentangle the association of fruit and vegetable (FV) intakes with osteoporosis prevalence. Dietary, anthropometric and blood plasma metabolite data were examined from the Boston Puerto Rican Osteoporosis Study, a cohort of 600 individuals (age 46-79yr). High FV intake was protective against osteoporosis prevalence. Associations of 525 plasma metabolites were assessed with fruit and vegetable intake, and separately with osteoporosis status. Several biological processes were affiliated with the FV-associating metabolites, including caffeine metabolism, carnitines and fatty acids, and glycerophospholipids. For osteoporosis-associated metabolites, important processes were steroid hormone biosynthesis in women, and branched-chain amino acid metabolism in men. In all instances, the metabolite patterns differed greatly between sexes, arguing for a stratified nutrition approach in recommending FV intakes to improve bone health. Factors derived from principal components analysis of the FV intakes were correlated with the osteoporosis-associated metabolites, with high intake of dark leafy greens and berries/melons appearing protective in both sexes. These data warrant investigation into whether increasing intakes of dark leafy greens, berries and melons causally affect bone turnover and BMD among adults at risk for osteoporosis via sex-specific metabolic pathways, and how gene-diet interactions alter these sex-specific metabolomic-osteoporosis links. | 10.1016/j.bone.2020.115780 | medrxiv |
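The factor derivation mentioned in the record above (principal components analysis of fruit and vegetable intakes, correlated with osteoporosis-associated metabolites) can be sketched as follows; array shapes, names, and the number of factors are assumptions, not the study's code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def fv_factor_correlations(fv_intakes, metabolites, n_factors=4):
    """fv_intakes: (subjects x FV food groups); metabolites: (subjects x metabolites)."""
    fv = StandardScaler().fit_transform(np.asarray(fv_intakes, dtype=float))
    scores = PCA(n_components=n_factors).fit_transform(fv)   # per-subject factor scores
    mets = np.asarray(metabolites, dtype=float)
    # Pearson correlation of each intake factor with each metabolite.
    z_s = (scores - scores.mean(0)) / scores.std(0)
    z_m = (mets - mets.mean(0)) / mets.std(0)
    return z_s.T @ z_m / len(z_s)   # (n_factors x n_metabolites) correlation matrix
```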
10.1101/19004762 | Why Westerners Are Dissatisfied: A Cross-Sectional Study Identifying State-Level Factors Associated with Variation in Private Health Insurance Satisfaction | McLeod, M. R.; Berinstein, J. A.; Steiner, C. A.; Cushing, K.; Cohen Mekelburg, S. A.; Higgins, P. D. R. | Megan Rose McLeod | University of Michigan Medical School | 2019-08-16 | 1 | PUBLISHAHEADOFPRINT | cc_by_nd | health policy | https://www.medrxiv.org/content/early/2019/08/16/19004762.source.xml | ImportanceLarge regional variations in consumer satisfaction with private health insurance plans have been observed, but the factors driving this variation are unknown.
ObjectiveTo identify explanatory state-level and insurance family-level predictors of satisfaction with private health insurance.
DesignCross-sectional study examining regional and state variations in consumer health insurance plan satisfaction using National Committee for Quality Assurance data from 2015 to 2018, state-level health data and parent insurance family.
SettingUS Population
ParticipantsPrivately insured individuals.
ExposureOne of 2176 private health insurance plans.
Main OutcomeConsumer satisfaction with the health insurance plan on a 0-5 scale.
RESULTSConsumer satisfaction with health insurance was consistently lowest in the West (p<0.0001). Lower private health insurance plan satisfaction was associated with the percentage of the population without a place of usual medical care, the percentage of the state population that is Hispanic, and the percentage of the population reporting any mental illness. Factors associated with increasing insurance satisfaction included higher healthcare spending per capita, a higher number of for-profit beds per capita, and an increased cancer death rate. Increased consumer satisfaction was associated with the Kaiser and Anthem insurance plan families.
Conclusions and RelevanceState and insurer family factors are predictive of private health insurance plan satisfaction. Potentially modifiable factors include access to primary care, healthcare spending per capita, and numbers of for-profit hospital beds. This information will help consumers hold insurance providers accountable to provide higher quality and more desirable coverage and provide actionable items to improve health insurance satisfaction. | null | medrxiv |
10.1101/19004762 | Why Westerners Are Dissatisfied: A Cross-Sectional Study Identifying State-Level Factors Associated with Variation in Private Health Insurance Satisfaction | McLeod, M. R.; Berinstein, J. A.; Steiner, C. A.; Cushing, K.; Cohen Mekelburg, S. A.; Higgins, P. D. R. | Megan Rose McLeod | University of Michigan Medical School | 2019-08-22 | 2 | PUBLISHAHEADOFPRINT | cc_by_nd | health policy | https://www.medrxiv.org/content/early/2019/08/22/19004762.source.xml | ImportanceLarge regional variations in consumer satisfaction with private health insurance plans have been observed, but the factors driving this variation are unknown.
ObjectiveTo identify explanatory state-level and insurance family-level predictors of satisfaction with private health insurance.
DesignCross-sectional study examining regional and state variations in consumer health insurance plan satisfaction using National Committee for Quality Assurance data from 2015 to 2018, state-level health data and parent insurance family.
SettingUS Population
ParticipantsPrivately insured individuals.
ExposureOne of 2176 private health insurance plans.
Main OutcomeConsumer satisfaction with the health insurance plan on a 0-5 scale.
RESULTSConsumer satisfaction with health insurance was consistently lowest in the West (p<0.0001). Lower private health insurance plan satisfaction was associated with the percentage of the population without a place of usual medical care, the percentage of the state population that is Hispanic, and the percentage of the population reporting any mental illness. Factors associated with increasing insurance satisfaction included higher healthcare spending per capita, a higher number of for-profit beds per capita, and an increased cancer death rate. Increased consumer satisfaction was associated with the Kaiser and Anthem insurance plan families.
Conclusions and RelevanceState and insurer family factors are predictive of private health insurance plan satisfaction. Potentially modifiable factors include access to primary care, healthcare spending per capita, and numbers of for-profit hospital beds. This information will help consumers hold insurance providers accountable to provide higher quality and more desirable coverage and provide actionable items to improve health insurance satisfaction. | null | medrxiv |
10.1101/19003681 | Redefining the Bladder Cancer Phenotype using Patterns of Familial Risk | Hanson, H. A.; Leiser, C. L.; Martin, C.; Gupta, S.; Smith, K. R.; Dechet, C.; Lowrance, W.; O'Neil, B.; Camp, N. J. | Heidi A Hanson | University of Utah | 2019-08-16 | 1 | PUBLISHAHEADOFPRINT | cc_no | epidemiology | https://www.medrxiv.org/content/early/2019/08/16/19003681.source.xml | Relatives of bladder cancer (BCa) patients have been shown to be at increased risk for kidney, lung, thyroid, and cervical cancer after correcting for smoking related behaviors that may concentrate in some families. We demonstrate a new method to simultaneously assess risks for multiple cancers to identify distinct multi-cancer configurations (multiple different cancer types that cluster in relatives) surrounding BCa patients. We identified 6,416 individuals with urothelial carcinoma and familial information using the Utah Cancer Registry and Utah Population Database (UPDB). First-degree relatives, second-degree relatives, and first cousins were used to construct a familial enrichment matrix for cancer-types previously shown to be individually associated with BCa. K-medoids clustering was used to identify Familial Multi-Cancer Configurations (FMC). A case-control design and Cox regression with a 1:5 ratio of BCa cases to cancer-free controls was used to quantify the risk in specific relative-types and spouses in each FMC. Clustering analysis revealed 12 distinct FMCs, each exhibiting a different pattern of cancer co-aggregation. Of the 12 FMCs, four exhibited strong familial risk of bladder cancer along with specific patterns of increased risk of cancers in other sites (BCa FMCs), and were the focus of further investigation. Cancers at increased risk in these four BCa FMCs most commonly included melanoma, prostate and breast cancer and less commonly included leukemia, lung, pancreas and kidney cancer. A network-based approach can be used with familial data to discover new phenotype clusters for BCa, providing new directions for discovering patterns of cancer clustering. | null | medrxiv |
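A from-scratch sketch of the k-medoids clustering step described in the bladder-cancer record above, applied to a familial enrichment matrix (rows = probands, columns = cancer-type enrichment in relatives); the distance metric, iteration cap, seed, and default k are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def k_medoids(X, k=12, n_iter=100, seed=0):
    """Voronoi-style k-medoids on rows of X (Euclidean distances; illustrative only)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distance matrix
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)                # assign to nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size:
                # New medoid minimizes total within-cluster distance.
                within = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids
```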
10.1101/19004499 | Rationalising neurosurgical head injury referrals: The development and implementation of the Liverpool Head Injury Tomography Score (Liverpool HITS) for mild traumatic brain injury | Gillespie, C. S.; McLeavy, C. M.; Islim, A. I.; Prescott, S.; McMahon, C. J. | Conor SN Gillespie | University of Liverpool | 2019-08-17 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | surgery | https://www.medrxiv.org/content/early/2019/08/17/19004499.source.xml | ObjectivesTo develop and implement a radiological scoring system to define a surgically significant mild Traumatic Brain Injury (TBI), stratify neurosurgical referrals and improve communication between referral centres and neurosurgical units.
DesignRetrospective single centre case-control analysis of ten continuous months of mild TBI referrals.
SettingA major tertiary neurosurgery centre in England, UK.
ParticipantsAll neurosurgical referrals with a mild TBI (GCS 13-15) during the period of 1st January to 30th October 2017 were eligible for the study. 1248 patients were identified during the study period, with 1144 being included in the final analysis.
InterventionsAll patients CT head results from the referring centres were scored retrospectively using the scoring system and stratified according to their mean score, and if they were accepted for transfer to the neurosurgical centre or managed locally.
Main outcome measureDetermine the discriminatory and diagnostic power, sensitivity and specificity of the scoring system for predicting a surgically significant mild TBI.
ResultsMost patients referred were male (59.4%, N=681), with a mean age of 69 years (SD=21.1). Of the referrals to the neurosurgical centre, 17% (n=195) were accepted for transfer and 83% (n=946) were not accepted. The scoring system was 99% sensitive and 51.9% specific for determining a surgically significant TBI. Diagnostic power of the model was fair with an area under the curve of 0.79 (95% CI 0.76 to 0.82). The score identified 495 (52.2%) patients in ten months of referrals that could have been successfully managed locally without neurosurgical referral if the scoring system was correctly used at the time of injury.
ConclusionThe Liverpool Head Injury Tomography Score (HITS) score is a CT based scoring system that can be used to define a surgically significant mild TBI. The scoring system can be easily used by multiple healthcare professionals, has high sensitivity, will reduce neurosurgical referrals, and could be incorporated into local, regional and national head injury guidance. | 10.1080/02688697.2019.1710825 | medrxiv |
10.1101/19004499 | Rationalising neurosurgical head injury referrals: The development and implementation of the Liverpool Head Injury Tomography Score (Liverpool HITS) for mild traumatic brain injury | Gillespie, C. S.; McLeavy, C. M.; Islim, A. I.; Prescott, S.; McMahon, C. J. | Conor SN Gillespie | University of Liverpool | 2019-08-26 | 2 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | surgery | https://www.medrxiv.org/content/early/2019/08/26/19004499.source.xml | ObjectivesTo develop and implement a radiological scoring system to define a surgically significant mild Traumatic Brain Injury (TBI), stratify neurosurgical referrals and improve communication between referral centres and neurosurgical units.
DesignRetrospective single centre case-control analysis of ten continuous months of mild TBI referrals.
SettingA major tertiary neurosurgery centre in England, UK.
ParticipantsAll neurosurgical referrals with a mild TBI (GCS 13-15) during the period of 1st January to 30th October 2017 were eligible for the study. 1248 patients were identified during the study period, with 1144 being included in the final analysis.
InterventionsAll patients CT head results from the referring centres were scored retrospectively using the scoring system and stratified according to their mean score, and if they were accepted for transfer to the neurosurgical centre or managed locally.
Main outcome measureDetermine the discriminatory and diagnostic power, sensitivity and specificity of the scoring system for predicting a surgically significant mild TBI.
ResultsMost patients referred were male (59.4%, N=681), with a mean age of 69 years (SD=21.1). Of the referrals to the neurosurgical centre, 17% (n=195) were accepted for transfer and 83% (n=946) were not accepted. The scoring system was 99% sensitive and 51.9% specific for determining a surgically significant TBI. Diagnostic power of the model was fair with an area under the curve of 0.79 (95% CI 0.76 to 0.82). The score identified 495 (52.2%) patients in ten months of referrals that could have been successfully managed locally without neurosurgical referral if the scoring system was correctly used at the time of injury.
ConclusionThe Liverpool Head Injury Tomography Score (HITS) score is a CT based scoring system that can be used to define a surgically significant mild TBI. The scoring system can be easily used by multiple healthcare professionals, has high sensitivity, will reduce neurosurgical referrals, and could be incorporated into local, regional and national head injury guidance. | 10.1080/02688697.2019.1710825 | medrxiv |
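A minimal sketch of how a CT-based score such as the one described in the Liverpool HITS records above is evaluated against the surgical reference standard: sensitivity and specificity at a chosen cut-off, area under the ROC curve, and the count of referrals falling below the cut-off. The cut-off and variable names are assumptions, not the published scoring rules.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_ct_score(score, surgically_significant, cutoff):
    score = np.asarray(score, dtype=float)
    y = np.asarray(surgically_significant).astype(bool)
    pred = score >= cutoff                          # "refer to neurosurgery" decision
    sensitivity = (pred & y).sum() / y.sum()
    specificity = (~pred & ~y).sum() / (~y).sum()
    auc = roc_auc_score(y, score)                   # discriminatory power (0.79 reported above)
    managed_locally = int((~pred).sum())            # scans below the cut-off
    return sensitivity, specificity, auc, managed_locally
```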
10.1101/19004622 | Atypical somatic symptoms in adults with prolonged recovery from mild traumatic brain injury | Stubbs, J. L.; Green, K. E.; Silverberg, N. D.; Howard, A.; Dhariwal, A.; Brubacher, J. R.; Garraway, N.; Heran, M. K. S.; Sekhon, M. S.; Aquino, A.; Purcell, V.; Hutchison, J. S.; Torres, I. J.; Panenka, W. J.; CanTBI (National biobank and database of pat...), ; CTRC (Canadian traumatic brain injury research...), | William J. Panenka | University of British Columbia | 2019-08-17 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | neurology | https://www.medrxiv.org/content/early/2019/08/17/19004622.source.xml | Somatization may contribute to persistent symptoms after mild traumatic brain injury (mTBI). In two independently-recruited study samples, we characterized the extent to which symptoms atypical of mTBI but typical for somatoform disorders (e.g., gastrointestinal upset, joint pain) were present in adult patients with prolonged recovery following mTBI. The first sample was cross-sectional and consisted of mTBI patients recruited from the community who reported ongoing symptoms attributable to a previous mTBI (n = 16) along with a healthy control group (n = 15). The second sample consisted of patients with mTBI prospectively recruited from a Level 1 trauma center who had either good recovery (GOSE = 8; n = 33) or poor recovery (GOSE < 8; n = 29). In all participants, we evaluated atypical somatic symptoms using the Patient Health Questionnaire-15 and typical post-concussion symptoms with the Rivermead Post-Concussion Symptom Questionnaire. Participants with poor recovery from mTBI had significantly higher atypical somatic symptoms as compared to the healthy control group in Sample 1 (b = 4.308, p = 9.43E-5) and to mTBI patients with good recovery in Sample 2 (b = 3.287, p = 6.83E-04). As would be expected, participants with poor outcome in Sample 2 had a higher burden of typical rather than atypical symptoms (t(28) = 3.675, p = 9.97E-04, d = 0.94). However, participants with poor recovery still reported atypical somatic symptoms that were significantly higher (1.4 standard deviations, on average) than those with good recovery. Our results suggest that although typical post-concussion symptoms predominate after mTBI, a broad range of somatic symptoms also frequently accompanies mTBI, and that somatization may represent an important, modifiable factor in mTBI recovery. | 10.3389/fneur.2020.00043 | medrxiv |
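The effect-size comparison reported in the mTBI record above (a paired t-test with Cohen's d for typical versus atypical symptom burden) can be sketched as below; this is one common formulation of d, not necessarily the authors' exact calculation, and the inputs are illustrative.

```python
import numpy as np
from scipy import stats

def compare_symptom_burden(typical, atypical):
    """typical, atypical: per-patient symptom scores in the poor-recovery group."""
    typical = np.asarray(typical, dtype=float)
    atypical = np.asarray(atypical, dtype=float)
    t, p = stats.ttest_rel(typical, atypical)                  # paired t-test, df = n - 1
    pooled_sd = np.sqrt((typical.var(ddof=1) + atypical.var(ddof=1)) / 2)
    d = (typical.mean() - atypical.mean()) / pooled_sd         # Cohen's d with averaged variances
    return t, p, d
```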
10.1101/19004473 | Evaluation of a rapid lateral flow calprotectin test for the diagnosis of prosthetic joint infection | Trotter, A. J.; Dean, R.; Whitehouse, C. E.; Mikalsen, J.; Hill, C.; Brunton-Sim, R.; Kay, G. L.; Shakokhani, M.; Durst, A.; Wain, J.; McNamara, I.; O'Grady, J. | Alexander J Trotter | University of East Anglia | 2019-08-17 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc | orthopedics | https://www.medrxiv.org/content/early/2019/08/17/19004473.source.xml | Background: Microbiological diagnosis of prosthetic joint infection (PJI) relies on culture techniques that are slow and insensitive. Rapid tests are urgently required to improve patient management. Calprotectin is a neutrophil biomarker of inflammation that has been demonstrated to be effective for the diagnosis of PJI. A calprotectin-based lateral flow test has been developed for the rapid detection of PJI using synovial fluid samples.
Methods: A convenience series of 69 synovial fluid samples from patients at the Norfolk and Norwich University Hospitals (NNUH) was collected intraoperatively from 52 hip and 17 knee revision operations. Calprotectin levels were measured using a new commercially available lateral flow assay for PJI diagnosis (Lyfstone). For all samples, synovial fluid was pipetted onto the lateral flow device and the signal was read using a mobile phone app after 15 minutes of incubation at room temperature.
Results: According to the Musculoskeletal Infection Society (MSIS) criteria, 24 patients were defined as PJI positive and the remaining 45 were negative. The overall accuracy of the lateral flow test against the MSIS criteria was 75%. The test had a sensitivity and specificity of 75% and 76% respectively with a positive predictive value (PPV) of 62% and a negative predictive value (NPV) of 85%. Discordant results were then reviewed by the clinical team using available patient data to develop an alternative gold standard for defining presence/absence of infection (MSIS+). Compared to MSIS+, the test showed an overall accuracy of 83%, sensitivity and specificity of 95% and 78% respectively, a PPV of 62% and an NPV of 98%. Test accuracy for hip revisions was 77% and for knee revisions was 100%.
Conclusions: This study demonstrates that the calprotectin lateral flow assay is an effective diagnostic test for PJI. Our data suggests that the test is likely to generate false positive results in patients with metallosis and gross osteolysis. | 10.1302/2046-3758.95.BJR-2019-0213.R1 | medrxiv |
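The diagnostic metrics quoted in the calprotectin abstract above follow from a standard 2x2 confusion matrix. The sketch below reconstructs plausible counts from the reported 24 MSIS-positive and 45 MSIS-negative samples and the quoted percentages; the counts are illustrative assumptions, not values published by the authors.

```python
# Standard diagnostic-accuracy calculations behind the figures in the abstract.
# The 2x2 counts are back-calculated from 24 MSIS-positive / 45 MSIS-negative samples
# and the quoted percentages; they are illustrative assumptions only.

tp, fn = 18, 6    # MSIS-positive samples: test positive / test negative
fp, tn = 11, 34   # MSIS-negative samples: test positive / test negative

sensitivity = tp / (tp + fn)                 # 0.750 -> reported 75%
specificity = tn / (tn + fp)                 # 0.756 -> reported 76%
ppv = tp / (tp + fp)                         # 0.621 -> reported 62%
npv = tn / (tn + fn)                         # 0.850 -> reported 85%
accuracy = (tp + tn) / (tp + tn + fp + fn)   # 0.754 -> reported 75%

for name, value in [("sensitivity", sensitivity), ("specificity", specificity),
                    ("PPV", ppv), ("NPV", npv), ("accuracy", accuracy)]:
    print(f"{name}: {value:.1%}")
```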
10.1101/19004846 | The Implementation, Diagnostic Yield and Clinical Outcome of Genetic Testing on an Inpatient Child and Adolescent Psychiatry Service | Besterman, A. D.; Sadik, J.; Enenbach, M. J.; Quintero-Rivera, F.; UCLA Clinical Genomics Center, ; DeAntonio, M.; Martinez-Agosto, J. A. | Aaron D Besterman | University of California Los Angeles | 2019-08-17 | 1 | PUBLISHAHEADOFPRINT | cc_no | genetic and genomic medicine | https://www.medrxiv.org/content/early/2019/08/17/19004846.source.xml | Objective: Diagnostic genetic testing is recommended for children with neurodevelopmental disorders (NDDs). However, many children with NDDs do not receive genetic testing. One approach to improve access to genetic services for these patients is to offer testing on the inpatient child and adolescent psychiatry (CAP) service.
Methods: We implemented systematic genetic testing on an inpatient CAP service by providing medical genetics education to CAP fellows. We compared the genetic testing rates pre- and post-education. We compared the diagnostic yield to previously published studies and the demographics of our cohort to inpatients who received genetic testing on other clinical services. We assessed rates of outpatient genetics follow-up post-discharge.
Results: The genetic testing rate on the inpatient CAP service was 1.6% (2/125) before the educational intervention and 10.7% (21/197) afterwards (OR = 0.13, 95% CI = 0.015-0.58, p = 0.0015). Diagnostic yield for patients on the inpatient service was 4.3% (1/23), lower than previously reported. However, 34.8% (8/23) of patients had variants of unknown significance (VUSs). 39.1% (9/23) of children who received genetic testing while inpatients were underrepresented minorities, compared to 7.7% (1/13) of patients who received genetic testing on other clinical services (OR = 7.35, CI = 0.81-365.00, p = 0.057). 43.5% of patients were lost to outpatient genetics follow-up.
Conclusion: Medical genetics education for fellows on an inpatient CAP service can improve genetic testing rates. Genetic testing for inpatients may primarily identify VUSs instead of well-known NDD risk variants. Genetic testing on the inpatient CAP service may improve access to genetic services for underrepresented minorities, but ensuring outpatient follow-up can be challenging. | 10.1002/aur.2338 | medrxiv |
10.1101/19004846 | The Implementation, Diagnostic Yield and Clinical Outcome of Genetic Testing on an Inpatient Child and Adolescent Psychiatry Service | Besterman, A. D.; Sadik, J.; Enenbach, M. J.; Quintero-Rivera, F.; UCLA Clinical Genomics Center, ; DeAntonio, M.; Martinez-Agosto, J. A. | Aaron D Besterman | University of California Los Angeles | 2019-09-14 | 2 | PUBLISHAHEADOFPRINT | cc_no | genetic and genomic medicine | https://www.medrxiv.org/content/early/2019/09/14/19004846.source.xml | Objective: Diagnostic genetic testing is recommended for children with neurodevelopmental disorders (NDDs). However, many children with NDDs do not receive genetic testing. One approach to improve access to genetic services for these patients is to offer testing on the inpatient child and adolescent psychiatry (CAP) service.
Methods: We implemented systematic genetic testing on an inpatient CAP service by providing medical genetics education to CAP fellows. We compared the genetic testing rates pre- and post-education. We compared the diagnostic yield to previously published studies and the demographics of our cohort to inpatients who received genetic testing on other clinical services. We assessed rates of outpatient genetics follow-up post-discharge.
Results: The genetic testing rate on the inpatient CAP service was 1.6% (2/125) before the educational intervention and 10.7% (21/197) afterwards (OR = 0.13, 95% CI = 0.015-0.58, p = 0.0015). Diagnostic yield for patients on the inpatient service was 4.3% (1/23), lower than previously reported. However, 34.8% (8/23) of patients had variants of unknown significance (VUSs). 39.1% (9/23) of children who received genetic testing while inpatients were underrepresented minorities, compared to 7.7% (1/13) of patients who received genetic testing on other clinical services (OR = 7.35, CI = 0.81-365.00, p = 0.057). 43.5% of patients were lost to outpatient genetics follow-up.
Conclusion: Medical genetics education for fellows on an inpatient CAP service can improve genetic testing rates. Genetic testing for inpatients may primarily identify VUSs instead of well-known NDD risk variants. Genetic testing on the inpatient CAP service may improve access to genetic services for underrepresented minorities, but ensuring outpatient follow-up can be challenging. | 10.1002/aur.2338 | medrxiv |
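The pre- versus post-education comparison in the genetic-testing abstracts above can be reproduced from the quoted counts (2/125 tested before the intervention, 21/197 after). The sketch below uses a Fisher exact test as one reasonable choice; the abstracts do not state which test or confidence-interval method the authors used, so the p-value and interval may differ slightly from theirs.

```python
# Reproducing the pre- vs post-education odds ratio from the counts quoted in the
# abstract. The choice of a Fisher exact test is an assumption for illustration.
from scipy.stats import fisher_exact

#                 tested  not tested
pre_education  = [2,      123]   # 2/125 before the intervention
post_education = [21,     176]   # 21/197 after the intervention

odds_ratio, p_value = fisher_exact([pre_education, post_education])
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")   # OR ~0.13, consistent with the reported value
```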
10.1101/19004937 | A gene-diet interaction-based score predicts response to dietary fat in the Women's Health Initiative | Westerman, K.; Liu, Q.; Liu, S.; Parnell, L. D.; Sebastiani, P.; Jacques, P.; DeMeo, D. L.; Ordovas, J. M. | Jose M. Ordovas | JM-USDA Human Nutrition Research Center on Aging | 2019-08-17 | 1 | PUBLISHAHEADOFPRINT | cc_no | genetic and genomic medicine | https://www.medrxiv.org/content/early/2019/08/17/19004937.source.xml | While diet response prediction for cardiometabolic risk factors (CRFs) has been demonstrated using single SNPs and main-effect genetic risk scores, little investigation has gone into the development of genome-wide diet response scores. We sought to leverage the multi-study setup of the Women's Health Initiative cohort to generate and test genetic scores for the response of six CRFs (body mass index, systolic blood pressure, LDL-cholesterol, HDL-cholesterol, triglycerides, and fasting glucose) to dietary fat. A genome-wide interaction study was undertaken for each CRF in women (n ~10,000) not participating in the Dietary Modification (DM) trial, which focused on the reduction of dietary fat. Genetic scores based on these analyses were developed using a pruning-and-thresholding approach and tested for the prediction of one-year CRF changes as well as long-term chronic disease development in DM trial participants (n ~5,000). One of these genetic scores, for LDL-cholesterol (LDL-C), predicted changes in the associated CRF. This 1760-variant score explained 3.4% of the variance in one-year LDL-C changes in the intervention arm, but was unassociated with changes in the control arm. In contrast, a main-effect genetic risk score for LDL-C was not useful for predicting dietary fat response. Further investigation of this score with respect to downstream disease outcomes revealed suggestive differential associations across DM trial arms, especially with respect to coronary heart disease and stroke subtypes. These results lay the foundation for the combination of many genome-wide gene-diet interactions for diet response prediction while highlighting the need for further research and larger samples in order to achieve robust biomarkers for use in personalized nutrition. | 10.1093/ajcn/nqaa037 | medrxiv |
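The abstract above describes building genetic scores with a pruning-and-thresholding approach over genome-wide interaction results. The sketch below outlines that general recipe; the column names, thresholds, and correlation-based pruning step are illustrative assumptions rather than the authors' exact pipeline.

```python
# Minimal sketch of a pruning-and-thresholding (P+T) score of the general kind
# described above. Inputs, thresholds and column names are illustrative assumptions.
import numpy as np
import pandas as pd

def build_pt_score(gwis: pd.DataFrame, genotypes: pd.DataFrame,
                   p_threshold: float = 1e-4, r2_threshold: float = 0.2) -> pd.Series:
    """gwis: per-variant interaction results with columns ['variant', 'beta', 'p'];
    genotypes: samples x variants dosage matrix (columns named by variant).
    Returns one score per sample."""
    # 1. Thresholding: keep variants whose interaction p-value clears the cutoff.
    hits = gwis[gwis["p"] < p_threshold].sort_values("p")

    # 2. Pruning: keep the most significant variant first and drop any later variant
    #    whose genotypes are too correlated (r^2) with an already-kept variant.
    kept = []
    for variant in hits["variant"]:
        if all(np.corrcoef(genotypes[variant], genotypes[v])[0, 1] ** 2 < r2_threshold
               for v in kept):
            kept.append(variant)

    # 3. Scoring: dosage-weighted sum of interaction effect sizes over kept variants.
    weights = hits.set_index("variant").loc[kept, "beta"]
    return genotypes[kept].mul(weights, axis=1).sum(axis=1)
```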
10.1101/19004770 | Challenges in Building and Maintaining Rare Disease Patient Registries: Results of a Questionnaire Survey in Japan at 2012 | Morita, M.; Ogishima, S. | Mizuki Morita | Okayama University | 2019-08-17 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | health informatics | https://www.medrxiv.org/content/early/2019/08/17/19004770.source.xml | Background: Research infrastructure such as patient registries and biobanks is expected to play important roles by aggregating information and biospecimens to promote research and development for rare diseases. However, both building and maintaining them can be costly. This paper presents results of a survey of patient registries for rare diseases in Japan conducted at the end of 2012, with emphasis on clarifying costs and efforts related to building and maintaining them.
Results: Of 31 patient registries in Japan found by searching a database of research grant reports and by searching the internet, 11 returned valid responses to this survey. Results show that labor and IT system costs are major expenses for developing and maintaining patient registries. Half of the respondent patient registries had no prospect of securing a budget to maintain them. Respondents reported requiring the following support for patient registries: financial support, motivation of registrants (medical doctors or patients), and improved communication with and visibility to potential data users. These results resemble those reported from another survey conducted almost simultaneously in Europe (EPIRARE survey).
Conclusions: Survey results imply that the costs and effort required to build and maintain patient registries make them unrealistic for many rare diseases. Some alternative strategy is necessary to reduce these burdens, such as offering a platform that supplies IT infrastructure and basic secretariat functions that can be shared across many patient registries. | null | medrxiv |
10.1101/19004440 | Medal: a patient similarity metric using medication prescribing patterns | Lopez Pineda, A.; Pourshafeie, A.; Ioannidis, A.; McCloskey Leibold, C.; Chan, A.; Frankovich, J.; Bustamante, C. D.; Wojcik, G. L. | Arturo Lopez Pineda | Stanford University | 2019-08-17 | 1 | PUBLISHAHEADOFPRINT | cc_by_nd | health informatics | https://www.medrxiv.org/content/early/2019/08/17/19004440.source.xml | Objective: Pediatric acute-onset neuropsychiatric syndrome (PANS) is a complex neuropsychiatric syndrome characterized by an abrupt onset of obsessive-compulsive symptoms and/or severe eating restrictions, along with at least two concomitant debilitating cognitive, behavioral, or neurological symptoms. A wide range of pharmacological interventions, along with behavioral and environmental modifications and psychotherapies, has been adopted to treat symptoms and underlying etiologies. Our goal was to develop a data-driven approach to identify treatment patterns in this cohort.
Materials and Methods: In this cohort study, we extracted medical prescription histories from electronic health records. We developed a modified dynamic programming approach to perform global alignment of those medication histories. Our approach is unique since it considers time gaps in prescription patterns as part of the similarity strategy.
Results: This study included 43 consecutive new-onset pre-pubertal patients who had at least 3 clinic visits. Our algorithm identified six clusters with distinct medication usage histories, which may represent clinicians' practice in treating PANS of different severities and etiologies: the two most severe groups requiring high-dose intravenous steroids; two arthritic or inflammatory groups requiring prolonged nonsteroidal anti-inflammatory drug (NSAID) treatment; and two mild relapsing/remitting groups treated with a short course of NSAIDs. The psychometric scores used as outcomes in each cluster generally improved within the first two years.
Discussion and conclusion: Our algorithm shows potential to improve our knowledge of treatment patterns in the PANS cohort, while helping clinicians understand how patients respond to a combination of drugs. | null | medrxiv |
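The Medal abstract above describes a modified dynamic programming alignment that treats time gaps between prescriptions as part of the similarity calculation. The sketch below shows a generic Needleman-Wunsch-style alignment in that spirit; the scoring constants and the (drug, duration-in-days) representation are illustrative assumptions, not the published Medal algorithm.

```python
# Minimal sketch of a time-gap-aware global alignment of two medication histories.
# Scoring constants and the (drug, days) representation are illustrative assumptions.
def align_medication_histories(a, b, match=2.0, mismatch=-1.0, gap_per_day=-0.01):
    """a, b: lists of (drug_name, duration_in_days). Returns the alignment score."""
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning the first i events of a with the first j of b
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap_per_day * a[i - 1][1]
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap_per_day * b[j - 1][1]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            pair = match if a[i - 1][0] == b[j - 1][0] else mismatch
            dp[i][j] = max(
                dp[i - 1][j - 1] + pair,                    # align the two prescriptions
                dp[i - 1][j] + gap_per_day * a[i - 1][1],   # gap: event in a left unmatched
                dp[i][j - 1] + gap_per_day * b[j - 1][1],   # gap: event in b left unmatched
            )
    return dp[n][m]

# Toy usage: two patients with overlapping NSAID courses but different steroid exposure.
patient_1 = [("naproxen", 30), ("prednisone", 5)]
patient_2 = [("naproxen", 45)]
print(align_medication_histories(patient_1, patient_2))
```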