| column | type | detail |
| --- | --- | --- |
| dataset | stringclasses | 45 values |
| id | stringlengths | 17 to 64 characters |
| messages | listlengths | 2 to 2 (one user turn, one assistant turn) |
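
Each row pairs a source-task name (`dataset`) and an example identifier (`id`) with a fixed two-message user/assistant exchange (`messages`). As a minimal sketch of how rows with this schema could be read with the Hugging Face `datasets` library (the repository path below is a hypothetical placeholder, not this dataset's actual name):

```python
from datasets import load_dataset

# Hypothetical repository path; substitute the real dataset name here.
ds = load_dataset("your-org/your-dataset", split="train")

# Inspect the first few rows; each "messages" value is a 2-element list.
for row in ds.select(range(3)):
    print(row["dataset"])              # one of the 45 source-task names
    print(row["id"])                   # e.g. "science.medmentions_ner.1862"
    user, assistant = row["messages"]  # listlengths is always 2 to 2
    print(user["role"], user["content"][:80])
    print(assistant["role"], assistant["content"][:80])
```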
science.medmentions_ner
science.medmentions_ner.1862
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"HealthCareActivity\", \"InjuryOrPoisoning\", \"BodySubstance\", \"IntellectualProduct\", \"AnatomicalStructure\", \"SpatialConcept\", \"Chemical\", \"Bacterium\", \"MedicalDevice\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Finding\", \"BiologicFunction\", \"Virus\", \"ResearchActivity\", \"ClinicalAttribute\", \"PopulationGroup\", \"Eukaryote\", \"BodySystem\", \"Food\", \"ProfessionalOrOccupationalGroup\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nVariants in congenital hypogonadotrophic hypogonadism genes identified in an Indonesian cohort of 46,XY under-virilised boys Congenital hypogonadotrophic hypogonadism (CHH) and Kallmann syndrome (KS) are caused by disruption to the hypothalamic-pituitary-gonadal (H-P-G) axis. In particular, reduced production, secretion or action of gonadotrophin-releasing hormone (GnRH) is often responsible. Various genes, many of which play a role in the development and function of the GnRH neurons, have been implicated in these disorders. Clinically, CHH and KS are heterogeneous; however, in 46,XY patients, they can be characterised by under-virilisation phenotypes such as cryptorchidism and micropenis or delayed puberty. In rare cases, hypospadias may also be present. Here, we describe genetic mutational analysis of CHH genes in Indonesian 46,XY disorder of sex development patients with under-virilisation. We present 11 male patients with varying degrees of under-virilisation who have rare variants in known CHH genes. Interestingly, many of these patients had hypospadias. We postulate that variants in CHH genes, in particular PROKR2, PROK2, WDR11 and FGFR1 with CHD7, may contribute to under-virilisation phenotypes including hypospadias in Indonesia." 
}, { "role": "assistant", "content": "{\"AnatomicalStructure\": [\"FGFR1\", \"PROKR2\", \"neurons\", \"WDR11\", \"micropenis\", \"genes\", \"PROK2\", \"CHD7\", \"cryptorchidism\"], \"Bacterium\": [], \"BiologicFunction\": [\"hypogonadotrophic hypogonadism\", \"disorders\", \"production\", \"KS\", \"CHH\", \"under-virilised\", \"delayed puberty\", \"hypospadias\", \"secretion\", \"under-virilisation\", \"genetic mutational\", \"development\", \"Kallmann syndrome\", \"46,XY disorder of sex development\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [\"hypothalamic-pituitary-gonadal (H-P-G) axis\"], \"Chemical\": [\"gonadotrophin-releasing hormone\", \"GnRH\"], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"46,XY\"], \"Food\": [], \"HealthCareActivity\": [], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [\"Indonesian\", \"cohort\"], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"analysis\"], \"SpatialConcept\": [\"Indonesia\"], \"Virus\": []}" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.26868
[ { "role": "user", "content": "You will be presented with the full text of a biomedical research paper. Please write an abstract for this paper. Your response should include the abstract and no additional text.\n\nPaper text:\na fragment corresponding to the extracellular domain ( 20160 amino acids ) of human trii was cloned into pbtm116 .\na total of 10 10 independent colonies were screened as previously described ( colland et al . , 2004 ) .\nthe prey fragments of the positive clones were pcr amplified and sequenced . the human embryonic kidney cell line\n293 t , hscs , human rd cells , mouse c2c12 cells , monkey kidney cos7 cells , and mink lung mvlu1 cells were transfected using lipofectamine - plus reagent ( invitrogen ) according to the manufacturer 's instructions . for experiments with adam12 antisense\n, cells were incubated with 2 m of antisense oligonucleotides to adam12 ( ctctcttttatgccttct and ccccattcctttctcc ) or random control oligonucleotides ( actactacactagactac and gctctatgactcccag ) as previously described ( lafuste et al . , 2005 ) . for rnai experiments\nare3-lux , gaga9-lux , fast1 , ha - smad4 , myc - smad2 , and myc - smad7 were previously described ( dumont et al .\nthe expression vector for ha - trii was provided by j. wrana ( samuel lunenfeld research institute , mount sinai hospital , toronto , ontario , canada ) .\nexpression constructs for wild type or mutants of adam12 and adam12 fused to egfp were prepared as previously described ( hougaard et al . , 2000 ) .\nthe expression vector encoding adam12 shrna or scrambled shrna was constructed using the block - it u6 rna system ( invitrogen ) according to the manufacturer 's instructions . the expression vector for flag - adam12\nwas obtained by fusing the flag epitope to the n terminus of the adam12 fragment ( amino acids 142739 ) isolated in the yeast two - hybrid screen .\nhepg2 , c2c12 , or 293 t cells were transfected by lipofectamine , and , 30 h later , they were treated for 18 h with 2 ng / ml human tgf-1 ( sigma - aldrich ) .\ncell extracts were assayed for luciferase activity using the dual luciferase reporter assay system ( promega ) , and luciferase activities were normalized on the basis of renilla luciferase expression from the prl - tk control vector . for potassium depletion experiments , transfected cells were incubated in medium and water ( 1:1 ) for 5 min at 37c followed by incubation in medium depleted or not depleted in kcl for 1 h at 37c before stimulation with tgf-. after transfection , cells were lysed in lysis buffer ( dumont et al . , 2003 ) , and cell lysates\nwere subjected to immunoprecipitation with the appropriate antibody for 2 h followed by adsorption to sepharose bead coupled protein g for 1 h. immunoprecipitates were washed five times with lysis buffer containing 0.5% np-40 . for the association of endogenous trii with endogenous adams ,\nimmunoprecipitates were washed three times with lysis buffer containing 0.5% np-40 and two times with lysis buffer containing 1% np-40 .\nthen , samples were separated by sds - page and analyzed by immunoblotting with the indicated antibodies .\nthe following antibodies were used : anti - adam12 rb 122 ( gilpin et al . 
, 1998 ) , anti - flag m2 ( sigma - aldrich ) , anti - ha and anti \nmyc-9e10 ( boehringer manheim ) , antiphospho - smad2 ( cell signaling technologies ) , anti - smad2 ( zymed laboratories ) , anti - adam10 ( prosci ) , anti - adam17 ( chemicon ) , and antiactin , anti - trii , anti bone morphogenetic protein rii , anti - smad7 , anti pai-1 , and anti - junb ( santa cruz biotechnology , inc . ) .\ncells were fixed in 3% pfa , permeabilized with 0.1% triton x-100 , and incubated for 60 min at room temperature with the appropriate primary antibody followed by the appropriate secondary antibody .\nthe coverslips were washed , mounted in pbs containing 50% glycerol and 1 mg / ml 1,4-diazabicyclo[2.2.2]octane , and viewed on an automated microscope ( dmrxa2 ; leica ) equipped with a camera ( coolsnap es n&b ; roper scientific ) and a 63 hcx pl apo na 1.32 oil objective ( leica ) .\nz steps were submitted to deconvolution ( nearest neighbor method ) by using metamorph software ( universal imaging corp . ) .\ntotal rna were extracted by the guanidinium thiocianate / cesium chloride method , and real - time quantitative pcr was performed by the fluorescent dye sybr green methodology as previously described ( le pabic et al . , 2003 ) .\nprimer pairs for target genes were as follows : pai-1 , sense ( 5-gtctttccgaccaagagcag-3 ) and antisense ( 5-cgatcctgaccttttgcagt-3 ) ; adam12 , sense ( 5-gtttggctttggaggaagcacag-3 ) and antisense ( 5-tgcaggcagaggcttctgagg-3 ) ; col1a2 , sense ( 5-ggtggtggttatgactttg-3 ) and antisense ( 5-atacaggtttcgccggtag-3 ) ; and 18s , sense ( 5-cgccgctagaggtgaaattc-3 ) and antisense ( 5-ttggcaaatgctttcgctc-3 ) .\ns1 a shows the effect of increasing amounts of adam12 on expression of the tgf-responsive gene pai-1 .\ns1 b shows the tgf-dependent expression of endogenous adam12 or col1a2 in cells treated with adam12 antisense or control oligonucleotides .\ns3 shows the association of trii with several transmembrane proteins as indicated by labeling with a membrane - impermeable biotinylation reagent ." }, { "role": "assistant", "content": "transforming growth factor- ( tgf- ) regulates a wide variety of biological processes through two types of ser / thr transmembrane receptors : the tgf- type i receptor and the tgf- type ii receptor ( trii ) . upon ligand binding , tgf- type i receptor activated by trii propagates signals to smad proteins , which mediate the activation of tgf- target genes . in this study \n , we identify adam12 ( a disintegrin and metalloproteinase 12 ) as a component of the tgf- signaling pathway that acts through association with trii . \n we found that adam12 functions by a mechanism independent of its protease activity to facilitate the activation of tgf- signaling , including the phosphorylation of smad2 , association of smad2 with smad4 , and transcriptional activation . furthermore , adam12 induces the accumulation of trii in early endosomal vesicles and stabilizes the trii protein presumably by suppressing the association of trii with smad7 . \n these results define adam12 as a new partner of trii that facilitates its trafficking to early endosomes in which activation of the smad pathway is initiated ." } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.22677
[ { "role": "user", "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: Thermodynamic Costs of Information Processing in Sensory Adaptation}\nArticle:\nAbstract:\nBiological sensory systems react to changes in their surroundings. They are characterized by fast response and slow adaptation to varying environmental cues. Insofar as sensory adaptive systems map environmental changes to changes of their internal degrees of freedom, they can be regarded as computational devices manipulating information. Landauer established that information is ultimately physical, and its manipulation subject to the entropic and energetic bounds of thermodynamics. Thus the fundamental costs of biological sensory adaptation can be elucidated by tracking how the information the system has about its environment is altered. These bounds are particularly relevant for small organisms, which unlike everyday computers, operate at very low energies. In this paper, we establish a general framework for the thermodynamics of information processing in sensing. With it, we quantify how during sensory adaptation information about the past is erased, while information about the present is gathered. This process produces entropy larger than the amount of old information erased and has an energetic cost bounded by the amount of new information written to memory. We apply these principles to the E. coli's chemotaxis pathway during binary ligand concentration changes. In this regime, we quantify the amount of information stored by each methyl group and show that receptors consume energy in the range of the information-theoretic minimum. Our work provides a basis for further inquiries into more complex phenomena, such as gradient sensing and frequency response.\nIntroduction:\nIn order to perform a variety of tasks, living organisms continually respond and adapt to their changing surroundings through diverse electrical, chemical and mechanical signaling pathways, called sensory systems[1]. In mammals, prominent examples are the neurons involved in the visual, olfactory, and somatic systems[2]–[5]. But also unicellular organisms lacking a neuronal system sense their environment: Yeast can sense osmotic pressure[6], and E. coli can monitor chemical gradients[7], temperatures[8] and pH[9]. Despite the diversity in biochemical details, sensory adaptation systems( SAS) exhibit a common behavior: long-term storage of the state of the environment and rapid response to its changes[10]. Intuitively, one expects that for these SAS to function, an energy source – such as ATP or SAM – is required; but is there a fundamental minimum energy needed? To tackle this question, we first relate a generic SAS to a binary information processing device, which is tasked to perform fast information acquisition on the environment( response) and to record subsequently the information into its longer term memory( adaptation). Since the foundational works of Maxwell, Szilard and Landauer, the intimate relationship between thermodynamic costs and information processing tasks has been intensely studied[11]–[17]. 
As a result, the natural mapping between a generic SAS and an information processing device allows us to quantify the minimal energetic costs of sensory adaptation. The idea of viewing biological processes as information processing tasks is not new[7],[12],[18]. However, rationalizing sensory adaptation is complicated by recent studies that have revealed that motifs in the underlying biochemical networks play a fundamental role in the thermodynamic costs. For instance, the steady state of feedback adaptive systems must be dissipative, with more dissipation leading to better adaptation[19], an observation echoed in the analysis of a minimal model of adaptive particle transport[20]. Other studies have suggested that some feedforward adaptive systems may require dissipation to sustain their steady state[21], while some may not[22],[23]. Furthermore, past studies[18],[24] have approached the notion of information by considering noisy inputs due to stochastic binding, a realm in which adaptation may not be relevant due to the separation of time-scales[25]. Here, we develop a different approach that avoids these caveats by considering a thermodynamically consistent notion of information that naturally incorporates the costs of sensing in sensory adaptation. Specifically, we derive a collection of universal bounds that relate the thermodynamic costs of sensing to the information processed. These bounds reveal for the first time that for a generic SAS, measuring an environmental change is energetically costly[( 6) below], while to erase the memory of the past is energetically free, but necessarily irreversible[( 5) below]. By formalizing and linking the information processing and thermodynamics of sensory systems, our work shows that there is an intrinsic cost of sensing due to the necessity to process information. To illustrate our generic approach, we study first a minimal four-state feedforward model and then a detailed ten-state feedback model of E. coli chemotaxis. Owing to the symmetry of its motif's topology the four-state feedforward model does not require energy to sustain its adapted state. Instead, all the dissipation arises from information processing: acquiring new information consumes energy, while erasing old information produces entropy. By contrast, the E. coli model sustains its nonequilibrium steady state( NESS) by constantly dissipating energy, a requirement for adaptation with a feedback topology[19]. In this nonequilibrium setting, we generalize our thermodynamic bounds in order to pinpoint the additional energy for sensing over that required to maintain the steady state. We find with this formalism that in E. coli chemotaxis the theoretical minimum demanded by our bounds accounts for a sizable portion of the energy spent by the bacterium on its SAS.\nDiscussion:\nWe have derived generic information-theoretic bounds to sensory adaptation. We have focused on response-adaptive sensory systems subject to an abrupt environmental switch. This was merely a first step, but the procedure we have outlined here only relies on the validity of the second law of thermodynamics, and therefore can be extend to any small system affected by a random external perturbation to which we can apply stochastic thermodynamics, which is reviewed in[37]. Our predictions are distinct from( although reminiscent of) Landauer's principle[11],[12], which bounds the minimum energy required to reset an isolated memory. By contrast, the information erased in our system is its correlations with the signal. 
There is another important distinction from the setup of Landauer, and more broadly the traditional setup in the thermodynamics of computation[11] as well as the more recent advancements on the thermodynamics of information processing in the context of measurement and feedback[15],[38]–[45]. There the memory is reset by changing or manipulating it by varying its energy landscape. In our situation, the erasure comes about because the signal is switched. The loss of correlations is stimulated by a change in the measured system – that is the environmental signal; erasure does not occur because the memory itself is altered. Also relevant is[46], which addresses the minimum dissipated work for a system to make predictions about the future fluctuations of the environmental signal, in contrast to the measured information about the current signal, which we have considered. Our results predict that energy is required to sense changes in the environment, but do not dictate that source of energy. Our equilibrium feedforward model is able to sense and adapt by consuming energy provided by the environment. E. coli's feedback, however, uses mostly external energy to respond, but must consume energy of its own to adapt. The generic bounds here established apply to these two distinct basic topologies, irrespective of their fundamentally different energetics. For E. coli, to quantify to what extent is affected by SAM consumption and ligand binding, a more detailed chemical model is required in conjunction with a partitioning of the excess work into distinct terms. An interesting open question in this regard is why nature would choose the dissipative steady state of E. coli, when theoretically the cost of sensing could be paid by the environment. For a ligand change of, in the region of high adaptation, the information measured/erased is bits. We observed that the corresponding average change in the methylation level for a chemoreceptor is, suggesting that a methylation level can store bits for such 1-bit step response operations. Despite the small adaptation error, information storage is limited by fluctuations arising from the finite number of discrete methylation levels. Receptors' cooperativity, which is known to reduce fluctuations of the collective methylation level, may prevent this, allowing them to store more information. On the energetic side, we have shown that the cost of sensing these ligand changes per receptor is around 10% of the cost of sustaining the corresponding adaptive machinery. We also showed that the energetic cost of binary operations is roughly twice beyond its minimum for large ligand changes, in stark contrast with everyday computers for which the difference is orders of magnitude. Taken together these numbers suggest that 5% of the energy a cell uses in sensing is determined by information-thermodynamic bounds, and is thus unavoidable. Future work should include addressing sensory adaptation in more complex scenarios. One which has recently aroused attention is fluctuating environments, which so far has been addressed using trajectory information[44],[45],[47]. However, under physiological conditions this is unlikely to play a significant role given the large separation of time-scales between binding, response, and adaptation[25]. Another scenario is a many bits step operation, in which instead of high and low signals a large discrete set of ligand concentrations is considered. 
Frequency response and gradient sensing are also appealing[27], since in them the system is in a dynamic steady state in which the memory is continuously erased and rewritten. Analysis of such scenarios is far from obvious, but the tools developed in this work constitute the first step in developing their theoretical framework." }, { "role": "assistant", "content": "The ability to process information is a ubiquitous feature of living organisms . Indeed , in order to survive , every living being , from the smallest bacterium to the biggest mammal , has to gather and process information about its surrounding environment . In the same way as our everyday computers need power to function , biological sensors need energy in order to gather and process this sensory information . How much energy do living organisms have to spend in order to get information about their environment ? In this paper , we show that the minimum energy required for a biological sensor to detect a change in some environmental signal is proportional to the amount of information processed during that event . In order to know how far a real biological sensor operates from this minimum , we apply our predictions to chemo-sensing in the bacterium Escherichia Coli and find that the theoretical minimum corresponds to a sizable portion of the energy spent by the bacterium ." } ]
science.qasa_abstractive_qa
science.qasa_abstractive_qa.1505
[ { "role": "user", "content": "You will be shown an excerpt from a computer science scientific research paper, followed by a question about the paper. Please answer the question. Do not include any text in your response other than the answer.\n\nContext: Diffusion models are probabilistic generative models that are trained to learn a data distribution by the gradual denoising of a variable sampled from a Gaussian distribution. Specifically, this corresponds to learning the reverse process of a fixed-length Markovian forward process. In simple terms, a conditional diffusion model \\hat{\\mathbf{x}}_{\\theta} is trained using a squared error loss to denoise a variably-noised image \\mathbf{z}_{t}\\coloneqq\\alpha_{t}\\mathbf{x}+\\sigma_{t}{\\bm{\\epsilon}} as follows:\\mathbb{E}_{\\mathbf{x},\\mathbf{c},{\\bm{\\epsilon}},t}\\!\\left[w_{t}\\|\\hat{\\mathbf{x}}_{\\theta}(\\alpha_{t}\\mathbf{x}+\\sigma_{t}{\\bm{\\epsilon}},\\mathbf{c})-\\mathbf{x}\\|^{2}_{2}\\right](1)where \\mathbf{x} is the ground-truth image, \\mathbf{c} is a conditioning vector (e.g., obtained from a text prompt), {\\bm{\\epsilon}}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I}) is a noise term and \\alpha_{t},\\sigma_{t},w_{t} are terms that control the noise schedule and sample quality, and are functions of the diffusion process time t\\sim\\mathcal{U}([0,1]).At inference time, the diffusion model is sampled by iteratively denoising \\mathbf{z}_{t_{1}}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I}) using either the deterministic DDIM song2020denoising or the stochastic ancestral sampler ho2020denoising . Intermediate points \\mathbf{z}_{t_{1}},\\dotsc,\\mathbf{z}_{t_{T}}, where 1=t_{1}>\\cdots>t_{T}=0, are generated, with decreasing noise levels. These points, \\hat{\\mathbf{x}}^{t}_{0}\\coloneqq\\hat{\\mathbf{x}}_{\\theta}(\\mathbf{z}_{t},\\mathbf{c}), are functions of the \\mathbf{x}-predictions. The details of text-conditioning in text-to-image diffusion models are of high importance for visual quality and semantic fidelity. Ramesh et al. ramesh2022hierarchical use CLIP text embeddings that are translated into image embeddings using a learned prior, while Saharia et al. saharia2022photorealistic use a pre-trained T5-XXL language model raffel2020exploring . In our work, we use the latter.Language models like T5-XXL generate embeddings of a tokenized text prompt, and vocabulary encoding is an important pre-processing step for prompt embedding. In order to transform a text prompt \\mathbf{P} into a conditioning embedding \\mathbf{c}, the text is first tokenized using a tokenizer f using a learned vocabulary. Following saharia2022photorealistic , we use the SentencePiece tokenizer kudo2018sentencepiece . After tokenizing a prompt \\mathbf{P} using tokenizer f we obtain a fixed-length vector f(\\mathbf{P}). The language model \\Gamma is conditioned on this token identifier vector to produce an embedding \\mathbf{c}\\coloneqq\\Gamma(f(\\mathbf{P})). Finally, the text-to-image diffusion model is directly conditioned on \\mathbf{c}.\n\nQuestion: How is the conditioning vector obtained?" }, { "role": "assistant", "content": "From text prompts, the conditioning embedding vector is obtained. The text prompt is first tokenized to provide a fixed length token identification vector, and then the language model maps this vector to produce an embedding that serves as a conditioning vector." } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.2365
[ { "role": "user", "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: I propose an unsupervised method to decouple the lexical and affective information from facial configurations.\n\nKeyphrases: emotion recognition video images\n\nDataset description:\nAff-Wild: Aff-Wild is a dataset for emotion recognition from facial images in a variety of head poses, illumination conditions and occlusions.\nFlightmare Simulator: Flightmare is composed of two main components: a configurable rendering engine built on Unity and a flexible physics engine for dynamics simulation. Those two components are totally decoupled and can run independently from each other.\nSherLIiC: SherLIiC is a testbed for lexical inference in context (LIiC), consisting of 3985 manually annotated inference rule candidates (InfCands), accompanied by (i) ~960k unlabeled InfCands, and (ii) ~190k typed textual relations between Freebase entities extracted from the large entity-linked corpus ClueWeb09. Each InfCand consists of one of these relations, expressed as a lemmatized dependency path, and two argument placeholders, each linked to one or more Freebase types.\nRAF-DB: The Real-world Affective Faces Database (RAF-DB) is a dataset for facial expression. It contains 29672 facial images tagged with basic or compound expressions by 40 independent taggers.\nIEMOCAP: Multimodal Emotion Recognition IEMOCAP The IEMOCAP dataset consists of 151 videos of recorded dialogues, with 2 speakers per session for a total of 302 videos across the dataset. Each segment is annotated for the presence of 9 emotions (angry, excited, fear, sad, surprised, frustrated, happy, disappointed and neutral) as well as valence, arousal and dominance.\nSEMAINE: The SEMAINE videos dataset contains spontaneous data capturing the audiovisual interaction between a human and an operator undertaking the role of an avatar with four personalities: Poppy (happy), Obadiah (gloomy), Spike (angry) and Prudence (pragmatic). The audiovisual sequences have been recorded at a video rate of 25 fps (352 x 288 pixels).\n\nOptions:- A: Aff-Wild B: Flightmare Simulator C: SherLIiC D: RAF-DB E: IEMOCAP F: SEMAINE" }, { "role": "assistant", "content": "[\"IEMOCAP\", \"SEMAINE\"]" } ]
science.chemprot_re
science.chemprot_re.188
[ { "role": "user", "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nPharmacokinetic Interactions between Monoamine Oxidase A Inhibitor Harmaline and 5-Methoxy-N, N-Dimethyltryptamine, and the Impact of CYP2D6 Status. 5-Methoxy-N, N-dimethyltryptamine (5-MeO-DMT or street name \" 5-MEO \") is a newer designer drug belonging to a group of naturally occurring indolealkylamines. Our recent study has demonstrated that coadministration of monoamine oxidase A (MAO-A) inhibitor harmaline (5 mg/kg) increases systemic exposure to 5-MeO-DMT (2 mg/kg) and active metabolite bufotenine. This study is aimed at delineating harmaline and 5-MeO-DMT pharmacokinetic (PK) interactions at multiple dose levels, as well as the impact of CYP2D6 that affects harmaline PK and determines 5-MeO-DMT O-demethylation to produce bufotenine. Our data revealed that inhibition of MAO-A-mediated metabolic elimination by harmaline (2, 5, and 15 mg/kg) led to a sharp increase in systemic and cerebral exposure to 5-MeO-DMT (2 and 10 mg/kg) at all dose combinations. A more pronounced effect on 5-MeO-DMT PK was associated with greater exposure to harmaline in wild-type mice than CYP2D6-humanized (Tg-CYP2D6) mice. Harmaline (5 mg/kg) also increased blood and brain bufotenine concentrations that were generally higher in Tg-CYP2D6 mice. Surprisingly, greater harmaline dose (15 mg/kg) reduced bufotenine levels. The in vivo inhibitory effect of harmaline on CYP2D6-catalyzed bufotenine formation was confirmed by in vitro study using purified CYP2D6. Given these findings, a unified PK model including the inhibition of MAO-A- and CYP2D6-catalyzed 5-MeO-DMT metabolism by harmaline was developed to describe blood harmaline, 5-MeO-DMT, and bufotenine PK profiles in both wild-type and Tg-CYP2D6 mouse models. This PK model may be further employed to predict harmaline and 5-MeO-DMT PK interactions at various doses, define the impact of CYP2D6 status, and drive harmaline-5-MeO-DMT pharmacodynamics." }, { "role": "assistant", "content": "[[\"harmaline\", \"INHIBITOR\", \"monoamine oxidase A\"], [\"harmaline\", \"DOWNREGULATOR\", \"CYP2D6\"], [\"Harmaline\", \"INHIBITOR\", \"Monoamine Oxidase A\"], [\"bufotenine\", \"PRODUCT-OF\", \"CYP2D6\"], [\"harmaline\", \"INHIBITOR\", \"MAO-A\"]]" } ]
science.covid_deepset_qa
science.covid_deepset_qa.604
[ { "role": "user", "content": "You will be shown a paragraph from a biomedical research paper, followed by a question about the paper. Please extract text from the paragraph to answer the question. Do not include any text in your repsonse other than the answer.\n\nParagraph: Etiology of Influenza-Like Illnesses from Sentinel Network Practitioners in Réunion Island, 2011-2012\n\nhttps://www.ncbi.nlm.nih.gov/pmc/articles/PMC5031398/\n\nSHA: f5ff89ebfdd0375d034c112c6c1c7e163fa69a0c\n\nAuthors: Brottet, Elise; Jaffar-Bandjee, Marie-Christine; Li-Pat-Yuen, Ghislaine; Filleul, Laurent\nDate: 2016-09-21\nDOI: 10.1371/journal.pone.0163377\nLicense: cc-by\n\nAbstract: In Réunion Island, despite an influenza surveillance established since 1996 by the sentinel general practitioner’s network, little is known about the etiology of Influenza like-illness (ILI) that differs from influenza viruses in a tropical area. We set up a retrospective study using nasal swabs collected by sentinel GPs from ILI patients in 2011 and 2012. A total of 250 swabs were randomly selected and analyzed by multiplex reverse transcriptase polymerase chain reaction (RT-PCR) including research of 18 viruses and 4 bacteria. We detected respiratory viruses in 169/222 ( 76.1 %) samples, mostly rhinovirus (23.4%), influenza A virus (21.2%), influenza B virus (12.6%), coronavirus (4.9%) and Human metapneumovirus (3.6%). Nine swabs (5.3% of positive swabs) revealed co-infections with two viruses identified, among which six concerned co-infections with influenza viruses. We observed important seasonal differences, with circulation of Human Metapneumoviruses, RSV A and B and coronavirus only during summer; whereas parainfluenza viruses were identified only during winter. In conclusion, this study highlights a substantial circulation of multiple respiratory pathogens in Réunion Island throughout the year. It shows that ILI are not only attributable to influenza and underlines the need for biological surveillance. As the use of multiplex RT-PCR showed its efficacy, it is now used routinely in the surveillance of ILI. Text: Influenza like-illness (ILI) or acute respiratory infections can be caused by several types of respiratory viruses or bacteria in humans [1] . Influenza viruses, Respiratory Syncytial viruses (RSV) and Parainfluenza viruses are identified as major viruses mostly responsible for ILI and pneumonia in several studies [2] . However practitioners cannot diagnose the infection without a biological test confirmation. Unfortunately, these infections causes are identified in less than 50% [3] . Réunion Island, a French overseas territory with 850,000 inhabitants, is located in the southern hemisphere between Madagascar and Mauritius in the Indian Ocean (Latitude: 21°05.2920 S Longitude: 55°36.4380 E.). The island benefits from a healthcare system similar to mainland France and epidemiological surveillance has been developed by the regional office of the French Institute for Public Health Surveillance (Cire OI), based on the surveillance system of mainland France [4] . Influenza activity generally increases during austral winter, corresponding to summer in Europe [5] . Since 2011, influenza vaccination campaign in Reunion Island starts in April and the vaccine used corresponds to World Health Organization recommendations for the southern hemisphere. Since 1996, clinical and biological influenza surveillance has been based on a sentinel practitioner's network [6] . 
In 2014, this network was composed of 58 general practitioners (GPs) spread over the island and represented around 7% of all Réunion Island GPs. Nasal swabs are randomly collected all along the year and are tested by RT-PCR for influenza viruses. Among these surveillance samples, 40 to 50% are tested positive for influenza A virus, A(H1N1)pdm09 or B virus by the virological laboratory of the University Hospital Center of Réunion. Thus ILI samples tested negative for influenza are of unknown etiology. Several biological tools allow identifying respiratory pathogens from nasal swab. In recent years, multiplex reverse transcriptase polymerase chain reaction (RT-PCR) has been developed to identify several viruses simultaneously [7] [8] [9] [10] . We therefore used this new method to set up a retrospective study using swabs collected by sentinel GPs from 2011 to 2012. The main objective of our study was to characterize respiratory pathogens responsible for ILI consultations in sentinel GPs in 2011 and 2012. Secondary objectives were to highlight seasonal trends on respiratory pathogens circulation and to describe occurrence of co-infections, especially during the flu season. ILI was defined as a sudden onset of fever more than 38 degrees Celsius and cough, associated or not with other symptoms such as breathing difficulty, headache, etc. Every week, all GPs of the sentinel network were encouraged to collect a nasal swab from the first two patients who presented ILI since less than three days. After being tested for influenza viruses, the 994 swabs collected in 2011 and 2012 are frozen at -80°C at the university hospital center (CHU) laboratory. Based on the budget, a season-stratified sample of 250 swabs was randomly selected in order to describe circulating viruses including outside flu season. Random sampling was performed with Excel 1 using the anonymized surveillance database of the Cire OI. The sampling frame contained identification number of swab assigned by Cire OI, laboratory identification number, sex, age, date of onset of symptoms, date of swab collection and result of influenza RT-PCR. We used Respifinder 1 Smart 22 kits, a multiplex RT-PCR (PathoFinder, Maastricht, The Netherlands) which can detect 22 respiratory pathogens. This assay is based on the multiplex ligation-dependent probe amplification (MLPA) technology. The reverse transcription and preamplification steps were performed on the epgradient Mastercycler 1 (Eppendorf) and the hybridization, ligation and detection steps on the LightCycler 1 480 system (Roche Applied Science). This method was chosen because of its high specificity, compared to other similar methods (78% versus 33%) [3, 11] . Multiplex analysis allows for rapid production of diagnostic results. It thus allows highlighting the possible presence of eighteen respiratory viruses and four bacteria in one reaction by melt curve analysis: Influenza A not (H1N1 \n\nStatistical analyses were performed with Stata 1 and Excel 1 . Two seasons were defined to identify possible seasonal trends in circulation of the viruses: winter season during weeks 23 to 39 between June and September and summer season during the rest of the year. Data and swabs result from a surveillance system that received regulatory approvals, including the CNIL (National Commission for Information Technology and Civil Liberties Number 1592205) approval in July 2012. All the patients have received oral information and gave their consent for swab and data collection. 
Data were collected for surveillance purpose and are totally anonymous. Among the 250 randomly-selected swabs, 26 were not available anymore as they were sent to Influenza Reference Center for confirmation and characterization of the pathogenic agent. According to the sensitivity of the assay two samples could be discordant results between Influenza PCR initially realized and Multiplex PCR. Thus they were deleted from the analysis: one is positive for Influenza in singleplex and negative for all tested pathogens in multiplex and one is positive for Influenza in singleplex and positive for PIV2 in multiplex. In total, 222 analyses were considered. Moreover, 53 samples were negative for all analyzed respiratory pathogens (23.9%) and 169 samples had at least one detected pathogen ( 76.1 %), finally a total of 178 pathogens was identified. During the study period, a minority of the weeks (21 i.e. 20%) did not include any sampled swab, mainly outside flu season. Patients' sex-ratio was 0.63 (86 men and 136 women) and mean age was 28.4 years [min 0; max 81]. Ten percent had less than 5 years, 24% 5-15 years, 63% 15-65 years and only 3% were 65 and older. The respiratory pathogens most frequently identified in ILI swabs were rhinovirus (23.4%), influenza A not H1N1 (21.2%) and influenza B (12.6%) ( Table 1) . Among the 22 respiratory pathogens tested by the multiplex, only three were not found in any analyzed sample: Parainfluenza 3, Legionella pneumophila and Bordetella pertussis. Regarding co-infections, nine swabs revealed the presence of two viruses, among which 6 involved influenza viruses (Table 2) . Analyses showed that some viruses are possibly seasonal and were circulating during a specific period of the year. They are detected only in summer for Human Metapneumovirus, RSV A and B, and influenza A(H1N1)pdm09. For the latter, it is specific to the studied period since the influenza A(H1N1)pdm09 virus reappeared in Réunion Island in October 2012 and was no longer circulating since late 2010. On the opposite, Parainfluenza 1,2 and 4 viruses were identified only in winter. For other pathogens, no specific period of detection was observed. A weekly description of samples was realized to study the distribution of respiratory pathogens in 2011 and 2012 (Fig 1) . Results of biological analyses were compared with data of ILI consultations declared by sentinel GPs in 2011 and 2012. We observed in 2011, after a first wave in June mainly due to influenza A not H1N1 virus, a second wave of ILI consultations with mainly identification of Parainfluenza viruses and not influenza viruses. In 2012, the second epidemic wave at the end of austral winter coincided with Influenza viruses and Rhinovirus circulation. Regarding negative swabs (Fig 2) , we observed no seasonality during the study period with a similar proportion whatever the season. This retrospective study based on a sentinel GPs network showed that not only influenza viruses are responsible for ILI consultations. Indeed, an important circulation of multiple pathogens was observed throughout the year, with 12 different types of pathogens identified in 2011 and 2012. Respiratory viral pathogens were present in 76.1 % of samples, which is largely above results from annual influenza surveillance [12] . After influenza viruses, Rhinovirus and Coronavirus were the most common respiratory viruses in Réunion Island. Although samples were not taken every week, the sample was representative of ILI activity and consistent with flu season. 
Nevertheless, according to the low number of samples, it is difficult to conclude about seasonality. However in our study, RSV was circulating in summer season which is hot and rainy, which is confirmed by other studies in tropical region [13] . This study also highlighted several co-infections, showing the concomitant, multiple etiology of ILI. Co-circulation was already observed in Réunion Island during the A(H1N1) pdm09 pandemic in addition to influenza virus, with identification of other respiratory viruses such as Rhinovirus or Coronavirus [14] . In mainland France, during this pandemic, circulation of major respiratory viruses was found, such as Rhinovirus, Parainfluenza, Coronavirus, Human Metapneumovirus, like in our publication [15] [16] . In our study, only 5.3% of positive swabs were co-infections whereas in two studies in Madagascar co-infections represented 27.3% and 29.4% [17] [18] . Despite the distance of 9,300 km between Réunion and France, the island is directly connected to Europe with four daily flights to France. These exchanges can impact respiratory pathogens circulation in southern and northern hemisphere. Results of this study can therefore be of interest to both Indian Ocean and Europe countries. Among the 148 swabs initially negative for influenza because not previously tested for any other viruses, the study found an etiology for 95 swabs. In total, only 53 swabs, representing 24% of the sample, remained without etiology with negative multiplex PCR results all along the year. Multiple hypotheses can explain this result: a poor quality of swabs, preventing from identifying a pathogen, noninfectious causes or other pathogens not included in the multiplex PCR. However, we couldn't test the negative swabs for RNAse P, a marker of human cells, which could provide a modicum of assurance that the swab contained human cells. Concerning the two samples divergent for influenza identification between the multiplex and singleplex PCR, we discarded them for the analysis; one was positive in Influenza with singleplex and positive in PIV with multiplex. It could be a false positive result from singleplex. Indeed, as the multiplex PCR assay has a good sensitivity and is considered as a gold-standard, we decided to keep seven negative results for Influenza in singleplex and positive in Influenza in multiplex [7] [8] [9] [10] . No case of Bordetella pertussis which causes whooping cough and Legionella pneumophila which causes Legionnaires' disease was identified in this study. However, these diseases are rare in Réunion Island, around three cases of Legionnaires' disease are declared each year. A limit of the study is that no clinical data were available in the virological surveillance system of influenza in Réunion Island. It was impossible to compare clinical symptoms according to each pathogen and to know if there are different pathogens which cause for instance rhinitis, laryngitis or bronchitis (diseases included in ILI). A specific prospective study including clinical data might provide useful elements in the semiotics of diseases. In conclusion, this study highlighted an important circulation of multiple pathogens in Réunion Island throughout the year. It shows that ILI is not specific to influenza and so it is essential to have biological results in order to establish the differential diagnosis and thus explain the etiology of symptoms. 
For a better understanding of respiratory pathogens circulating in Réunion Island, information from this study may also be useful to practitioners who see many patients in consultation with ILI. As the use of multiplex RT-PCR showed its efficacy in the ILI surveillance and allowed to highlight the circulation of other viruses and bacterial causes of respiratory infections, it is now used routinely in the surveillance of ILI. Moreover, it would be interesting to repeat this study every 3 or 5 years adding clinical data to monitor the evolution of respiratory pathogens in Réunion Island over time.\n\nQuestion: Respiratory viral pathogens were present in what percentage of samples?" }, { "role": "assistant", "content": "76.1" } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.95294
[ { "role": "user", "content": "You will be presented with the full text of a science research paper. Please write an abstract for this paper. Your response should include the abstract and no additional text.\n\nPaper text:\nthe bhlumi monte carlo ( mc ) program@xcite was developed as a precision tool for calculating the small - angle bhabha luminosity process at slc and lep , and with continued development , it will continue to be a valuable tool meeting the requirements of a next - generation linear @xmath1 collider , such as the proposed ilc .\ncentral to this program s success was an exact treatment of the phase space for @xmath2 photon bremsstrahlung .\na yfs - exponentiation@xcite procedure allows all ir singularities to be canceled exactly between real and virtual emission processes to all orders .\nthe leading soft photon effects are exponentiated , and ir - finite yfs residuals are then calculated exactly to the order required to reach the desired precision level .\nbhlumi attained a total error budget of 0.061% for lep1 parameters and 0.122% for lep2 parameters for a typical calorimetric detector scenario.@xcite to assure this precision level , it was necessary to calculate the most important unimplemented effect in bhlumi4.04 , which was the next to leading - log ( nll ) contribution to the two - photon radiative corrections .\nthe double real photon and single real plus virtual photon corrections to the small - angle bhabha scattering process were calculated exactly in refs .\n@xcite when added to the known two - loop virtual correction@xcite , these results showed that the @xmath3 corrections enter at the 0.027% level for lep1 parameters and 0.04% level for lep2 parameters .\nhere , @xmath4 is the `` large logarithm '' entering into a leading log expansion . implementing these exact @xmath5 results in bhlumi would eliminate these contributions to the error budget .\nthe only remaining unimplemented @xmath5 radiative corrections would then be up - down interference effects in which two virtual photons are exchanged between the @xmath6 and @xmath7 line , which are suppressed at small angles , and nominally enter at the level of 0.004% or less for angles below @xmath8.@xcite these contributions , represented by diagrams of the type shown in fig . 1 , could thus be safely neglected for slc and lep physics .\n-0.5 cm for ilc physics , where the goal is to reach 0.01% in the small - angle bhabha luminosity process , it is desirable to carefully check the magnitude of the up - down interference terms , and to implement them if they turn out to be significant . a key ingredient in the comparison , the five - point box integral appearing in the second and fourth diagram in fig . 
1\n, has recently been provided by the looptools 2.2 package.@xcite a number of other @xmath5 calculations which have appeared recently@xcite should also provide valuable insight into effects which may need to be implemented in the small - angle bhabha calculation to reach ilc precision specifications .\nfermion pair production plays a critical role in extracting precision electroweak physics from @xmath1 colliders .\nthis process is calculated by another yfs - exponentiated mc program , the @xmath0mc.@xcite again , the photonic radiative corrections play an essential role in calculating the yfs residuals through order @xmath9 .\nthese have been calculated exactly , including finite - mass corrections , for initial state and final state radiation.@xcite using helicity spinor techniques,@xcite a concise and stable representation for the @xmath5 initial or final state radiation amplitude has been obtained , including finite - mass corrections . the matrix element for hard photon initial - state emission with one virtual photon may be expressed as @xmath10 where @xmath11 is the tree - level matrix element for single hard photon emission , @xmath12 are scalar form factors and @xmath13 are spinor factors defined in ref .\n@xcite .\nthe single hard photon cross section is of particular interest in radiative return applications@xcite , where initial state radiation is used to reduce the effective beam energy , allowing a fixed energy machine to probe a range of energies .\na mc program phokhara was developed to calculate radiative return at @xmath14 and @xmath15 factories.@xcite the same radiative corrections are relevant for a high - energy @xmath1 collider investigating physics around the @xmath16 peak , for example .\nit is therefore useful to compare the radiative corrections obtained for both the @xmath0mc and phokhara in detail .\nboth calculations claim the same level of exactness , including the same diagrams as well as electron mass corrections relevant for collinear bremsstrahlung . we have compared the virtual corrections to initial state hard - photon emission calculated in ref .\n@xcite ( kr ) for phokhara to those calculated in ref .\n@xcite ( jmwy ) for the @xmath0mc in the case of muon pair production .\nanalytically , it was found that in the absence of mass corrections , both expressions agree to nll order ( @xmath3 in the integrated cross section).@xcite a compact expression for nll limit of the matrix element was obtained in ref .\n@xcite , where it was shown that the terms @xmath17 and @xmath18 in eqn .\n( [ vdef1 ] ) vanish to nll order , and the helicity - averaged nll limit of @xmath19 is @xmath20 where @xmath21 is the `` large logarithm '' in the leading log expansion , @xmath22 measures the inner product of one of the incoming fermion momenta @xmath23 with the hard photon momentum @xmath24 , sp@xmath25 = li@xmath26 is the dilogarithm ( spence ) function , and @xmath27 is the ir - divergent virtual yfs form factor . 
since the two results are known to agree with the nll limit calculated using eqn .\n( [ f0 ] ) , the nll limit is subtracted in each case , permitting the nnll contributions and collinear mass corrections to be investigated in the context of the @xmath0mc .\n2 shows the results of a @xmath0mc run calculating the nnll contribution to muon pair production at a cms energy of 500 gev for @xmath28 events , both with and without the mass corrections .\nthe cross - section is integrated up to a radiated photon energy fraction of @xmath29 ( with @xmath30 ) using the yfs residual @xmath31 for one hard photon at @xmath5 , subtracting the nll contribution obtained using eqn .\n( [ f0 ] ) .\nthe result is normalized with respect to the non - radiative born cross section for muon pair production .\nit is found that the maximum difference between the complete kr and jmwy results ( from the next - to - last bin ) is @xmath32 units of the born cross section .\nmost of this apparently comes from differences in the treatment of the mass corrections .\nkr uses an expansion in @xmath33 , while jmwy uses a technique developed by berends _\net al._@xcite for adding the mass corrections required in collinear limits to a calculation obtained using massless spinors . without mass corrections\n( comparing the massless points ) , the results agree to within a part per million . this agreement is better than noted previously@xcite due to improvements in the stability of the algorithms used .\ndirect comparisons of the phokhara and @xmath0mc programs have also been conducted.@xcite it is interesting to note that the size of the nnll part of the corrections implemented by jmwy never exceed @xmath34 up to @xmath35 , and reach @xmath36 in the last bin , where @xmath37 .\nthis suggests that for most purposes , the considerably simpler nll result represented by eqn .\n( [ f0 ] ) will suffice .\nthe drell - yan process plays a role at hadron colliders which is as basic as the bhabha scattering or pair production cross section at @xmath1 colliders .\nin fact , @xmath38 and @xmath16 production has been proposed as the luminosity process for the lhc.@xcite a fully exclusive calculation of the parton - level cross sections is needed at the @xmath39 level for upcoming lhc physics .\nthese cross sections are currently known at the @xmath40 level , using nlo matrix elements .\nwhile nnlo results are available for the integrated cross section@xcite and rapidity distribution@xcite , a fully - exclusive nnlo cross section needed for a mc event generator is not yet available .\nmoreover , electroweak radiative corrections will be required as well .\nreaching the desired lhc precision will require corrections of order @xmath41 and order @xmath42 , including mixed @xmath43 contributions .\nexamples of the latter diagrams are shown in fig .\n( 7.5,6.5 ) ( 1.8,3.5)(a ) ( 0,3.25 ) ( 5.6,3.5)(b ) ( 0,0 ) ( 1.8,0.25)(c ) ( 3.75,3.25 ) ( 5.6,0.25)(d ) ( 3.75,0 ) the hard parton - level processes must be calculated and combined with pdfs in a mc program designed to generate the desired distribution of partons plus mixed qcd and qed bremsstrahlung .\nthis will require a careful implementation of the multiple gluon and photon phase space .\nexperience in the electroweak sector suggests that yfs exponentiation will provide a strong tool for implementing the multiple - emission phase space , giving very precise control over the soft and collinear limits .\nthe large numbers of diagrams creates a challenge for obtaining an expression that can be 
evaluated quickly enough for mc implementation , and evaluated in a numerically stable manner .\ncommon reduction methods based on the passarino - veltman technique@xcite can produce millions of terms , which are both slow to evaluate in a mc setting , and prone to numerical instabilities due to the large numbers of terms added and potential cancellations among them .\nthus , a significant part of this problem will involve developing and testing new methods for organizing and calculating the terms in a stable manner .\nonce the parton - level matrix element is obtained , it may be incorporated into a `` qced - exponentiated '' mc program , implementing a procedure similar to yfs exponentiation in a combined qed and qcd setting to construct the exact phase space for multiple gluon and photon radiation .\nthis requires extending the yfs calculus to non - abelian gauge theory , with due care in handling the genuine non - abelian singularities which arise in qcd\n. the qcd and qed exponentiation will be conducted at order @xmath44 on an event - by - event basis in the presence of showers without double - counting of soft and collinear emission effects .\nfurther details of this construction will be presented elsewhere in these proceedings.@xcite\na careful calculation of higher - order bremsstrahlung corrections led to a precision tool ( bhlumi ) for bhabha luminosity calculations .\nincorporating the second order photonic corrections obtained to test bhlumi s precision may be enough to reach the 0.01% level proposed for the ilc . calculating the @xmath5 up - down interference contribution will help to clarify this .\na number of second - order bhabha scattering results are now available which should be useful for testing bhlumi s precision at this higher level .\nwe have also described cross - checks on the second - order photonic radiative corrections for fermion pair production developed for the @xmath0mc .\ncomparisons with similar initial - state radiative corrections developed for phokhara show agreement on the order of @xmath45 in units of the born cross section at ilc energies .\nremaining differences may be attributed primarily to differences in the handling of finite mass corrections in the collinear limits , and possible residual numerical instabilities for photons radiated near the fermion pair production threshold .\nfinally , we have outlined a program for carrying the successes of the yfs - exponentiated mc framework developed for slc and lep physics into the hadronic realm of the lhc , where a precise calculation of the parton - level diagrams can be combined with a combined qcd and qed exponentiation framework , with complete control of the multiple photon and gluon phase space , to develop a mc for the drell - yan process which can reach the lhc precision requirements .\nyost , s. majhi and b.f.l .\nward , proceedings of the 2005 international linear collider physics and detector workshop and second ilc accelerator workshop , snowmass ( 2005 ) , alcpg1911 ( hep - ph/0512022 ) ." }, { "role": "assistant", "content": "we describe radiative corrections to bremsstrahlung and their application to high energy collider physics , focusing on the applications to luminosity measurement , fermion pair production and radiative return . \n we review the status of one loop radiative corrections in bhlumi and the @xmath0mc , including cross checks with newer results developed independently for radiative return . 
\n we outline a yfs - exponentiated approach to the drell - yan process for lhc physics , including a discussion of the relevant radiative corrections ." } ]
science.mslr2022_ms2_multidoc_summarization
science.mslr2022_ms2_multidoc_summarization.6735
[ { "role": "user", "content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nMental Fatigue Increases Gait Variability During Dual-task Walking in Old Adults\nAbstract 1:\nBackground Mental fatigue is a psychobiological state induced by sustained periods of dem and ing cognitive activity and is characterized by feelings of tiredness which are common in everyday life . Recently , it has been hypothesized that mental fatigue might have an impact on gait performance in old adults . Therefore , the effect of mental fatigue on gait performance under single- and dual-task conditions was investigated in young and old participants . Methods Spatio-temporal gait parameters of 16 young and 16 old healthy participants were measured using a photoelectric system during single- and dual-task walking before and after a r and omly assigned mental fatigue ( performing a stop-signal task for 90 minutes ) and control intervention ( watching a video for 90 minutes ) , respectively . Changes in subjective fatigue , wakefulness , mood , arousal , and psychophysiological workload ( heart rate variability indices ) were assessed . Results Psychometric measures indicated increased subjective fatigue and arousal as well as decreased mood and wakefulness after the mental fatigue task . Heart rate variability indices revealed a higher psychophysiological workload during the mental fatigue intervention in old compared to young participants . Gait measures ( coefficient of variation of speed , stride length , and stance time ) revealed impaired dual-task walking performance following the mental fatigue intervention only in old participants . Conclusion Data indicate that mental fatigue , induced by sustained cognitive activity , can impair gait performance during dual-task walking in old adults . The susceptibility to mental fatigue could be a new intrinsic risk factor for falls in older people and should be taken into account when dual-task gait analyses are performed\n\n\nTitle 2:\nChanges in postural stability with fatigue of lower extremity frontal and sagittal plane movers.\nAbstract 2:\nThe purpose of this study was to quantify changes in postural stability with fatigue of the frontal and sagittal movers of the lower extremities . There were four test sessions , with a r and omized order assigned according to the muscles tested and the plane of motion . Subjects were 20 healthy men ( age : 22.6+/-2.4 years , height : 173.7+/-3.6 cm , weight : 63.3+/-7.9 kg ) . During each session , one set of muscle groups was fatigued using isokinetic contractions : ankle plantar/dorsi flexors , ankle evertor/invertors , hip flexor/extensors or hip abductor/adductors . The Biodex Stability System was used to assess anterior/posterior and medial/lateral stability before and after muscle fatigue . Repeated measures ANOVAs revealed that fatigue was associated with a significant increase in all stability indices . 
Fatigue of the hip movers , whether in the frontal or sagittal planes , led to greater increments in stability indices than fatigue of the ankle musculature . Fatigue of the frontal movers result ed in greater increases in the medial/lateral stability index compared to fatigue of the sagittal movers . In conclusion , fatigue of proximal lower extremity muscles affects postural stability and fatigue of the frontal movers is associated with postural instability in the frontal plane\n\n\nTitle 3:\nFatigue effects on muscle excitability.\nAbstract 3:\nRepeated isometric or shortening contractions of skeletal muscle cause muscle fatigue but several prior studies have reported an apparent absence of muscle fatigue when humans performed up to 70 lengthening contractions . We pursued the hypothesis that perhaps muscle excitability is a factor that aids force preservation with repeated eccentric actions . Soleus compound muscle action potential ( M-wave ) latency , peak-to-peak amplitude ( PPA ) , duration , and area under the curve were examined in 12 subjects ( mean age 24.3 y ) over 4 testing days that included : no exercise , isometric exercise ( neutral ankle angle ) , isokinetic ( 0.5 rad.s-1 ) concentric and eccentric exercise of the plantar flexors in the seated position on a Biodex dynamometer . Supramaximal shocks were delivered to the tibial nerve in the popliteal fossa at baseline ( 3 shocks , 1 min apart ) , during exercise ( 1 shock-after each of 5 bouts/10 contractions ) , and during 10-min recovery . From initial to final contractions , concentric , isometric , and eccentric fatigue was -32 , -41 and + 2 % ( Condition by Trial Interaction , F2,22 = 25.1 , p = 0.000 ) . No changes occurred in latency or duration ( p > 0.05 ) , but PPA ( Condition by Time interaction , F51,561 = 3.7 , p = 0.000 ) increased during isometric and eccentric exercise and remained elevated during recovery . Area increased ( F51,561 = 3.1 , p = 0.000 ) significantly during all three exercise conditions and approximated baseline by minute 8 of recovery . It was concluded that although the potentiation of the action potential of individual muscle fibers seems to be the common mechanism underlying the increase in muscle excitability during plantar flexion exercise , it is possible that different factors could cause such a non-specific response\n\n\nTitle 4:\nThe effect of lower limb muscle fatigue on obstacle negotiation during walking in older adults.\nAbstract 4:\nTripping over obstacles is a common cause of falls in older adults , and muscle fatigue , which can alter walking patterns , may add to this risk . To date , no study has examined the effect of lower limb muscle fatigue on obstacle negotiation in older adults . 30 older adults ( 13 women , aged 78.3 [ 6.2 ] years ) negotiated a 12 m obstacle course , while completing a visual secondary task , under two r and omized conditions : rested or fatigued . For the fatigue condition , participants performed a repeated sit-to-st and movement , as fast as possible , until they could no longer continue . Participants then immediately began walking trials . Kinematic and kinetic data were collected on approach to , during , and after crossing a height-adjustable target obstacle ( 10 % and 20 % of leg length ) . Repeated measures ANOVA showed a statistically significant increase in lead limb vertical loading rate after stepping over the 10 % obstacle when fatigued , relative to rested ( P=0.046 ) . 
No other significant between-condition differences ( > 0.05 ) were observed for the other kinematic variables when negotiating the 10 % obstacle . Furthermore , no significant between-condition differences ( P>0.05 ) were observed for any kinetic or kinematic variables when negotiating the 20 % obstacle . This study describes a feasible method for investigating the consequences of lower limb muscle fatigue on obstacle crossing . The current finding of increased vertical loading rate when fatigued supports the need for further investigation into the effect of muscle fatigue on gait under different environmental conditions , fatiguing a range of muscles , analyzing a more comprehensive array of kinetic and kinematic measures , and in healthy and clinical population\n\n\nTitle 5:\nMaximum voluntary activation in nonfatigued and fatigued muscle of young and elderly individuals.\nAbstract 5:\nBACKGROUND AND PURPOSE Research ers study ing central activation of muscles in elderly subjects ( > or = 65 years of age ) have investigated activation in only the nonfatigued state . This study examined the ability of young and elderly people to activate their quadriceps femoris muscles voluntarily under both fatigued and nonfatigued conditions to determine the effect of central activation failure on age-related loss of force . SUBJECTS AND METHODS Twenty young subjects ( 11 men , 9 women ; mean age = 22.67 years , SD = 4.14 , range = 18 - 32 years ) and 17 elderly subjects ( 8 men , 9 women ; mean age = 71.5 years , SD = 5.85 , range = 65 - 84 years ) participated in this study . Subjects were seated on a dynamometer and stabilized . Central activation was quantified , based on the change in force produced by a 100-Hz , 12-pulse electrical train that was delivered during a 3- to 5-second isometric maximum voluntary contraction ( MVC ) of the quadriceps femoris muscle . Next , subjects performed 25 MVCs ( a 5-second contraction with 2 seconds of rest ) to fatigue the muscle . During the last MVC , central activation was measured again . RESULTS In the nonfatigued state , elderly subjects had lower central activation than younger subjects . In the fatigued state , this difference became larger . DISCUSSION AND CONCLUSION Central activation of the quadriceps femoris muscle in elderly subjects was reduced in both the fatigued and nonfatigued states when compared with young subjects . Some part of age-related weakness , therefore , may be attributed to failure of central activation in both the fatigued and nonfatigued states\n\n\nTitle 6:\nEffects of unilateral leg muscle fatigue on balance control in perturbed and unperturbed gait in healthy elderly.\nAbstract 6:\nThis study assessed effects of unilateral leg muscle fatigue ( ULMF ) on balance control in gait during the stance and swing phases of the fatigued leg in healthy elderly , to test the assumption that leg muscle strength limits balance control during the stance-phase . Ten subjects ( aged 63.4 , SD 5.5 years ) walked on a treadmill in 4 conditions : unperturbed unfatigued , unperturbed fatigued , perturbed unfatigued , and perturbed fatigued . The perturbations were lateral trunk pulls just before contralateral heel contact . ULMF was evoked by unilateral squat exercise until task failure . Isometric knee extension strength was measured to verify the presence of muscle fatigue . Between-stride st and ard deviations and Lyapunov exponents of trunk kinematics were used as indicators of balance control . 
Required perturbation force and the deviation of trunk kinematics from unperturbed gait were used to assess perturbation responses . Knee extension strength decreased considerably ( 17.3 % SD 8.6 % ) as a result ULMF . ULMF did not affect steady-state gait balance . Less force was required to perturb subjects when the fatigued leg was in the stance-phase compared to the swing-phase . Subjects showed a faster return to the unperturbed gait pattern in the fatigued than in the unfatigued condition , after perturbations in swing and stance of the fatigued leg . The results of this study are not in line with the hypothesized effects of leg muscle fatigue on balance in gait . The healthy elderly subjects were able to cope with substantial ULMF during steady-state gait and demonstrated faster balance recovery after laterally directed mechanical perturbations in the fatigued than in the unfatigued condition\n\n\nTitle 7:\nMonitoring Muscle Fatigue Progression during Dynamic Exercise\nAbstract 7:\nABSTRACT Purpose To develop and vali date a novel model for assessment of peripheral fatigue progression during dynamic exercise . Methods Seven males and four females ( 24 ± 3 yr ) completed one-legged knee-extensor exercise at 40 % of peak power output ( PPO ) for 12 min ( 40 % PPO ) . Additionally , an exhaustive bout lasting approximately 6 min ( 84 % ± 2 % PPO ; 59 ± 10 W ) was contrasted to two bouts completed at ±5 % of the target workload . Trials were completed in r and omized order . Percutaneous electrical stimulation of m. vastus lateralis during the passive knee-flexion phase allowed quantification of maximal twitch force every 30 s in parallel with RPE . Results Elicited twitch force remained unchanged during the 40 % PPO trial . During the three exhaustive bouts , exercise differed in duration ( 561 ± 154 s , 366 ± 64 s , 245 ± 61 s ; P < 0.001 ) and amplitude of elicited twitch force showed a curvilinear decline across time . Elicited twitch force at exhaustion was approximately 60 % reduced and similar between the exhaustive trials ( intraclass correlation coefficient , 0.76 ; 95 % confidence interval , 0.48–0.92 ) . The increase in RPE during exercise was strongly correlated to the gradually reduced evoked twitch force ( repeated- measures correlation , 0.89 ; 95 % confidence interval , 0.62–0.97 ) . Conclusions The developed model permits quantification of muscle fatigue progression during continuous dynamic one-legged knee-extension and a biphasic fatigue pattern is demonstrated during intense exercise . The model is sensitive to small changes in intensity , and it provides a novel approach for study ing muscular mechanisms and their temporal relation to fatigue progression in vivo" }, { "role": "assistant", "content": "Conclusion Muscle performance fatigability affects spatial and temporal features of gait and , mainly , inter-stride trunk acceleration variability .\nIn contrast , sustained mental activity tends only to affect step variability during dual tasking ." } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.6025
[ { "role": "user", "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nSensing surroundings is ubiquitous and effortless to humans: It takes a single glance to extract the spatial configuration of objects and the free space from the scene. To help machine vision with spatial understanding capabilities, we introduce the View Parsing Network (VPN) for cross-view semantic segmentation. In this framework, the first-view observations are parsed into a top-down-view semantic map indicating precise object locations. VPN contains a view transformer module, designed to aggregate the first-view observations taken from multiple angles and modalities, in order to draw a bird-view semantic map. We evaluate the VPN framework for cross-view segmentation on two types of environments, indoors and driving-traffic scenes. Experimental results show that our model accurately predicts the top-down-view semantic mask of the visible objects from the first-view observations, as well as infer the location of contextually-relevant objects even if they are invisible.\n\nCited Abstract(s):\n[1]: We study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given function f, which accepts inputs in either domains, would remain unchanged. Other than the function f, the training data is unsupervised and consist of a set of samples from each domain. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity.\n\n[2]: We introduce a neural architecture for navigation in novel environments. Our proposed architecture learns to map from first-person views and plans a sequence of actions towards goals in the environment. The Cognitive Mapper and Planner (CMP) is based on two key ideas: a) a unified joint architecture for mapping and planning, such that the mapping is driven by the needs of the task, and b) a spatial memory with the ability to plan given an incomplete set of observations about the world. CMP constructs a top-down belief map of the world and applies a differentiable neural net planner to produce the next action at each time step. The accumulated belief of the world enables the agent to track visited regions of the environment. We train and test CMP on navigation problems in simulation environments derived from scans of real world buildings. Our experiments demonstrate that CMP outperforms alternate learning-based architectures, as well as, classical mapping and path planning approaches in many cases. Furthermore, it naturally extends to semantically specified goals, such as 'going to a chair'. 
We also deploy CMP on physical robots in indoor environments, where it achieves reasonable performance, even though it is trained entirely in simulation.\n\n[3]: Humans routinely retrace a path in a novel environment both forwards and backwards despite uncertainty in their motion. In this paper, we present an approach for doing so. Given a demonstration of a path, a first network generates an abstraction of the path. Equipped with this abstraction, a second network then observes the world and decides how to act in order to retrace the path under noisy actuation and a changing environment. The two networks are optimized end-to-end at training time. We evaluate the method in two realistic simulators, performing path following both forwards and backwards. Our experiments show that our approach outperforms both a classical approach to solving this task and a number of other baselines.\n\n[4]: With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a self-regularization term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.\n\n[5]: Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that have tried to either map representations between the two domains, or learn to extract features that are domain-invariant. In this work, we approach the problem in a new light by learning in an unsupervised manner a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. 
Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.\n\n[6]: Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.\n\n[7]: \n\n[8]: The cost of large scale data collection and annotation often makes the application of machine learning algorithms to new tasks or datasets prohibitively expensive. One approach circumventing this cost is training models on synthetic data where annotations are provided automatically. Despite their appeal, such models often fail to generalize from synthetic to real images, necessitating domain adaptation algorithms to manipulate these models before they can be successfully applied. Existing approaches focus either on mapping representations from one domain to the other, or on learning to extract features that are invariant to the domain from which they were extracted. However, by focusing only on creating a mapping or shared representation between the two domains, they ignore the individual characteristics of each domain. We hypothesize that explicitly modeling what is unique to each domain can improve a model's ability to extract domain-invariant features. Inspired by work on private-shared component analysis, we explicitly learn to extract image representations that are partitioned into two subspaces: one component which is private to each domain and one which is shared across domains. Our model is trained to not only perform the task we care about in the source domain, but also to use the partitioned representation to reconstruct the images from both domains. Our novel architecture results in a model that outperforms the state-of-the-art on a range of unsupervised domain adaptation scenarios and additionally produces visualizations of the private and shared representations enabling interpretation of the domain adaptation process.\n\n[9]: Harvesting dense pixel-level annotations to train deep neural networks for semantic segmentation is extremely expensive and unwieldy at scale. While learning from synthetic data where labels are readily available sounds promising, performance degrades significantly when testing on novel realistic data due to domain discrepancies. We present Dual Channel-wise Alignment Networks (DCAN), a simple yet effective approach to reduce domain shift at both pixel-level and feature-level. 
Exploring statistics in each channel of CNN feature maps, our framework performs channel-wise feature alignment, which preserves spatial structures and semantic information, in both an image generator and a segmentation network. In particular, given an image from the source domain and unlabeled samples from the target domain, the generator synthesizes new images on-the-fly to resemble samples from the target domain in appearance and the segmentation network further refines high-level features before predicting semantic maps, both of which leverage feature statistics of sampled images from the target domain. Unlike much recent and concurrent work relying on adversarial training, our framework is lightweight and easy to train. Extensive experiments on adapting models trained on synthetic segmentation benchmarks to real urban scenes demonstrate the effectiveness of the proposed framework.\n\n[10]: Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings. We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes demonstrating transfer from synthetic to real world domains.\n\n[11]: Developing visual perception models for active agents and sensorimotor control in the physical world are cumbersome as existing algorithms are too slow to efficiently learn in real-time and robots are fragile and costly. This has given rise to learning-in-simulation which consequently casts a question on whether the results transfer to real-world. In this paper, we investigate developing real-world perception for active agents, propose Gibson Environment for this purpose, and showcase a set of perceptual tasks learned therein. Gibson is based upon virtualizing real spaces, rather than artificially designed ones, and currently includes over 1400 floor spaces from 572 full buildings. The main characteristics of Gibson are: I. being from the real-world and reflecting its semantic complexity, II. having an internal synthesis mechanism \"Goggles\" enabling deploying the trained models in real-world without needing domain adaptation, III. embodiment of agents and making them subject to constraints of physics and space.\n\n[12]: We propose coupled generative adversarial network (CoGAN) for learning a joint distribution of multi-domain images. In contrast to the existing approaches, which require tuples of corresponding images in different domains in the training set, CoGAN can learn a joint distribution without any tuple of corresponding images. It can learn a joint distribution with just samples drawn from the marginal distributions. 
This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint distribution solution over a product-of-marginal-distributions one. We apply CoGAN to several joint distribution learning tasks, including learning a joint distribution of color and depth images, and learning a joint distribution of face images with different attributes. For each task it successfully learns the joint distribution without any tuple of corresponding images. We also demonstrate its applications to domain adaptation and image transformation.\n\n[13]: Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new goals, and (2) data inefficiency, i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to be applied to real-world scenarios. In this paper, we address these two issues and apply our model to target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows better generalization. To address the second issue, we propose the AI2-THOR framework, which provides an environment with high-quality 3D scenes and a physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently. We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), (4) is end-to-end trainable and does not need feature engineering, feature matching between frames or 3D reconstruction of the environment." }, { "role": "assistant", "content": "Given that current graphics simulation engines can render realistic scenes, recognition algorithms can be trained on data pulled from simulation engines (e.g., for visual navigation models [2] [4]1 [11] [7] [3]). Several techniques have been proposed to address the domain adaptation issue [9] [5] [4] [1] [12] [6] [8], when models trained with simulated images are transferred to real scenes [4]3. Rather than working on the task of visual navigation directly, our project aims at parsing the top-down-view semantic map from first-view observations. The resulting top-down-view map will further facilitate visual navigation and path planning." } ]
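The main abstract in the record above describes its "view transformer module" only at a high level. The sketch below is one plausible reading, not the authors' implementation: a learned linear map over flattened spatial positions rearranges a first-view feature map into top-down layout, and the transformed maps from several views are averaged. All shapes, the channel-shared weight matrix, and the mean fusion are illustrative assumptions.

```python
import numpy as np

def view_transform(feat, W):
    """feat: (C, H, W) first-view feature map; W: (H*W, H*W) learned map.
    Mixing flattened spatial positions lets the module re-arrange
    first-view evidence into a top-down layout."""
    C, H, Wd = feat.shape
    flat = feat.reshape(C, H * Wd)          # (C, HW)
    return (flat @ W).reshape(C, H, Wd)     # same channels, new layout

def aggregate_views(views, W):
    """Average the transformed maps from several first-view angles;
    a plain mean is an assumption standing in for the paper's fusion."""
    return np.mean([view_transform(v, W) for v in views], axis=0)

# Toy usage: 8 views, 64 channels, a 16x16 spatial grid.
rng = np.random.default_rng(0)
views = [rng.normal(size=(64, 16, 16)) for _ in range(8)]
W = rng.normal(size=(256, 256)) / 16.0
top_down = aggregate_views(views, W)        # (64, 16, 16)
```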
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.7184
[ { "role": "user", "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nImage retrieval based on deep convolutional features has demonstrated state-of-the-art performance in popular benchmarks. In this paper, we present a unified solution to address deep convolutional feature aggregation and image re-ranking by simulating the dynamics of heat diffusion. A distinctive problem in image retrieval is that repetitive or features tend to dominate final image representations, resulting in representations less distinguishable. We show that by considering each deep feature as a heat source, our unsupervised aggregation method is able to avoid over-representation of features. We additionally provide a practical solution for the proposed aggregation method and further show the efficiency of our method in experimental evaluation. Inspired by the aforementioned deep feature aggregation method, we also propose a method to re-rank a number of top ranked images for a given query image by considering the query as the heat source. Finally, we extensively evaluate the proposed approach with pre-trained and fine-tuned deep networks on common public benchmarks and show superior performance compared to previous work.\n\nCited Abstract(s):\n[1]: Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network which leads to more robust performance. Second, we introduce in the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method.\n\n[2]: We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. 
We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com Deep-Image-Retrieval.\n\n[3]: We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following three principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the \"Vector of Locally Aggregated Descriptors\" image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we develop a training procedure, based on a new weakly supervised ranking loss, to learn parameters of the architecture in an end-to-end manner from images depicting the same places over time downloaded from Google Street View Time Machine. Finally, we show that the proposed architecture significantly outperforms non-learnt image representations and off-the-shelf CNN descriptors on two challenging place recognition benchmarks, and improves over current state-of-the-art compact image representations on standard image retrieval benchmarks.\n\n[4]: This publicly available curated dataset of almost 100 million photos and videos is free and legal for all.\n\n[5]:" }, { "role": "assistant", "content": "Fine-tuning an off-the-shelf network is also popular for improving retrieval quality. For instance, there are a number of approaches that learn features for the specific task of landmark retrieval [5] [2] [3] [1] . While fine-tuning a pre-trained model is usually preceded by extensive manual annotation, Radenovic [5] introduced an unsupervised fine-tuning of CNN for image retrieval from a large collection of unordered images in a fully automated manner. Similar to this work, the methods presented in [3] [1] overcame laborious annotation, and collected training data in a weakly supervised manner. More specifically, Arandjelovic [3] proposed a new network architecture, NetVLAD, that was trained for place recognition in an end-to-end manner from weakly supervised Google Street View Time Machine images. Cao [1] trained a special architecture Quartet-net by harvesting data automatically from GeoPair [4] . We show that our feature weighting, and image re-ranking approach, while not requiring extra supervision, performs favorably compared to these previous methods." } ]
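The record above describes the heat-diffusion idea (each deep feature acts as a heat source, and repetitive or bursty features should not dominate the aggregate) without giving equations. As a rough, hypothetical illustration only: the inverse-accumulated-similarity weighting below is a stand-in for the paper's actual diffusion solution, and the Gaussian-like kernel and sigma value are assumptions.

```python
import numpy as np

def weighted_aggregate(feats, sigma=0.1, eps=1e-8):
    """feats: (N, D) L2-normalized deep features from one image.
    Features whose 'heat' spreads into many close neighbours (bursty
    regions) receive smaller weights, so repetitive structures do not
    dominate the global descriptor."""
    sims = feats @ feats.T                   # cosine similarities, (N, N)
    heat = np.exp((sims - 1.0) / sigma)      # kernel: 1 on the diagonal
    received = heat.sum(axis=1)              # accumulated heat per feature
    w = 1.0 / (received + eps)               # dense/bursty -> small weight
    g = (w[:, None] * feats).sum(axis=0)     # weighted sum, (D,)
    return g / (np.linalg.norm(g) + eps)     # re-normalize the descriptor
```

The same weighting viewpoint suggests the re-ranking variant: treat the query descriptor as the heat source and score database images by how much heat they receive.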
science.bioasq_factoid_qa
science.bioasq_factoid_qa.1138
[ { "role": "user", "content": "You will be shown biomedical passages and your task is to the answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: What is a bacteriocin?\n\nContext: One such strategy involves using narrow-spectrum protein antibiotics (so-called bacteriocins), which diverse bacteria use to compete against closely related species.\nMany enterococcal strains produce bacteriocins, which could be useful as natural food preservatives through inhibition of pathogenic and spoilage microorganisms.\nBacteriocins, the ribosomally produced antimicrobial peptides of bacteria, represent an untapped source of promising antibiotic alternatives" }, { "role": "assistant", "content": "antimicrobial peptide of bacteria" } ]
science.scitldr_aic
science.scitldr_aic.876
[ { "role": "user", "content": "Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.\n\nPaper text:\n\nPrevious work shows that adversarially robust generalization requires larger sample complexity, and the same dataset, e.g., CIFAR-10, which enables good standard accuracy may not suffice to train robust models.\nSince collecting new training data could be costly, we focus on better utilizing the given data by inducing the regions with high sample density in the feature space, which could lead to locally sufficient samples for robust learning.\nWe first formally show that the softmax cross-entropy (SCE) loss and its variants convey inappropriate supervisory signals, which encourage the learned feature points to spread over the space sparsely in training.\nThis inspires us to propose the Max-Mahalanobis center (MMC) loss to explicitly induce dense feature regions in order to benefit robustness.\nNamely, the MMC loss encourages the model to concentrate on learning ordered and compact representations, which gather around the preset optimal centers for different classes.\nWe empirically demonstrate that applying the MMC loss can significantly improve robustness even under strong adaptive attacks, while keeping state-of-the-art accuracy on clean inputs with little extra computation compared to the SCE loss.\nThe deep neural networks (DNNs) trained by the softmax cross-entropy (SCE) loss have achieved state-of-the-art performance on various tasks (Goodfellow et al., 2016) .\nHowever, in terms of robustness, the SCE loss is not sufficient to lead to satisfactory performance of the trained models.\nIt has been widely recognized that the DNNs trained by the SCE loss are vulnerable to adversarial attacks (Carlini & Wagner, 2017a; Goodfellow et al., 2015; Kurakin et al., 2017; Papernot et al., 2016) , where human imperceptible perturbations can be crafted to fool a high-performance network.\nTo improve adversarial robustness of classifiers, various kinds of defenses have been proposed, but many of them are quickly shown to be ineffective to the adaptive attacks, which are adapted to the specific details of the proposed defenses .\nBesides, the methods on verification and training provably robust networks have been proposed (Dvijotham et al., 2018a; b; Hein & Andriushchenko, 2017; .\nWhile these methods are exciting, the verification process is often slow and not scalable.\nAmong the previously proposed defenses, the adversarial training (AT) methods can achieve state-of-the-art robustness under different adversarial settings Zhang et al., 2019b) .\nThese methods either directly impose the AT mechanism on the SCE loss or add additional regularizers.\nAlthough the AT methods are relatively strong, they could sacrifice accuracy on clean inputs and are computationally expensive (Xie et al., 2019) .\nDue to the computational obstruction, many recent efforts have been devoted to proposing faster verification methods Xiao et al., 2019) and accelerating AT procedures (Shafahi et al., 2019; Zhang et al., 2019a) .\nHowever, the problem still remains.\nshow that the sample complexity of robust learning can be significantly larger than that of standard learning.\nGiven the difficulty of training robust classifiers in practice, they further postulate that the difficulty could stem from the insufficiency of training samples in the commonly 
used datasets, e.g., CIFAR-10 (Krizhevsky & Hinton, 2009) .\nRecent work intends to solve this problem by utilizing extra unlabeled data (Carmon et al., 2019; Stanforth et al., 2019) , while we focus on the complementary strategy to exploit the labeled data in hand more efficiently.\nNote that although the samples in the input space are unchangeable, we could instead manipulate the local sample distribution, i.e., sample density in the feature space via appropriate training objectives.\nIntuitively, by inducing high-density feature regions, there would be locally sufficient samples to train robust classifiers and return reliable predictions .\n!\"\"# ∈ %&, %& + ∆% (low sample density) !\"\"# ∈ %*, %* + ∆% (high sample density) + , * !\n.#/ ∈ %&, %& + ∆% (medium sample density) !.#/ ∈ %*, %* + ∆% (medium sample density)\nIn this paper, we formally demonstrate that applying the softmax function in training could potentially lead to unexpected supervisory signals.\nTo solve this problem, we propose the MMC loss to learn more structured representations and induce high-density regions in the feature space.\nIn our experiments, we empirically demonstrate several favorable merits of our method:\n(i) Lead to reliable robustness even under strong adaptive attacks in different threat models;\n(ii) Keep high performance on clean inputs comparable TO SCE;\n(iii) Introduce little extra computation compared to the SCE loss;\n(iv) Compatible with the existing defense mechanisms, e.g., the AT methods.\nOur analyses in this paper also provide useful insights for future work on designing new objectives beyond the SCE framework." }, { "role": "assistant", "content": "Applying the softmax function in training leads to indirect and unexpected supervision on features. We propose a new training objective to explicitly induce dense feature regions for locally sufficient samples to benefit adversarial robustness." } ]
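The MMC loss summarized in the record above admits a compact sketch. The simplex construction below yields class centers with equal norm and inner product -C^2/(L-1) for every distinct pair, which matches the Max-Mahalanobis criterion up to rotation; treat it as an illustration rather than the paper's exact recursive procedure, and note that the radius value and the dim >= L requirement are assumptions of this sketch.

```python
import numpy as np

def max_mahalanobis_centers(num_classes, dim, radius=10.0):
    """L maximally separated class centers of norm `radius` in R^dim
    (dim >= L): scaled vertices of a regular simplex, satisfying
    <mu_i, mu_j> = -radius^2 / (L - 1) for i != j."""
    L = num_classes
    assert dim >= L
    v = np.eye(L) - np.ones((L, L)) / L     # centered basis vectors
    v *= np.sqrt(L / (L - 1))               # rows now have unit norm
    mu = np.zeros((L, dim))
    mu[:, :L] = radius * v                  # embed into the feature space
    return mu

def mmc_loss(features, labels, centers):
    """Mean squared distance between each feature and its class center;
    minimizing it pulls same-class features into a dense region."""
    diff = features - centers[labels]       # (B, dim)
    return 0.5 * np.mean(np.sum(diff * diff, axis=1))

# Sanity check: all centers share the preset norm.
mu = max_mahalanobis_centers(10, 256)
assert np.allclose(np.linalg.norm(mu, axis=1), 10.0)
```

At inference, classification reduces to picking the nearest center, which is consistent with the dense, well-separated feature regions the loss is designed to induce.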
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.3470
[ { "role": "user", "content": "You are presented with a paragraph from a COVID-19 literature. Classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable topics within the array. Your output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nBACKGROUND: The COVID-19 pandemic continues to have a significant impact on the provision of medical care. Planning to ensure there is capability to treat those that become ill with the virus has led to an almost complete moratorium on elective work. This study evaluates the impact of COVID-19 on cancer, in particular surgical intervention, in patients with esophago-gastric cancer at a high-volume tertiary center. METHODS: All patients undergoing potential management for esophago-gastric cancer from 12 March to 22 May 2020 had their outcomes reviewed. Multi-disciplinary team (MDT) decisions, volume of cases, and outcomes following resection were evaluated. RESULTS: Overall 191 patients were discussed by the MDT, with a 12% fall from the same period in 2019, including a fall in new referrals from 120 to 83 (P = 0.0322). The majority of patients (80%) had no deviation from the pre-COVID-19 pathway. Sixteen patients had reduced staging investigations, 4 had potential changes to their treatment only, and 10 had a deviation from both investigation and potential treatment. Only one patient had palliation rather than potentially curative treatment. Overall 19 patients underwent surgical resection. Eight patients (41%) developed complications with two (11%) graded Clavien-Dindo 3 or greater. Two patients developed COVID-19 within a month of surgery, one spending 4 weeks in critical care due to respiratory complications; both recovered. Twelve patients underwent endoscopic resections with no complications. CONCLUSION: Care must be taken not to compromise cancer treatment and outcomes during the COVID-19 pandemic. Excellent results can be achieved through meticulous logistical planning, good communication, and maintaining high-level clinical care." }, { "role": "assistant", "content": "[\"Prevention\"]" } ]
science.anat_em_ner
science.anat_em_ner.86
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"AnatomicalEntity\"].\n\nPlease return the output as a JSON object of the format: {\"AnatomicalEntity\": [\"tumor cells\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nIntervention\nPatients' initial introduction to the Wrist Extension Dynasplint (WED) system, [Dynasplint Systems, Inc., Severna Park, MD, USA] included customized fitting (wrist length, width, and girth so that the force and counter force straps could be properly aligned) and training on donning and doffing of the device. (See figure 1.) Verbal and written instructions were provided throughout the duration of treatment for safety, general wear and care, and tension setting goals based on patient tolerance.\nFigure 1\nWrist Extension Dynasplint.\nEach patient initially wore the WED for 4-6 continuous hours at an initial tension setting of #2 (0.1 foot pounds of torque). This duration was for acclimatization to the system; then patients were instructed to wear the WED system at night while sleeping for 6-8 hours of continuous wear. After each patient was comfortable wearing the unit for one week at tension level #2, they were instructed to increase the tension level to #3 (0.3 ft lbs.) and make continual increases every two weeks. If prolonged soreness followed a session (soreness for more than 15 minutes) the patient was instructed to decrease the tension one half a setting for two days until they were comfortable wearing it for 6-8 hours at the new tension setting. The majority of all patients reached level #5 (0.8 foot pounds of torque) by the end of two months. All range of motion measurements were recorded by the prescribing clinician." }, { "role": "assistant", "content": "{\"AnatomicalEntity\": [\"Wrist\", \"wrist\"]}" } ]
science.scitldr_aic
science.scitldr_aic.317
[ { "role": "user", "content": "Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.\n\nPaper text:\n\nDeep neural networks work well at approximating complicated functions when provided with data and trained by gradient descent methods.\nAt the same time, there is a vast amount of existing functions that programmatically solve different tasks in a precise manner eliminating the need for training.\nIn many cases, it is possible to decompose a task to a series of functions, of which for some we may prefer to use a neural network to learn the functionality, while for others the preferred method would be to use existing black-box functions.\nWe propose a method for end-to-end training of a base neural network that integrates calls to existing black-box functions.\nWe do so by approximating the black-box functionality with a differentiable neural network in a way that drives the base network to comply with the black-box function interface during the end-to-end optimization process.\nAt inference time, we replace the differentiable estimator with its external black-box non-differentiable counterpart such that the base network output matches the input arguments of the black-box function.\nUsing this ``Estimate and Replace'' paradigm, we train a neural network, end to end, to compute the input to black-box functionality while eliminating the need for intermediate labels.\nWe show that by leveraging the existing precise black-box function during inference, the integrated model generalizes better than a fully differentiable model, and learns more efficiently compared to RL-based methods.\nEnd-to-end supervised learning with deep neural networks (DNNs) has taken the stage in the past few years, achieving state-of-the-art performance in multiple domains including computer vision BID25 , natural language processing BID23 BID10 , and speech recognition BID29 .\nMany of the tasks addressed by DNNs can be naturally decomposed to a series of functions.\nIn such cases, it might be advisable to learn neural network approximations for some of these functions and use precise existing functions for others.\nExamples of such tasks include Semantic Parsing and Question Answering.\nSince such a decomposition relies partly on precise functions, it may lead to a superior solution compared to an approximated one based solely on a learned neural model.\nDecomposing a solution into trainable networks and existing functions requires matching the output of the networks to the input of the existing functions, and vice-versa.\nThe input and output are defined by the existing functions' interface.\nWe shall refer to these functions as black-box functions (bbf), focusing only on their interface.\nFor example, consider the question: \"Is 7.2 greater than 4.5?\"\nGiven that number comparison is a solved problem in symbolic computation, a natural solution would be to decompose the task into a two-step process of\n(i) converting the natural language to an executable program, and\n(ii) executing the program on an arithmetic module.\nWhile a DNN may be a good fit forWe propose an alternative approach called Estimate and Replace that finds a differentiable function approximation, which we term black-box estimator, for estimating the black-box function.\nWe use the black-box estimator as a proxy to the original black-box function during training, and by that 
allow the learnable parts of the model to be trained using gradient-based optimization.\nWe compensate for not using any intermediate labels to direct the learnable parts by using the black-box function as an oracle for training the black-box estimator.\nDuring inference, we replace the black-box estimator with the original non-differentiable black-box function.\nEnd-to-end training of a solution composed of trainable components and black-box functions poses several challenges we address in this work: coping with non-differentiable black-box functions, fitting the network to call these functions with the correct arguments, and doing so without any intermediate labels.\nTwo more challenges are the lack of prior knowledge on the distribution of inputs to the black-box function, and the use of gradient-based methods when the function approximation is near perfect and gradients are extremely small.\nThis work is organized as follows: In Section 2, we formulate the problem of decomposing the task to include calls to a black-box function.\nSection 3 describes the network architecture and training procedures.\nIn Section 4, we present experiments and comparison to Policy Gradient-based RL, and to fully neural models.\nWe further discuss the potential and benefits of the modular nature of our approach in Section 6.\nInterpretability via Composability: Lipton (2016) identifies composability as a strong contributor to model interpretability.\nThey define composability as the ability to divide the model into components and interpret them individually to construct an explanation from which a human can predict the model's output.\nThe Estimate and Replace approach solves the black-box interface learning problem in a way that is modular by design.\nAs such, it provides an immediate interpretability benefit.\nTraining a model to comply with a well-defined and well-known interface inherently supports model composability and, thus, directly contributes to its interpretability.\nFor example, suppose you want to let a natural language processing model interface with a WordNet service to receive additional synonym and antonym features for selected input words.\nBecause the WordNet interface is interpretable, the intermediate output of the model to the WordNet service (the words for which the model requested additional features) can serve as an explanation to the model's final prediction.\nKnowing which words the model chose to obtain additional features for gives insight to how it made its final decision.\nReusability via Composability: An additional clear benefit of model composability in the context of our solution is reusability.\nTraining a model to comply with a well-defined interface induces well-defined module functionality which is a necessary condition for module reuse.\nCurrent solutions for learning using black-box functionality in neural network prediction have critical limitations which manifest themselves in at least one of the following aspects:\n(i) poor generalization,\n(ii) low learning efficiency,\n(iii) under-utilization of available optimal functions, and\n(iv) the need for intermediate labels.\nIn this work, we proposed an architecture, termed EstiNet, and a training and deployment process, termed Estimate and Replace, which aim to overcome these limitations.\nWe then showed empirical results that validate our approach.\nEstimate and Replace is a two-step training and deployment approach by which we first estimate a given black-box functionality to allow end-to-end training via back-propagation, and
then replace the estimator with its concrete black-box function at inference time.\nBy using a differentiable estimation module, we can train an end-to-end neural network model using gradient-based optimization.\nWe use labels that we generate from the black-box function during the optimization process to compensate for the lack of intermediate labels.\nWe show that our training process is more stable and has lower sample complexity compared to policy gradient methods.\nBy leveraging the concrete black-box function at inference time, our model generalizes better than end-to-end neural network models.\nWe validate the advantages of our approach with a series of simple experiments.\nOur approach implies a modular neural network that enjoys added interpretability and reusability benefits.\nFuture Work: We limit the scope of this work to tasks that can be solved with a single black-box function.\nSolving the general case of this problem requires learning of multiple black-box interfaces, along unbounded successive calls, where the final prediction is a computed function over the output of these calls.\nThis introduces several difficult challenges.\nFor example, computing the final prediction over a set of black-box functions, rather than a direct prediction of a single one, requires an additional network output module.\nThe input of this module must be compatible with the output of the previous layer, be it an estimation function at training time, or its black-box function counterpart at inference time, which belong to different distributions.\nWe reserve this area of research for future work.\nAs difficult as it is, we believe that artificial intelligence does not lie in mere knowledge, nor in learning from endless data samples.\nRather, much of it is in the ability to extract the right piece of information from the right knowledge source for the right purpose.\nThus, training a neural network to intelligibly interact with black-box functions is a leap forward toward stronger AI." }, { "role": "assistant", "content": "Training DNNs to interface w/ black box functions w/o intermediate labels by using an estimator sub-network that can be replaced with the black box after training" } ]
science.craftchem_ner
science.craftchem_ner.3644
[ { "role": "user", "content": "You will be presented a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nTransmissionelectronmicrographsofCrx-/-photoreceptorsinthephotoreceptorlayeratP7." }, { "role": "assistant", "content": "[\"electron\"]" } ]
science.chemprot_re
science.chemprot_re.147
[ { "role": "user", "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nTargeting protein-protein interactions within the cyclic AMP signaling system as a therapeutic strategy for cardiovascular disease. The cAMP signaling system can trigger precise physiological cellular responses that depend on the fidelity of many protein-protein interactions, which act to bring together signaling intermediates at defined locations within cells. In the heart, cAMP participates in the fine control of excitation-contraction coupling, hence, any disregulation of this signaling cascade can lead to cardiac disease. Due to the ubiquitous nature of the cAMP pathway, general inhibitors of cAMP signaling proteins such as PKA, EPAC and PDEs would act non-specifically and universally, increasing the likelihood of serious 'off target' effects. Recent advances in the discovery of peptides and small molecules that disrupt the protein-protein interactions that underpin cellular targeting of cAMP signaling proteins are described and discussed." }, { "role": "assistant", "content": "[]" } ]
science.bioasq_factoid_qa
science.bioasq_factoid_qa.401
[ { "role": "user", "content": "You will be shown biomedical passages and your task is to the answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: What is the function of the spliceosome complex?\n\nContext: The spliceosome machinery is composed of multimeric protein complexes that generate a diverse repertoire of mRNA through coordinated splicing of heteronuclear RNAs.\nSplicing and alternate splicing are the two key biological processes that result in the generation of diverse transcript and protein isoforms in Plasmodium falciparum as well as in other eukaryotic organisms.\nSpliceosome assembly belongs to the key processes that enable splicing of mRNA and modulate alternative splicing.\nThe excision of introns from nascent eukaryotic transcripts is catalyzed by the spliceosome, a highly complex and dynamic macromolecular machine composed of RNA and protein.\nRecent advances have begun to provide exciting new insights into the dynamic interactions that govern the function of the spliceosome, the multi-megadalton complex that performs splicing." }, { "role": "assistant", "content": "The excision of introns from nascent eukaryotic transcripts is catalyzed by the spliceosome." } ]
science.chia_ner
science.chia_ner.148
[ { "role": "user", "content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Observation\", \"Condition\", \"Procedure\", \"Drug\", \"Temporal\", \"Multiplier\", \"Qualifier\", \"Negation\", \"ReferencePoint\", \"Scope\", \"Value\", \"Mood\", \"Device\", \"Visit\", \"Measurement\", \"Person\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\" : [\"breast cancer\", ...], \"Device\" : [\"ports\", ...], \"Drug\" : [\"Atenolol\", ...], \"Measurement\" : [\"blood pressure\", ...], \"Mood\" : [\"suspected\", ...], \"Multiplier\" : [\"more than two\", ...], \"Negation\" : [\"no\", ...], \"Observation\" : [\"history\", ...], \"Person\" : [\"male\", ...], \"Procedure\" : [\"surgery\", ...], \"Qualifier\" : [\"severe\", ...], \"ReferencePoint\" : [\"enrollment\", ...], \"Scope\" : [\"infection\", ...], \"Temporal\" : [\"one day\", ...], \"Value\" : [\"50%\", ...], \"Visit\" : [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nPatients with any other primary DSM-IV psychiatric diagnosis in addition to Obsessive Compulsive Disorder.\nPatients who currently fulfil criteria for DSM-IV eating disorder, body dysmorphic disorder, current alcohol or substance abuse, or who have a lifetime history of bipolar disorder. Patients with a history of Schizophrenia and other psychotic disorders, Delirium, Dementia, and Amnestic and other cognitive disorders.\nSubjects with a concurrent Axis II Cluster A Personality Disorder\nBorderline or Antisocial Personality Disorder.\nSubjects who based on history or mental status examination have a significant risk of committing suicide, in the investigator's opinion.\nSubjects with a history of more than three adequate trials with an SSRI.\nSubjects who have had an adequate trial of pregabalin.\nSubjects who have initiated psychotherapy in the last 4 months prior to the first visit.\nSubjects who, during the course of the study, would be likely to require treatment with prohibited concomitant therapy .\nPrior use of or a known allergy or hypersensitivity to pregabalin.\nSubjects who have participated in any clinical trial within 30 days prior to entering the study, or in a clinical trial involving a psychotropic medication within the 6 months prior to entering the study.\nAny subject who has been taking benzodiazepines before entering the study who: 1) cannot tolerate being free of benzodiazepines for 4 weeks, or 2) has signs or symptoms of benzodiazepine withdrawal or rebound at the end of those 4 weeks. Should a patient entering the study, who is currently on benzodiazepines develop discontinuation symptoms with discontinuation of their benzodiazepine, we will treat these symptoms with a more gradual benzodiazepine taper. 
Study will be delayed until the patient is able to tolerate the discontinuation for 4 weeks.\nPatients with a current seizure disorder, organic brain disorder or a history of seizure disorders (except for febrile seizures in childhood).\nPatients with thyroid pathology, the treatment of which has not been stabilized for at least three months.\nPatients on neuroleptic drugs in the two months prior to study entry or cognitive behavioural therapy specific to OCD within four weeks of study entry\nPregnant or lactating females, or if sexually active and of childbearing potential, not using adequate methods of birth control.\nPatients with a history or evidence of a medical condition that would expose them to an increased risk of a significant adverse event or interfere with assessments of safety and efficacy during the trial.\nPatients receiving psychotropics of any kind, including betablockers and other anticonvulsants. Sleep medication such as oral chloral-hydrate or zopiclone are acceptable.\nPatients using any herbal psychoactive treatments, e.g. St John's Wort, Valerian, Kava Kava, L-tryptophan.\nPatients with any condition or on any therapy that, in the investigator's opinion, or as indicated in the pregabalin product label, may pose a risk to the subject.\nPatients who have had a major life event in the past three months, which in the judgement of the investigator is influencing their current condition.\nPatients having clinically significant abnormal laboratory, or ECG findings not resolved by further examinations." }, { "role": "assistant", "content": "{\"Condition\": [\"Borderline or Antisocial Personality Disorder\", \"organic brain disorder\", \"bipolar disorder\", \"Amnestic\", \"hypersensitivity\", \"substance abuse\", \"Obsessive Compulsive Disorder\", \"Dementia\", \"eating disorder\", \"OCD\", \"alcohol abuse\", \"body dysmorphic disorder\", \"allergy\", \"seizure disorder\", \"febrile seizures\", \"Personality Disorder\", \"psychiatric diagnosis\", \"psychotic disorders\", \"cognitive disorders\", \"Antisocial Personality Disorder\", \"thyroid pathology\", \"Delirium\", \"history of seizure disorders\", \"Schizophrenia\"], \"Device\": [], \"Drug\": [\"St John\\u0027s Wort\", \"Kava Kava\", \"zopiclone\", \"anticonvulsants\", \"Sleep medication\", \"pregabalin\", \"Valerian\", \"psychotropics\", \"neuroleptic drugs\", \"L-tryptophan\", \"betablockers\", \"chloral-hydrate\"], \"Measurement\": [\"findings not resolved by further examinations\", \"ECG findings\"], \"Mood\": [], \"Multiplier\": [], \"Negation\": [\"except\", \"not\", \"acceptable\", \"in addition to\"], \"Observation\": [\"risk of committing suicide\"], \"Person\": [], \"Procedure\": [\"cognitive behavioural therapy\", \"mental status examination\", \"herbal psychoactive treatments\", \"psychotherapy\", \"treatment\"], \"Qualifier\": [\"childhood\", \"significant\", \"primary\", \"any other\", \"other\", \"oral\", \"Axis II Cluster A\", \"stabilized\", \"DSM-IV\"], \"ReferencePoint\": [\"first visit\", \"study entry\"], \"Scope\": [\"any other primary DSM-IV\", \"eating disorder, body dysmorphic disorder, current alcohol or substance abuse\", \"oral chloral-hydrate or zopiclone\", \"betablockers and other anticonvulsants\", \"allergy or hypersensitivity\", \"St John\\u0027s Wort, Valerian, Kava Kava, L-tryptophan\", \"laboratory, or ECG findings\"], \"Temporal\": [\"within four weeks of study entry\", \"in the last 4 months prior to the first visit\", \"at least three months\", \"in the two months prior to study entry\"], \"Value\": [\"significant abnormal\"], \"Visit\": []}" } ]
science.healthver_entailment
science.healthver_entailment.1659
[ { "role": "user", "content": "You will be shown a claim about public health, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: Hydroxychloroquine is a medication that doctors prescribe to treat a variety of conditions, including malaria, arthritis, and lupus.\n\nAbstract:\nBackground: There is no effective therapy for COVID-19.\nHydroxychloroquine (HCQ) and chloroquine (CQ) have been used for its treatment but their safety and efficacy remain uncertain.\nObjective: We performed a systematic review to synthesize the available data on the efficacy and safety of CQ and HCQ for the treatment of COVID-19.\nMethods: Two reviewers searched for published and pre-published relevant articles between December 2019 to 8th June 2020.\nThe data from the selected studies were abstracted and analyzed for efficacy and safety outcomes.\nCritical appraisal of the evidence was done by Cochrane risk of bias tool and Newcastle Ottawa scale.\nThe quality of evidence was graded as per the GRADE approach.\nResults: We reviewed 12 observational and 3 randomized trials which included 10659 patients of whom 5713 received CQ/HCQ and 4966 received only standard of care.\nThe efficacy of CQ/HCQ for COVID-19 was inconsistent across the studies.\nMeta-analysis of included studies revealed no significant reduction in mortality with HCQ use [RR 0.98 95% CI 0.66-1.46] , time to fever resolution [mean difference -0.54 days (-1.19-011)] or clinical deterioration/development of ARDS with HCQ [RR 0.90 95% CI 0.47-1.71].\nThere was a higher risk of ECG abnormalities/arrhythmia with HCQ/CQ [RR 1.46 95% CI 1.04 to 2.06].\nThe quality of evidence was graded as very low for these outcomes.\nConclusions: The available evidence suggests that CQ or HCQ does not improve clinical outcomes in COVID-19.\nWell-designed randomized trials are required for assessing the efficacy and safety of HCQ and CQ for COVID-19." }, { "role": "assistant", "content": "{\"verdict\": \"NEI\", \"evidence\": []}" } ]
science.healthver_entailment
science.healthver_entailment.4783
[ { "role": "user", "content": "You will be shown a claim about public health, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: remdesivir was superior to placebo in shortening the time to recovery in adults who were hospitalized with Covid-19 and had evidence of lower respiratory tract infection. \n\nAbstract:\nBackground Coronavirus disease 2019 (COVID-19), caused by the novel coronavirus SARS-CoV-2, has led to significant global mortality and morbidity.\nUntil now, no treatment has proven to be effective in COVID-19.\nTo explore whether the use of remdesivir, initially an experimental broad-spectrum antiviral, is effective in the treatment of hospitalized patients with COVID-19, we conducted a systematic review and meta-analysis of randomized, placebo-controlled trials investigating its use.\nMethods A rapid search of the MEDLINE and EMBASE medical databases was conducted for randomized controlled trials.\nA systematic approach was used to screen, abstract, and critically appraise the studies.\nGrading of Recommendations Assessment, Development, and Evaluation (GRADE) method was applied to rate the certainty and quality of the evidence reported per study.\nResults Two RCTs studies were identified (n=1,299).\nA fixed-effects meta-analysis revealed reductions in mortality (RR=0.69, 0.49 to 0.99), time to clinical improvement (3.95 less days, from 3.86 days less to 4.05 less days ), serious adverse events (RR=0.77, 0.63 to 0.94) and all adverse events (RR=0.87, 0.79 to 0.96).\nConclusion In this rapid systematic review, we present pooled evidence from the 2 included RCT studies that reveal that remdesivir has a modest yet significant reduction in mortality and significantly improves the time to recovery, as well as significantly reduced risk in adverse events and serious adverse events.\nIt is more than likely that as an antiviral, remdesivir is not sufficient on its own and may be suitable in combination with other antivirals or treatments such as convalescent plasma.\nResearch is ongoing to clarify and contextual these promising findings." }, { "role": "assistant", "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"It is more than likely that as an antiviral, remdesivir is not sufficient on its own and may be suitable in combination with other antivirals or treatments such as convalescent plasma.\"]}" } ]
science.chemprot_ner
science.chemprot_ner.505
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Protein\", \"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\" : [\"Dexamethasone\", ...], \"Protein\" : [\"BRCA-1\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nEscitalopram.\nEscitalopram oxalate (S-citalopram, Lexapro), a selective serotonin re-uptake inhibitor antidepressant which is the S-enantiomer of citalopram, is in clinical development worldwide for the treatment of depression and anxiety disorders. Preclinical studies demonstrate that the therapeutic activity of citalopram resides in the S-isomer and that escitalopram binds with high affinity to the human serotonin transporter. Conversely, R-citalopram is approximately 30-fold less potent than escitalopram at this transporter. Escitalopram has linear pharmacokinetics, so that plasma levels increase proportionately and predictably with increased doses and its half-life of 27 - 32 h is consistent with once-daily dosing. In addition, escitalopram has negligible effects on cytochrome P450 drug-metabolising enzymes in vitro, suggesting a low potential for drug-drug interactions. The efficacy of escitalopram in patients with major depressive disorder has been demonstrated in multiple short-term, placebo-controlled clinical trials, three of which included citalopram as an active control, as well as in a 36-week study evaluating efficacy in the prevention of depression relapse. In these studies, escitalopram was shown to have robust efficacy in the treatment of depression and associated symptoms of anxiety relative to placebo. Efficacy has also been shown in treating generalised anxiety disorder, panic disorder and social anxiety disorder. Results also suggest that, at comparable doses, escitalopram demonstrates clinically relevant and statistically significant superiority to placebo treatment earlier than citalopram. Analysis of the safety database shows a low rate of discontinuation due to adverse events, and there was no statistically significant difference between escitalopram 10 mg/day and placebo in the proportion of patients who discontinued treatment early because of adverse events. The most common adverse events associated with escitalopram which occurred at a rate greater than placebo include nausea, insomnia, ejaculation disorder, diarrhoea, dry mouth and somnolence. Only nausea occurred in > 10% of escitalopram-treated patients." }, { "role": "assistant", "content": "{\"Chemical\": [\"citalopram\", \"Escitalopram oxalate\", \"serotonin\", \"S-enantiomer of citalopram\", \"R-citalopram\", \"Lexapro\", \"S-citalopram\", \"Escitalopram\", \"escitalopram\"], \"Protein\": [\"human serotonin transporter\", \"cytochrome P450\"]}" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.27096
[ { "role": "user", "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nThis paper investigates the problem of modeling Internet images and associated text or tags for tasks such as image-to-image search, tag-to-image search, and image-to-tag search (image annotation). We start with canonical correlation analysis (CCA), a popular and successful approach for mapping visual and textual features to the same latent space, and incorporate a third view capturing high-level image semantics, represented either by a single category or multiple non-mutually-exclusive concepts. We present two ways to train the three-view embedding: supervised, with the third view coming from ground-truth labels or search keywords; and unsupervised, with semantic themes automatically obtained by clustering the tags. To ensure high accuracy for retrieval tasks while keeping the learning process scalable, we combine multiple strong visual features and use explicit nonlinear kernel mappings to efficiently approximate kernel CCA. To perform retrieval, we use a specially designed similarity function in the embedded space, which substantially outperforms the Euclidean distance. The resulting system produces compelling qualitative results and outperforms a number of two-view baselines on retrieval tasks on three large-scale Internet image datasets.\n\nCited Abstract(s):\n[1]: We are interested in large-scale image classification and especially in the setting where images corresponding to new or existing classes are continuously added to the training set. Our goal is to devise classifiers which can incorporate such images and classes on-the-fly at (near) zero cost. We cast this problem into one of learning a metric which is shared across all classes and explore k-nearest neighbor (k-NN) and nearest class mean (NCM) classifiers. We learn metrics on the ImageNet 2010 challenge data set, which contains more than 1.2M training images of 1K classes. Surprisingly, the NCM classifier compares favorably to the more flexible k-NN classifier, and has comparable performance to linear SVMs. We also study the generalization performance, among others by using the learned metric on the ImageNet-10K dataset, and we obtain competitive performance. Finally, we explore zero-shot classification, and show how the zero-shot model can be combined very effectively with small training datasets.\n\n[2]: Automatically assigning keywords to images is of great interest as it allows one to index, retrieve, and understand large collections of image data. Many techniques have been proposed for image annotation in the last decade that give reasonable performance on standard datasets. However, most of these works fail to compare their methods with simple baseline techniques to justify the need for complex models and subsequent training. In this work, we introduce a new baseline technique for image annotation that treats annotation as a retrieval problem. The proposed technique utilizes low-level image features and a simple combination of basic distances to find nearest neighbors of a given image. 
The keywords are then assigned using a greedy label transfer mechanism. The proposed baseline outperforms the current state-of-the-art methods on two standard and one large Web dataset. We believe that such a baseline measure will provide a strong platform to compare and better understand future annotation techniques.\n\n[3]: Automatic image annotation aims at predicting a set of textual labels for an image that describe its semantics. These are usually taken from an annotation vocabulary of few hundred labels. Because of the large vocabulary, there is a high variance in the number of images corresponding to different labels (\"class-imbalance\"). Additionally, due to the limitations of manual annotation, a significant number of available images are not annotated with all the relevant labels (\"weak-labelling\"). These two issues badly affect the performance of most of the existing image annotation models. In this work, we propose 2PKNN, a two-step variant of the classical K-nearest neighbour algorithm, that addresses these two issues in the image annotation task. The first step of 2PKNN uses \"image-to-label\" similarities, while the second step uses \"image-to-image\" similarities; thus combining the benefits of both. Since the performance of nearest-neighbour based methods greatly depends on how features are compared, we also propose a metric learning framework over 2PKNN that learns weights for multiple features as well as distances together. This is done in a large margin set-up by generalizing a well-known (single-label) classification metric learning algorithm for multi-label prediction. For scalability, we implement it by alternating between stochastic sub-gradient descent and projection steps. Extensive experiments demonstrate that, though conceptually simple, 2PKNN alone performs comparable to the current state-of-the-art on three challenging image annotation datasets, and shows significant improvements after metric learning.\n\n[4]: Image auto-annotation is an important open problem in computer vision. For this task we propose TagProp, a discriminatively trained nearest neighbor model. Tags of test images are predicted using a weighted nearest-neighbor model to exploit labeled training images. Neighbor weights are based on neighbor rank or distance. TagProp allows the integration of metric learning by directly maximizing the log-likelihood of the tag predictions in the training set. In this manner, we can optimally combine a collection of image similarity metrics that cover different aspects of image content, such as local shape descriptors, or global color histograms. We also introduce a word specific sigmoidal modulation of the weighted neighbor tag predictions to boost the recall of rare words. We investigate the performance of different variants of our model and compare to existing work. We present experimental results for three challenging data sets. On all three, TagProp makes a marked improvement as compared to the current state-of-the-art." }, { "role": "assistant", "content": "The data-driven image annotation approaches of [4] [2] [3] use discriminative learning to obtain a metric or a weighting of different features to improve the relevance of database images retrieved for a query. Unfortunately, the learning stage is very computationally expensive -- for example, in the TagProp method of [4] , it scales quadratically with the number of images. 
In fact, the standard datasets used for image annotation by [2] [4] [3] consist of 5K-20K images and have 260-290 tags each. By contrast, our datasets (shown in Figure ) range in size from 71K to 270K and have tag vocabularies of size 1K-20K. While it is possible to develop scalable metric learning algorithms using stochastic gradient descent (e.g., [1] ), our work shows that learning a linear embedding using CCA can serve as a simpler attractive alternative." } ]
science.mslr2022_cochrane_multidoc_summarization
science.mslr2022_cochrane_multidoc_summarization.2151
[ { "role": "user", "content": "You will be provided with the titles and abstracts of a collection of medical RCT reports studying a clinical question -- for instance, the efficacy of an intervention on some medical outcome. Please write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article. Your response should not include any other text. The RCT reports are below.\n\n\n\nTitle 1:\nRemoxipride versus thioridazine in elderly psychotic patients.\nAbstract 1:\nA total of 18 hospitalized elderly psychotic patients in need of antipsychotic treatment took part in a double-blind exploratory study concerning the safety, tolerability and efficacy of remoxipride and thioridazine. Their median age was 78 years (range 66-90 years). Over the study period of 6 weeks, 9 patients received remoxipride and 9 thioridazine. Both drugs were given in a dosage range of 50-200 mg/day. The median total score on the Brief Psychiatric Rating Scale at the start of active treatment was 18 in the remoxipride group and 24 in the thioridazine group. The scores were reduced to 6 and 7, respectively, at the last rating. The Clinical Global Impression at last rating showed 4 of the 9 remoxipride patients to be very much improved compared to 1 of the 9 thioridazine patients. Apart from three reports of severe drowsiness in the thioridazine group, the incidence of treatment-emergent adverse effects was low in both groups. No clinically significant aberrations were seen in laboratory tests or cardiovascular data. In conclusion, remoxipride seemed to be well tolerated in this group of patients and its antipsychotic efficacy in the doses used is promising.\n\n\nTitle 2:\nInternational multisite double-blind trial of the atypical antipsychotics risperidone and olanzapine in 175 elderly patients with chronic schizophrenia.\nAbstract 2:\nThe authors compared the effects of the two most commonly used atypical antipsychotics, risperidone and olanzapine, in elderly patients with schizophrenia.\n In an 8-week, international, double-blind study, patients (outpatients, hospital inpatients, and residents of nursing or boarding homes) were randomly assigned to receive risperidone (1 mg to 3 mg/day) or olanzapine (5 mg to 20 mg/day). The main outcome measures were changes in Positive and Negative Syndrome Scale (PANSS) total scores and rates of extrapyramidal symptoms (EPS).\n Subjects were 175 patients age 60 years or over with schizophrenia or schizoaffective disorder. The mean duration of illness was 36.5 years. Median doses were 2 mg/day of risperidone and 10 mg/day of olanzapine. PANSS total scores and four of the five PANSS factor scores (positive symptoms, negative symptoms, disorganized thoughts, and anxiety/depression) improved significantly at all time-points and at endpoint in both groups; between-treatment differences were not significant. EPS-related adverse events were reported by 9.2% of patients in the risperidone group and 15.9% in the olanzapine group; the between-treatment difference was not significant. Total scores on the Extrapyramidal Symptom Rating Scale were reduced in both groups at endpoint; between-treatment differences were not significant. 
Clinically relevant weight gain was seen in both groups, but was significantly less frequent in risperidone patients than in olanzapine patients.\n Stable elderly patients with chronic schizophrenia receiving appropriate doses of risperidone or olanzapine over an 8-week period experienced significant reductions in the severity of psychotic and extrapyramidal symptoms, with a relatively low risk of side effects." }, { "role": "assistant", "content": "Antipsychotics may be widely used in the treatment of elderly people with schizophrenia, however, based on this systematic review, there are little robust data available to guide the clinician with respect to the most appropriate drug to prescribe. Clearly reported large short, medium and long-term randomised controlled trials with participants, interventions and primary outcomes that are familiar to those wishing to help elderly people with schizophrenia are long overdue." } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.48256
[ { "role": "user", "content": "You will be presented with the full text of a biomedical research paper. Please write an abstract for this paper. Your response should include the abstract and no additional text.\n\nPaper text:\nthis investigation was part of a trial evaluating a(h1n1)pdm09 transmission among household contacts , conducted during the first wave of the 2009 pandemic ( may july 2009 ) in quebec city , quebec , canada ( 5 ) .\nclinical data and samples were serially obtained from index case - patients and their contacts in 42 households .\nnasopharyngeal secretions were collected from all participants during the first visit and tested by 2 different assays : a conventional reverse transcription pcr ( rt - pcr ) targeting the hemagglutinin gene of a(h1n1)pdm09 virus ( 6 ) and a universal rt - pcr targeting the matrix gene of all influenza a viruses ( 7 ) .\nblood was collected from persons > 7 years of age at their initial visit ( acute - phase sample ) and 34 weeks later ( convalescent - phase sample ) .\nserum was tested by microneutralization assay according to world health organization standard protocols with minor modifications ( 8) . this serologic study comprised 122 persons from the 42 households .\ntwenty - four persons were rt - pcr confirmed index case - patients ( median age 15 years , range 756 years ) , and 98 were household contacts ( median age 30.5 years , range 761 years ) , of whom 34 also were positive for a(h1n1)pdm09 virus by rt - pcr . for 67 patients ( median age 20 years , range 761 years ) , a(h1n1)pdm09 was confirmed by rt - pcr and/or microneutralization assay : 10 ( 15% ) by rt - pcr alone , 9 ( 13% ) by microneutralization assay alone , and 48 ( 72% ) by rt - pcr and microneutralization assay .\nof the 67 a(h1n1)pdm09-infected persons , 8 ( 12% ) seroconverted to a / brisbane/59/2007 ( a[h1n1 ] vaccine strain for 200809 ) ( table a1 ) .\nseven a / brisbane/59/2007 seroconverters were rt - pcr positive and a(h1n1)pdm09 seroconverters , and 1 was rt - pcr positive and a a(h1n1)pdm09 nonseroconverter . in comparison , 1 ( 2% ) of the 55 a(h1n1)pdm09-negative patients seroconverted to a / brisbane/59/2007 ( fisher exact test , p<0.05 ) .\nseasonal influenza viruses were not circulating in the province of quebec at the time of this study .\nonly 1 of 9 a / brisbane/59/2007 seroconverters had previously received the inactivated 200809 seasonal influenza vaccines .\nno participants were vaccinated against a(h1n1)pdm09 virus , and none received antiviral therapy or prophylaxis .\nwe then assessed whether this cross - reactivity was limited to the a / brisbane/59/2007 strain , the most recent seasonal a ( h1n1 ) virus to have circulated before a(h1n1)pdm09 virus , or whether it was broader . to this end , we tested all paired serum samples against an older seasonal a ( h1n1 ) influenza virus , i.e. , a / new caledonia/20/1999 ( h1n1 vaccine strain used during the 200001 through 200607 seasons ) , and a past a(h3n2 ) virus , i.e. , a / panama/7/2004 ( h3n2 vaccine component used during the 200001 through 200304 seasons ) . seven ( 10% ) a(h1n1)pdm09 virus \npositive persons also seroconverted to a / new caledonia/20/1999(h1n1 ) , all of whom were rt - pcr - positive and a(h1n1)pdm09 virus seroconverters , whereas none of the a(h1n1)pdm09 virus negative persons seroconverted to this older strain ( p<0.05 ) . 
on the other hand , seroconversion rates for a / panama/7/2004(h3n2 ) did not differ significantly between a(h1n1)pdm09 virus - positive ( 9% ) and -negative ( 5% ) patients .\nin addition , we identified 4 ( 6% ) persons with laboratory - confirmed a(h1n1)pdm09 virus infections who seroconverted to both seasonal ( h1n1 ) viruses and 2 ( 3% ) who seroconverted to a / brisbane/59/2007(h1n1 ) and a / panama/2007/99(h3n2 ) ( table ) .\nparticipant 44c , a household contact of a confirmed case - patient with a negative rt - pcr for a(h1n1)pdm09 and low antibody titers in the convalescent - phase serum , showed cross - neutralizing antibodies meeting 4-fold seroconversion criteria for a / brisbane/59/2007(h1n1 ) and a / panama/7/2004(h3n2 ) . * rt - pcr , reverse transcription pcr ; ari , acute respiratory illness ( i.e. , presence of 2 of the following signs / symptoms : fever [ 37.8c ] or feverishness , cough , sore throat , or rhinorrhea ) ; ili , influenza - like illness ( i.e. , fever and cough and/or sore throat ) ; + , positive ; , negative . received seasonal vaccine in 200809 .\nduring this study , the only influenza virus detected in the province of quebec was a(h1n1)pdm09 virus . yet , 8 ( 12% ) of 67 a(h1n1)pdm09 virus \ninfected persons in our study had a concomitant significant increase in microneutralization antibody titers against the most recent a / brisbane/59/2007(h1n1 ) strain , of whom 5 persons had 48-fold , 2 had 16-fold , and 1 had 32-fold rises .\nin addition , 4 of these 8 persons also seroconverted to an older a / new caledonia/20/1999(h1n1 ) virus , of whom 3 persons had 4-fold and 1 had 16-fold rises between acute - phase and convalescent - phase serum .\nthe cross - reactivity observed in the study population does not seem to be completely subtype specific because some persons also showed rising titers against an old influenza a ( h3n2 ) strain ( a / panama/2007/99 ) , although in this case , seroconversion rates did not differ significantly between a(h1n1)pdm09 virus positive and negative persons .\na recent study in hong kong of 28 paired serum samples showed that infection with the pandemic virus could broaden cross - reactive immunity to other recent subtype h1 swine viruses .\nin contrast to our study , perhaps because of the small number of participants or older age of a(h1n1)pdm09 virus positive case - patients ( 30.5 vs. 20 years ) , no cross - reactive response was shown against the more recent seasonal influenza virus a / hk/400599/2008(h1n1 ) ( 9 ) . we could not determine the extent to which past seasonal influenza vaccinations and/or natural infections contributed to the generation of cross - neutralizing antibodies to a(h1n1)pdm09 and the seasonal influenza strains .\nour next step will be to investigate potential cross - neutralizing determinants between these seasonal and pandemic viruses .\nneutralizing antibodies that bind to the stalk region of ha2 have been shown to confer broad cross - neutralizing activity against several subtypes of viruses across clades and to provide protection in animal models ( 10,11 ) .\nsix of the 8 persons who seroconverted to a / brisbane/59/2007(h1n1 ) by microneutralization assay did not meet the 4-fold criteria by hemagglutinin inhibition assay ( data not shown ) , suggesting that the cross - reactivity might result from conserved epitopes in the stalk region of ha2 or in other proteins .\ngreenbaum et al . 
recently showed that , overall , 49% of the epitopes reported in recently circulating seasonal a ( h1n1 ) strains were conserved in the a(h1n1)pdm09 virus ( 12 ) . specifically ,\n31% , 41% , and 69% of the b - cell , cd4 and cd8 t - cell epitopes , respectively , were conserved . natural infection with a(h1n1)pdm09 virus also could have elicited cross - reactive responses against internal components of older viral strains ( 13 ) .\nwe find intriguing the elaboration of cross - reactive neutralizing antibodies to more recently circulating influenza a ( h1n1 ) strains as a result of novel a(h1n1)pdm09 virus infection , whereas the reverse has not generally been evident in serosurveys for cross - reactive a(h1n1)pdm09 antibody , except in elderly persons who had substantial cross - reactive antibodies to a(h1n1)pdm09 virus ( 14,15 ) .\nunfortunately , because of the small sample size of our study and lack of serum from children < 7 years of age , we could not assess whether cross - reactivity was an age - dependent phenomenon .\nhowever , all but 3 of the cross - reactive seroconverters ( 13/16 [ 81% ] ) were 730 years of age . to explore preferential responses to the original infecting virus ( original antigenic sin ) , we assessed cross - reactivity for the older a / new caledonia/99(h1n1 ) virus that was potentially the priming antigen for some of our younger participants or was closely related to the priming antigen in older participants .\nhowever , seroconversion rates for a / new caledonia ( 10% ) were comparable to those of the more recent a / brisbane/59/2007(h1n1 ) strain ( 12% ) , and thus we could not distinguish original antigenic sin on that basis .\nthese antigens may have been too closely related antigenically to demonstrate that in this young cohort .\nour work supports the notion that natural a(h1n1)pdm09 virus infection induces broad heterosubtypic ( h1 and even h3 ) responses .\nit also highlights the need for further investigation of the mechanisms behind cross - protection because they could be keys to creating improved influenza vaccines with broader protection ." }, { "role": "assistant", "content": "we looked for cross - reactive antibodies in 122 persons with paired serum samples collected during the 2009 pandemic of influenza virus a(h1n1)pdm09 . \n eight ( 12% ) of 67 persons with a(h1n1)pdm09 infection confirmed by reverse transcription pcr and/or serology also seroconverted to the seasonal a / brisbane/59/2007 ( h1n1 ) virus , compared with 1 ( 2% ) of 55 a(h1n1)pdm09-negative persons ( p<0.05 ) ." } ]
science.medmentions_ner
science.medmentions_ner.507
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"HealthCareActivity\", \"InjuryOrPoisoning\", \"BodySubstance\", \"IntellectualProduct\", \"AnatomicalStructure\", \"SpatialConcept\", \"Chemical\", \"Bacterium\", \"MedicalDevice\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Finding\", \"BiologicFunction\", \"Virus\", \"ResearchActivity\", \"ClinicalAttribute\", \"PopulationGroup\", \"Eukaryote\", \"BodySystem\", \"Food\", \"ProfessionalOrOccupationalGroup\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nLong noncoding RNA XIST acts as an oncogene in non-small cell lung cancer by epigenetically repressing KLF2 expression Recently, long noncoding RNAs (lncRNAs) have been identified as critical regulators in numerous types of cancers, including non-small cell lung cancer (NSCLC). X inactivate-specific transcript (XIST) has been found to be up-regulated and acts as an oncogene in gastric cancer and hepatocellular carcinoma, but little is known about its expression pattern, biological function and underlying mechanism in NSCLC. Here, we identified XIST as an oncogenic lncRNA by driving tumorigenesis in NSCLC. We found that XIST is over- expressed in NSCLC, and its increased level is associated with shorter survival and poorer prognosis. Knockdown of XIST impaired NSCLC cells proliferation, migration and invasion in vitro, and repressed the tumorigenicity of NSCLC cells in vivo. Mechanistically, RNA immune-precipitation (RIP) and RNA pull-down experiment demonstrated that XIST could simultaneously interact with EZH2 to suppress transcription of its potential target KLF2. Additionally, rescue experiments revealed that XIST 's oncogenic functions were partly depending on silencing KLF2 expression. Collectively, our findings expound how XIST over- expression endows an oncogenic function in NSCLC." 
}, { "role": "assistant", "content": "{\"AnatomicalStructure\": [\"oncogene\", \"oncogenic\", \"regulators\"], \"Bacterium\": [], \"BiologicFunction\": [\"silencing\", \"biological function\", \"expressed\", \"cancers\", \"gastric cancer\", \"cells proliferation\", \"hepatocellular carcinoma\", \"migration\", \"expression\", \"NSCLC\", \"invasion\", \"tumorigenesis\", \"tumorigenicity\", \"non-small cell lung cancer\", \"transcription\", \"epigenetically repressing\", \"up-regulated\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"lncRNAs\", \"Long noncoding RNA\", \"lncRNA\", \"RNA\", \"EZH2\", \"long noncoding RNAs\", \"XIST\", \"X inactivate-specific transcript\", \"KLF2\"], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"shorter survival\", \"poorer prognosis\", \"findings\"], \"Food\": [], \"HealthCareActivity\": [\"immune-precipitation\", \"RIP\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"experiments\", \"Knockdown\", \"pull-down experiment\"], \"SpatialConcept\": [\"pattern\"], \"Virus\": []}" } ]
science.chia_ner
science.chia_ner.111
[ { "role": "user", "content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Observation\", \"Condition\", \"Procedure\", \"Drug\", \"Temporal\", \"Multiplier\", \"Qualifier\", \"Negation\", \"ReferencePoint\", \"Scope\", \"Value\", \"Mood\", \"Device\", \"Visit\", \"Measurement\", \"Person\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\" : [\"breast cancer\", ...], \"Device\" : [\"ports\", ...], \"Drug\" : [\"Atenolol\", ...], \"Measurement\" : [\"blood pressure\", ...], \"Mood\" : [\"suspected\", ...], \"Multiplier\" : [\"more than two\", ...], \"Negation\" : [\"no\", ...], \"Observation\" : [\"history\", ...], \"Person\" : [\"male\", ...], \"Procedure\" : [\"surgery\", ...], \"Qualifier\" : [\"severe\", ...], \"ReferencePoint\" : [\"enrollment\", ...], \"Scope\" : [\"infection\", ...], \"Temporal\" : [\"one day\", ...], \"Value\" : [\"50%\", ...], \"Visit\" : [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nASA 1 \nASA 2 \nPts have current treatment plan at OHSU for extraction of some or all of remaining teeth and scheduled for delivery of a removable appliance post extraction \nTeeth used are able to be isolated with rubber dam \nUnderstand and sign consent form" }, { "role": "assistant", "content": "{\"Condition\": [\"ASA 2\", \"ASA 1\"], \"Device\": [], \"Drug\": [], \"Measurement\": [], \"Mood\": [\"scheduled for\"], \"Multiplier\": [], \"Negation\": [], \"Observation\": [\"treatment plan at OHSU\"], \"Person\": [], \"Procedure\": [], \"Qualifier\": [], \"ReferencePoint\": [], \"Scope\": [], \"Temporal\": [], \"Value\": [], \"Visit\": []}" } ]
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.103175
[ { "role": "user", "content": "You will be shown the title of a chemistry paper, together with sections excerpted from the body of the paper. The paper sections may include HTML tags like `<p>` and `</p>`; please ignore these. Your task is to write an abstract for the paper. Your response should include the abstract and no other text.\n\nTitle:\nPhotodissociation Mass Spectrometry Accurately Localizes Sites of Backbone Deuteration in Peptides\n\nPaper sections:\n\n<p>Detailed insight into the dynamic conformation of proteins is critical for understanding and deciphering the molecular mechanisms of most cellular processes. 1 Hydrogen/deuterium exchange mass spectrometry (HDX-MS) has become a popular method for studying the conformation, dynamics and interactions of proteins in solution under a variety of conditions. [2][3][4] The method tracks the H/D exchange of protein backbone amides and thus provides a sensitive analytical probe of the hydrogen bonding networks and the solvation of local higher-order structure. An experimental approach to increase the spatial resolution of an HDX-MS experiment is to subject peptides (bottomup) or intact proteins (top-down) to gas-phase fragmentation (MS/MS) inside the mass spectrometer prior to mass analysis. 5 This conceptionally simple HDX-MS/MS approach, however, requires that the deuterium labeling of backbone amides from solution remains unperturbed during ionization and gas-phase dissociation. [6][7][8] The common fragmentation technique of collision induced dissociation (CID) as well as infrared multiphoton dissociation (IRMPD), increase the internal energy of a given ion on a relatively slow timescale (µs-ms) through multiple collisions or absorption of low energy photons, thus mobilizing charging protons prior to covalent bond dissociation. 9,10 Both techniques have been shown to randomize the selective deuterium labeling of backbone amides in peptides and proteins, here referred to as H/D scrambling. 7,8 In contrast, fast (picosecond) fragmentation by electron capture dissociation (ECD) was found to proceed without H/D scrambling in peptides and proteins. 11,12 The related technique of electron transfer dissociation (ETD) also proceeds without H/D scrambling, 13 and was in a proof-of-concept study shown to provide a more accessible LC-compatible alternative to ECD, for high-resolution HDX-MS/MS analysis of proteins. 14 Subsequent applications to other proteins and protein-ligand/protein-protein complexes have further underscored the use of ETD to increase the spatial resolution of HDX-MS experiments. [15][16][17][18] Use of a more efficient fragmentation technique could greatly empower the HDX-MS/MS approach. Ultraviolet photodissociation (UVPD) has been shown to provide efficient and highly informative fragmentation of protonated peptides and proteins, producing primarily a/x ions, but also c/z and b/y ions as well as side-chain cleavages (v, w, and d ions). UVPD of peptides and proteins have been implemented at different wavelengths ranging from 157 nm to 266 nm. At wavelengths above 266 nm, excitation happens primarily for peptides containing aromatic residues. 19 However, when using higher energy UV photons at lower wavelengths, the primary accessible chromophore becomes the amide group (with transitions to π* either from π at 193 nm or from n at 213 nm). 
9,20,21 The absorption of a single photon at these wavelengths deposits a high amount of energy (7.9 eV at 157 nm, 6.4 eV at 193 nm, and 5.8 eV at 213 nm) which exceeds peptide bond energies of 3-4 eV 20 resulting in comprehensive fragmentation of both peptides and intact proteins, largely independent of ion charge and size. 9 Furthermore, UVPD occurs without charge-reduction, thus greatly enabling detection of both N- and C-terminal fragments following dissociation. 20 The increase in availability and ease of implementation of UV lasers on commercial mass spectrometers, combined with the ability of UV lasers to deliver very short high-energy pulses that deposit high energies into peptide and protein ions on an LC-compatible timescale, makes UVPD a promising method to empower both bottom-up and top-down HDX-MS experiments. It is however critical to first establish if fragment ions from UVPD retain the selective solution-phase deuterium labeling of backbone amides in a polypeptide.</p><p>Here we investigate the occurrence of H/D scrambling at backbone amide positions during UVPD of a regioselectively deuterium-labeled 12-mer peptide, peptide P1 (HHHHHHIIKIIK). 22 For this peptide, differences in the intrinsic H/D exchange rates of the constituent amino acids allow deuterium to be retained solely in the backbone amide residues in the C-terminal part (-IIKIIK) for several minutes in solution. 22 Upon ionization, gas-phase fragmentation and mass analysis, the deuterium content can be measured for both intact precursor ion and MS/MS fragment ions, which enables a sensitive quantitation of the level of H/D scrambling. UVPD was performed on peptide P1 using a 213 nm laser operated at 1 kHz and delivering 2.5 µJ/pulse, which was implemented on a commercially available ion mobility enabled high-resolution Q-TOF mass spectrometer (Figure 1a). The triply-protonated peptide P1 was mass selected in the quadrupole, trapped in the trap T-Wave ion guide and photoactivated for 600 ms. The photofragments are then extracted to the TOF for mass analysis (Figure S-1). UVPD resulted in multiple singly or doubly charged fragment ions, primarily a/x and c/z ions of high abundance, b/y ions of low abundance, a single d ion, and a His immonium ion/a1 ion (Figure 1b-d). The relative abundance of the different fragment ions is plotted in Figure S-2. The deuterium content of all fragment ions of sufficient signal-to-noise ratio (S/N>15) was used for quantitating H/D scrambling (Figure 1d in bold). Further details concerning experimental settings and data analysis are provided in the Supporting Information.</p><p>To avoid collisional activation and thus H/D scrambling in the front-end of the mass spectrometer prior to UVPD of the labeled peptide, a systematic optimization of ion source parameters was performed to facilitate gentle ionization and ion transport. ETD has been shown to proceed inherently without H/D scrambling. 13,23 By performing ETD in the trap T-Wave of the Q-TOF instrument we quantitated the influence of several instrument parameters on H/D scrambling for the triply-protonated peptide P1 prior to fragmentation (Figure S-3). Four parameters had a significant impact on H/D scrambling (Table S-1). At elevated settings, the \"sampling cone\" induced complete randomization of the deuterium label of peptide P1, in good correlation to earlier work. 
24 The other three important parameters \"cone gas\", \"source offset\", and \"StepWave 2 offset\", the latter two unique to the StepWave ion guide of this Q-TOF instrument, induced a moderate increase in scrambling of 15-25% at elevated settings (Figure S-3). The four parameters were tuned to define \"mild\" ion source settings that allowed maximum ion transmission while retaining minimal scrambling (<10%) in ETD-generated c/z ions. \"Harsh\" ion source settings were similarly defined that induced scrambling in ETD-generated c/z ions (Table S-1). The harsh conditions were obtained from the mild conditions by increasing the electrostatic potential of the sampling cone from 20 V to 105 V. UVPD was performed for labeled peptide P1 under both mild and harsh conditions (Figure 2). The deuterium content of a, x, and c ions, as well as a single z ion resulting from UVPD at mild conditions was plotted as a function of fragment ion length and compared to theoretical models in case of either 0% or 100% scrambling (Figure 2). Results from one a ion series are shown in Figure 2d, with data from replicate measurements and other fragment ion types shown in Figure S-4. Evidently, a, c, x, and z fragment ions from UVPD at mild ion source conditions reproducibly retained the regioselective labeling pattern of the intact peptide from solution. The deuterium contents of all detected fragment ions correlated closely with the theoretical model for 0% scrambling (<10% scrambling). The fact that both a/x and c/z ions were generated by UVPD at 213 nm without H/D scrambling suggests that the a/x and c/z fragments were generated on a comparably fast timescale through mechanisms that do not involve reversible protonation or hydrogen migration of backbone amide groups. In contrast, the deuterium content of a/x and c/z fragment ions generated through UVPD of labeled peptide P1 at harsh ion source conditions (Figures 2e and S-5a) correlated closely with the model for 100% scrambling (>85% scrambling). This control experiment validated the applied analytical approach by demonstrating the ability of UVPD-generated fragment ions of the selectively labeled peptide to report the occurrence of H/D scrambling.</p><p>We next compared the occurrence of H/D scrambling in labeled peptide P1 following UVPD, ETD, or CID performed under identical mild ion source conditions (Figures 3 and S-5). ETD yielded c/z fragment ions with a similarly low degree of H/D scrambling (8% ± 3%) to a/x and c/z fragment ions from UVPD. We assign the minimal residual scrambling level detected (<10%) to method uncertainty 6 and minor inherent H/D scrambling occurring during ion transmission in the mass spectrometer at mild ion source conditions. Comparative UVPD and ETD experiments performed at harsh ion source conditions both produced full scrambling at >90% (Figure S-5). Furthermore, CID-generated a/x and b/y fragment ions of labeled peptide P1 displayed a complete randomization of the solution-phase labeling pattern (>90% scrambling), as previously shown for b/y ions. 7 Notably, UVPD gave a/x fragment ions with negligible H/D scrambling, demonstrating differences in the mechanism by which a and x ions are generated in UVPD and CID. In conclusion, by measuring the deuterium content of a/x and c/z fragment ions of the selectively labeled model peptide, our results show that UVPD at 213 nm can proceed without H/D scrambling. 
By demonstrating the ability of UVPD to accurately measure the deuterium levels of individual amide hydrogens in a peptide, our results show the feasibility of using UVPD to increase the spatial resolution of an HDX-MS experiment. Distinct mechanisms may give rise to the diverse fragment ions observed during UVPD at 213 nm (e.g. a/x, c/z, b/y). Here we observed that both singly and doubly protonated a/x and c/z ions were generated without randomization of the deuterium labeling of backbone amide groups, supporting that such fragment ions are generated by direct mechanisms occurring promptly after absorption of the high-energy photon (5.8 eV) on a microsecond to nanosecond timescale. 19 UVPD-generated a/x and c/z ions are thus formed faster than the rate of proton mobilization in the activated peptide and the underlying radical-based mechanism(s) do not require migration of hydrogens to facilitate bond cleavage. Absorption of photons of even higher energy during UVPD at 193 or 157 nm (6.4-7.9 eV) is therefore likely also to yield a/x and c/z ions without H/D scrambling, although this remains to be validated experimentally. Whilst the low-abundance b/y ions formed during 213 nm UVPD of peptide P1 precluded reliable quantitative analysis, our results are in line with mechanistic studies of UVPD at 157 nm suggesting that all fragment ions from UVPD using short-wavelength lasers originate from fast radical-based mechanisms. 25 Our findings indicate that UVPD at 213 nm offers a powerful complement to ETD in bottom-up HDX-MS/MS workflows, with the potential to provide alternative fragmentation, with less dependency on precursor size and charge state, 26 and even allow the extraction of site-specific information for singly-charged peptides. 21 Furthermore, the current implementation of UVPD on a high-resolution Q-TOF capable of ion mobility separation 27,28 should enhance extraction of residue-specific deuterium levels from overlapping fragment ions from UVPD of larger peptides and intact proteins in both bottom-up and top-down HDX-MS/MS applications.</p>" }, { "role": "assistant", "content": "Hydrogen/deuterium exchange mass spectrometry (HDX-MS) is now positioned as a routinely used technique to inform on protein structure, dynamics, and interactions. Localizing the deuterium content on a single-residue basis increases the spatial resolution of this technique, enabling detailed structural analysis. Here we investigate the use of ultraviolet photodissociation (UVPD) at 213 nm to localize incorporated deuterium with single-residue resolution in HDX-MS experiments. Using a selectively labeled peptide, we show that UVPD occurs without H/D scrambling as the peptide probe accurately retains its solution-phase deuterium labeling pattern. Our results indicate that UVPD provides an attractive alternative to electron-mediated dissociation to increase the spatial resolution of the HDX-MS experiment, combining high fragmentation efficiency, high fragment ion diversity, and low charge-state dependency." } ]
science.chemprot_re
science.chemprot_re.244
[ { "role": "user", "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nA comparison of the effects of nabumetone vs meloxicam on serum thromboxane B2 and platelet function in healthy volunteers. AIMS: To compare the effects of nabumetone and meloxicam, two cyclo-oxygenase-2 (COX-2) preferential nonsteroidal anti-inflammatory drugs (NSAIDs), on platelet COX-1 activity and platelet function. METHODS: Twelve healthy volunteers (3 male, 9 female, median age 22 years) participated in an open, randomized, cross-over trial of nabumetone 1000 mg twice daily vs meloxicam 7.5 mg twice daily during 1 week with 2 weeks wash-out. After a second 2 week wash-out period, one dose of indomethacin 50 mg was given as a positive control to check for NSAID induced inhibition of platelet function. COX-1 inhibition was measured as percentage inhibition of serum TXB2 generation in clotting whole blood, and as closure time with use of the platelet function analyser PFA-100. Data are reported as median with range. Paired variables were analysed using Wilcoxons signed rank test. RESULTS: TXB2 levels decreased significantly after all three medications, but percentage inhibition after nabumetone and indomethacin (88% and 97%, respectively) was significantly higher than after meloxicam (63%) (P<0.05). Closure times increased significantly after administration of all three medications (P<0.05). Increases in closure time after administration did not differ between nabumetone and meloxicam (24% and 14%, respectively), but were significantly larger after indomethacin administration (63%) (P<0.01). CONCLUSIONS: In the maximum registered dosage, nabumetone inhibits thromboxane production much more than meloxicam, signifying less COX-2 selectivity of the former. However, both nabumetone and meloxicam cause only minor impairment in platelet function in comparison with indomethacin and the difference between them is not significant." }, { "role": "assistant", "content": "[[\"TXB2\", \"PRODUCT-OF\", \"COX-1\"], [\"nabumetone\", \"DOWNREGULATOR\", \"COX-2\"]]" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.13546
[ { "role": "user", "content": "You are presented with multiple paper abstracts, alongside the associated literature review and chapter titles. Given the provided information, you are going to do a literature review. Please generate a coherent literature review that synthesizes the content, draws connections between different works, highlights major findings, and identifies any gaps in the existing research.\n\nLiterature Review Title: A comparative survey of Personalised Information Retrieval and Adaptive Hypermedia techniques \nChapter Title: Information Access Paradigms of PIR and AH \n1. Abstract of Cited Paper (BIB001): As information becomes more ubiquitous and the demands that searchers have on search systems grow, there is a need to support search behaviors beyond simple lookup. Information seeking is the process or activity of attempting to obtain information in both human and technological contexts. Exploratory search describes an information-seeking problem context that is open-ended, persistent, and multifaceted, and information-seeking processes that are opportunistic, iterative, and multitactical. Exploratory searchers aim to solve complex problems and develop enhanced mental capacities. Exploratory search systems support this through symbiotic human-machine relationships that provide guidance in exploring unfamiliar information landscapes. Exploratory search has gained prominence in recent years. There is an increased interest from the information retrieval, information science, and human-computer interaction communities in moving beyond the traditional turn-taking interaction model support d by major Web search engines, and toward support for human intelligence amplification and information use. In this lecture, we introduce exploratory search, relate it to relevant extant research, outline the features of exploratory search systems, discuss the evaluation of these systems, and suggest some future directions for supporting exploratory search. Exploratory search is a new frontier in the search domain and is becoming increasingly important in shaping our future world. Table of Contents: Introduction / Defining Exploratory Search / Related Work / Features of Exploratory Search Systems / Evaluation of Exploratory Search Systems / Future Directions and concluding Remarks\n\nLiterature Review:" }, { "role": "assistant", "content": "One of the major distinctions between PIR and AH lies in the overall information access paradigms underlying both areas. This distinction has driven most of the research in both fields and has lead to their respective techniques and technologies towards adaptation and personalisation. PIR is based on the standard Information Retrieval model, which is traditionally focused on the retrieval of documents that are relevant to a unitary query. While PIR extends this model by taking into account historical interactions, the paradigm remains finding the most relevant documents for a single user query. This fundamental underpinning of PIR makes such systems particularly suitable for the general information access paradigm of searching by query, where it is assumed that a user can express their information need in a relatively precise user query. However, this assumption is not always correct, especially in cases where users are uncertain about their actual information needs or the correct query terms to express these needs. 
AH systems stem from the information access paradigm of searching by browsing, where users generally have less precise information needs and therefore need to surf and explore pages. AH facilitates this type of search by providing the most relevant browsing content and links with respect to a rich representation of user characteristics (such as preferences, history or prior knowledge). AH assists a user's information exploration by creating or adapting the navigation across hypertext and hypermedia using e.g. link creation, link/content hiding or link/content annotation. This distinction in information access paradigms has also driven the application of the respective techniques towards specific user information needs. In , search needs are classified into three categories: Navigational searching, Transactional searching and Informational searching. Navigational searches typically only require one precise answer, for example the web address of a particular company or the homepage of an individual. Similarly, Transactional searches are very specific, as the main goal is to purchase a particular product or use a particular service. As PIR focuses on finding the most relevant documents for precise user queries, it is particularly suited for these information needs. However, these two types of searches are shown to constitute only 20% of current searches on the web. The remaining 80% of searches can be classified as Informational searching, i.e. a user searching for deeper information on a particular topic. Current (P)IR solutions typically do not assist this Informational searching adequately, as often more than a few very precise results are needed to fill a user's knowledge gap. AH systems have inherently focussed on providing assistance towards such Informational searching needs, since they typically assist users in gaining deep knowledge in a particular domain. In particular, AH systems have proven successful in e-learning systems, where a user's information need is to learn about a particular topic. This represents a good example of a search reaching beyond a few precise documents and corresponds to a case where a user is uncertain about the range of query terms to choose. However, while AH systems have proven effective for such \"formal\" learning tasks, their closed corpus nature has prevented more widespread applicability for web-scale Informational searching (i.e. \"informal\" learning activities). Despite PIR and AH stemming from distinct information access paradigms, both fields effectively strive towards similar aims of providing the most personally relevant information to an individual user. This has led to many conceptual commonalities between the techniques of both fields, which can potentially complement each other in order to better assist a user's information exploration on a web scale. The aim of assisting such searches is also actively researched in the area of Exploratory Search BIB001. This field distinguishes itself from standard search through the following 6 characteristics (BIB002): 1) multiple query iterations, 2) open-ended information needs, 3) close coupling with task context, 4) combination of browsing and focused searching, 5) possible collaboration of multiple people and 6) advanced system evaluation with respect to learning, insight, task outcomes and system utility. One could thus argue that a hybrid of both PIR and AH systems could potentially be characterised by this search paradigm. 
By analysing and contrasting the various PIR and AH techniques, this survey is able to provide a set of affordances that such a hybrid system's components require in order to improve user assistance across Informational and Exploratory search." } ]
science.craftchem_ner
science.craftchem_ner.9734
[ { "role": "user", "content": "You will be presented a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nWt1andCited1(red)stainingrevealedanincreaseofapproximately30%inthethicknessofthecappingmesenchymesurroundingthemutanturetericbuds(Figure3,Table2)." }, { "role": "assistant", "content": "[]" } ]