You are a facilitator of medical knowledge. Provide thorough and accessible explanations of medical terms, catering to both specialists and non-specialists.
I've come across the term 'Steatopygia' in a medical context, but I'm not sure what it means. Can you clarify?
Steatopygia is the state of having substantial levels of tissue on the buttocks and thighs. This build is not confined to the gluteal regions, but extends to the outside and front of the thighs, tapering to the knee and producing a curvilinear figure. The term is from the Greek stéar (στέαρ), meaning "tallow", and pugḗ (πυγή), meaning "rump".

Steatopygia, a genetic characteristic leading to increased accumulation of adipose tissue in the buttock region, is most notably (but not solely) found among the Khoisan of Southern Africa. It has also been observed among Pygmies of Central Africa and the Andamanese people, such as the Onge tribe in the Andaman Islands. This characteristic is prevalent among women but occurs to a lesser degree in men. Steatopygia would seem to have been a characteristic of a population which once extended from the Gulf of Aden to the Cape of Good Hope, of which the Khoisan and Pygmies may be remnants. Among the Khoisan, it begins in infancy and is fully developed by the time of the first pregnancy.

It has been suggested that this feature was once more widespread. Paleolithic Venus figurines, sometimes referred to as "steatopygian Venus" figures, discovered from Europe to Asia and presenting a remarkable development of the thighs, and even the prolongation of the labia minora, have been used to support this theory. Whether these were intended to be lifelike, exaggeratory, or idealistic is unclear. These figures, however, may not qualify as steatopygian, since they exhibit an angle of approximately 120 degrees between the back and the buttocks, while steatopygia is typically described with an angle of about 90 degrees. In Victorian England, freak shows were known to have exploited a woman with steatopygia at least once. The most well-known example was a South African Khoikhoi woman named Saartjie Baartman, who is thought to have had lipedema.
You are an expert in medical terminology. Explain medical terms in a way that is both precise and easy to understand for non-medical audiences.
I've come across the term 'Estradiol valerate/dienogest' in a medical context, but I'm not sure what it means. Can you clarify?
Estradiol valerate/dienogest (EV/DNG), sold under the brand names Lafamme, Natazia and Qlaira among others, is a combination product of estradiol valerate, an estrogen, and dienogest, a progestogen, which is used in menopausal hormone therapy and as a birth control pill to prevent pregnancy in women. It is taken by mouth. Birth control pills containing EV/DNG are associated with a significantly increased risk of venous thromboembolism; however, they are associated with a significantly lower risk of venous thromboembolism than birth control pills containing ethinylestradiol and a progestin.
You are a guide to medical language. Please provide in-depth and understandable explanations of medical terms, highlighting their significance in healthcare.
Could you provide a brief overview of 'Cephalic presentation' in a medical context?
A cephalic presentation or head presentation or head-first presentation is a situation at childbirth where the fetus is in a longitudinal lie and the head enters the pelvis first; the most common form of cephalic presentation is the vertex presentation, where the occiput is the leading part (the part that first enters the birth canal). All other presentations are abnormal (malpresentations) and are either more difficult to deliver or not deliverable by natural means.

Engagement

The movement of the fetus to cephalic presentation is called head engagement. It occurs in the third trimester. In head engagement, the fetal head descends into the pelvic cavity so that only a small part (or none) of it can be felt abdominally. The perineum and cervix are further flattened and the head may be felt vaginally. Head engagement is known colloquially as the baby drop, and in natural medicine as the lightening, because of the release of pressure on the upper abdomen and renewed ease in breathing. However, it severely reduces bladder capacity, resulting in a need to void more frequently.

Classification

In the vertex presentation the head is flexed and the occiput leads the way. This is the most common configuration and is seen at term in 95% of singletons. If the head is extended, the face becomes the leading part. Face presentations account for less than 1% of presentations at term. In the sinicipital presentation the large fontanelle is the presenting part; with further labor the head will either flex or extend more, so that in the end this presentation leads to a vertex or face presentation. In the brow presentation the head is slightly extended, but less than in the face presentation. The chin presentation is a variant of the face presentation with maximum extension of the head. Non-cephalic presentations are the breech presentation (3.5%) and the shoulder presentation (0.5%).
Vertex presentation

The vertex is the area of the vault bounded anteriorly by the anterior fontanelle and the coronal suture, posteriorly by the posterior fontanelle and the lambdoid suture, and laterally by two lines passing through the parietal eminences. In the vertex presentation the occiput typically is anterior and thus in an optimal position to negotiate the pelvic curve by extending the head. In an occiput posterior position, labor becomes prolonged and more operative interventions are deemed necessary. The prevalence of the persistent occiput posterior position is given as 4.7%.

The vertex presentations are further classified according to the position of the occiput, as right, left, or transverse, and anterior or posterior: Left Occipito-Anterior (LOA), Left Occipito-Posterior (LOP), Left Occipito-Transverse (LOT); Right Occipito-Anterior (ROA), Right Occipito-Posterior (ROP), Right Occipito-Transverse (ROT). The Occipito-Anterior position is ideal for birth; it means that the baby is lined up so as to fit through the pelvis as easily as possible. The baby is head down, facing the spine, with its back anterior. In this position, the baby's chin is tucked onto its chest, so that the smallest part of its head will be applied to the cervix first. The position is usually "Left Occiput Anterior", or LOA. Occasionally, the baby may be "Right Occiput Anterior", or ROA.

Face presentation

Factors that predispose to face presentation are prematurity, macrosomia, anencephaly and other malformations, cephalopelvic disproportion, and polyhydramnios. In an uncomplicated face presentation the duration of labor is not altered. Perinatal losses with face presentation occur with traumatic version and extraction and midforceps procedures. Duff indicates that the prevalence of face presentations is about 1/500–600, while Benedetti et al.
found it to be 1/1,250 term deliveries. Face presentations are classified according to the position of the chin (mentum): Left Mento-Anterior (LMA), Left Mento-Posterior (LMP), Left Mento-Transverse (LMT); Right Mento-Anterior (RMA), Right Mento-Posterior (RMP), Right Mento-Transverse (RMT).

Brow presentation

While some consider the brow presentation an intermediate stage towards the face presentation, others disagree. Thus Bhal et al. indicated that both conditions are about equally common (1/994 face and 1/755 brow positions), and that prematurity was more common with face positions while postmaturity was more common with brow positions.

Oskie presentation

The Oskie presentation is similar to the Occipito-Anterior position, where the baby is head down, facing the spine, with its back on the ventral side of the uterus; however, in this position, while the torso is aligned with the mother's longitudinal axis, the legs of the fetus are extended straight along the frontal axis of the mother, as if the baby is creating a right angle with its body. For the Oskie position to occur, the baby's head must be far down the pelvis in order to allow room for leg extension; typically the arms are bent, tucked against the baby's body. There are no known complications for labor and delivery. This presentation is rare and is not well researched.

Reasons for predominance

The piriform (pear-shaped) morphology of the uterus has been given as the major cause of the finding that most singletons favor the cephalic presentation at term. The fundus is larger, and thus a fetus will adapt its position so that the bulkier and more movable podalic pole makes use of it, while the head moves to the opposite side. Factors that influence this positioning include the gestational age (earlier in gestation breech presentations are more common, as the head is relatively bigger), size of the head, malformations, amount of amniotic fluid, presence of multiple gestations, presence of tumors, and others.
Two-thirds of all vertex presentations are LOA, possibly because of the asymmetry created by the descending colon that is on the left side of the pelvis.

Diagnosis

Usually performing the Leopold maneuvers will demonstrate the presentation and possibly the position of the fetus. Ultrasound examination delivers the precise diagnosis and may indicate possible causes of a malpresentation. On vaginal examination, the leading part of the fetus becomes identifiable after the amniotic sac has been broken and the head is descending in the pelvis.

Management

Many factors determine the optimal way to deliver a baby. A vertex presentation is the ideal situation for a vaginal birth, although occiput posterior positions tend to proceed more slowly, often requiring intervention in the form of forceps, vacuum extraction, or Cesarean section. In a large study, a majority of brow presentations were delivered by Cesarean section; however, because of postmaturity, factors other than labour dynamics may have played a role. Most face presentations can be delivered vaginally as long as the chin is anterior; there is no increase in fetal or maternal mortality. Mento-posterior positions cannot be delivered vaginally in most cases (unless rotated) and are candidates for Cesarean section in contemporary management.
You act as a mediator between medical professionals and the public. Provide comprehensive explanations of medical terms, ensuring they are both precise and easily understood.
I'm encountering the term 'Tuberculosis management' in medical literature. What's its definition?
Tuberculosis management describes the techniques and procedures used in treating tuberculosis (TB). The medical standard for active TB is a short-course treatment involving a combination of isoniazid, rifampicin (also known as rifampin), pyrazinamide, and ethambutol for the first two months. During this initial period, isoniazid is taken alongside pyridoxal phosphate to prevent peripheral neuropathy. Isoniazid is then taken together with rifampicin alone for the remaining four months of treatment. A patient is considered free of all living TB bacteria after six months. Latent tuberculosis, or latent tuberculosis infection (LTBI), is treated with three to nine months of isoniazid alone; this long-term treatment risks the development of hepatotoxicity. A combination of isoniazid plus rifampicin for three to four months has been shown to be an equally effective method for treating LTBI while mitigating the risk of hepatotoxicity. Treatment of LTBI is essential in preventing the spread of active TB.

Drugs

First line

All first-line anti-tuberculous drug names have semistandardized three-letter and single-letter abbreviations: ethambutol is EMB or E, isoniazid is INH or H, pyrazinamide is PZA or Z, rifampicin is RMP or R, and streptomycin is SM or S. First-line anti-tuberculous drug names are often remembered with the mnemonic "RIPE," referring to the use of a rifamycin (like rifampin), isoniazid, pyrazinamide, and ethambutol. US practice uses abbreviations and names that are not internationally convened: rifampicin is called rifampin and abbreviated RIF; streptomycin is abbreviated STM.
Other abbreviations have been widely used (for example, the notations RIF, RFP, and RMP have all been widely used for rifampicin, and the combination regimens have notations such as IRPE, HRZE, RIPE, and IREP that are variously synonyms or near-synonyms, depending on dosage schedules), but for clarity, the semistandardized abbreviations above are used in the rest of this article. In this system, which the World Health Organization (WHO) supports, "RIPE" is "RHZE". (Both have mnemonic potential, as tuberculosis is named after tubercles (small tubers), and a tuber can be ripe and can be a rhizome.)

Drug regimens are similarly abbreviated in a semistandardized manner. The drugs are listed using their single-letter abbreviations (in the order given above, which is roughly the order of introduction into clinical practice). A prefix denotes the number of months the treatment should be given for; a subscript denotes intermittent dosing (so a subscript 3 means three times a week) and no subscript means daily dosing. Most regimens have an initial high-intensity phase, followed by a continuation phase (also called a consolidation phase or eradication phase): the high-intensity phase is given first, then the continuation phase, with the two phases divided by a slash. So, 2HREZ/4HR3 means isoniazid, rifampicin, ethambutol, and pyrazinamide daily for two months, followed by four months of isoniazid and rifampicin given three times a week.

In the US only, streptomycin is not considered a first-line drug by ATS/IDSA/CDC because of high rates of resistance. The WHO has made no such recommendation.

Second line

The second-line drugs (WHO groups 2, 3 and 4) are only used to treat disease that is resistant to first-line therapy (i.e., for extensively drug-resistant tuberculosis (XDR-TB) or multidrug-resistant tuberculosis (MDR-TB)).
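The regimen notation described above is mechanical enough to parse with a few lines of code. The following is a minimal illustrative sketch, not part of any standard tool; the function name and output structure are my own:

```python
# Hypothetical sketch: parse semistandardized TB regimen notation such as
# "2HREZ/4HR3" (two months of daily HREZ, then four months of HR three
# times a week). Prefix digits = months; letters = drugs; trailing digits
# (a subscript in print) = doses per week, with no digits meaning daily.
import re

DRUGS = {"E": "ethambutol", "H": "isoniazid", "Z": "pyrazinamide",
         "R": "rifampicin", "S": "streptomycin"}

def parse_regimen(notation):
    phases = []
    for phase in notation.split("/"):
        months, letters, freq = re.fullmatch(r"(\d+)([EHZRS]+)(\d*)",
                                             phase).groups()
        phases.append({
            "months": int(months),
            "drugs": [DRUGS[c] for c in letters],
            "doses_per_week": int(freq) if freq else 7,  # no subscript = daily
        })
    return phases

print(parse_regimen("2HREZ/4HR3"))
```

This mirrors the worked example in the text: the first phase is two months of daily isoniazid, rifampicin, ethambutol and pyrazinamide, the second four months of isoniazid and rifampicin three times a week.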
A drug may be classed as second-line instead of first-line for one of three possible reasons: it may be less effective than the first-line drugs (e.g., p-aminosalicylic acid); it may have toxic side-effects (e.g., cycloserine); or it may be effective but unavailable in many developing countries (e.g., fluoroquinolones). The second-line drugs include: aminoglycosides (WHO group 2), e.g., amikacin (AMK) and kanamycin (KM); polypeptides (WHO group 2), e.g., capreomycin, viomycin and enviomycin; fluoroquinolones (WHO group 3), e.g., ciprofloxacin (CIP), levofloxacin and moxifloxacin (MXF); thioamides (WHO group 4), e.g., ethionamide and prothionamide; cycloserine (WHO group 4); and terizidone (WHO group 5).

Third line

Third-line drugs (WHO group 5) include drugs that may be useful but have doubtful or unproven efficacy: rifabutin; macrolides, e.g., clarithromycin (CLR); linezolid (LZD); thioacetazone (T); thioridazine; arginine; vitamin D; and bedaquiline. These drugs are listed here either because they are not very effective (e.g., clarithromycin) or because their efficacy has not been proven (e.g., linezolid, R207910). Rifabutin is effective, but is not included on the WHO list because for most developing countries it is impractically expensive.

Standard regimen

Rationale and evidence

Tuberculosis has been treated with combination therapy for over fifty years. Drugs are not used singly (except in latent TB or chemoprophylaxis), and regimens that use only single drugs result in the rapid development of resistance and treatment failure. The rationale for using multiple drugs to treat TB is based on simple probability.
The rate of spontaneous mutations that confer resistance to an individual drug is well known: 1 mutation per 10⁷ cell divisions for EMB, 1 per 10⁸ divisions for STM and INH, and 1 per 10¹⁰ divisions for RMP. Patients with extensive pulmonary TB have approximately 10¹² bacteria in their body, and therefore will probably be harbouring approximately 10⁵ EMB-resistant bacteria, 10⁴ STM-resistant bacteria, 10⁴ INH-resistant bacteria and 10² RMP-resistant bacteria. Resistance mutations appear spontaneously and independently, so the chance of a patient harbouring a bacterium that is spontaneously resistant to both INH and RMP is 1 in 10⁸ × 1 in 10¹⁰ = 1 in 10¹⁸, and the chance of harbouring a bacterium that is spontaneously resistant to all four drugs is 1 in 10³³. This is, of course, an oversimplification, but it is a useful way of explaining combination therapy.

There are other theoretical reasons for supporting combination therapy. The different drugs in the regimen have different modes of action. INH is bactericidal against replicating bacteria. EMB is bacteriostatic at low doses, but is used in TB treatment at higher, bactericidal doses. RMP is bactericidal and has a sterilizing effect. PZA is only weakly bactericidal, but is very effective against bacteria located in acidic environments, inside macrophages, or in areas of acute inflammation.

All TB regimens in use were 18 months or longer until the appearance of rifampicin. In 1953, the standard UK regimen was 3SPH/15PH or 3SPH/15SH2. Between 1965 and 1970, EMB replaced PAS. RMP began to be used to treat TB in 1968, and the BTS study in the 1970s showed that 2HRE/7HR was efficacious. In 1984, a BTS study showed that 2HRZ/4HR was efficacious, with a relapse rate of less than 3% after two years.
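The combination-therapy arithmetic above can be reproduced in a few lines. This is a minimal sketch using only the figures quoted in the text (the variable names are illustrative):

```python
# Mutation rates quoted in the text: one resistant mutant per N cell
# divisions for each drug, expressed as a per-division probability.
mutation_rate = {"EMB": 1e-7, "STM": 1e-8, "INH": 1e-8, "RMP": 1e-10}
bacterial_load = 1e12  # approximate burden in extensive pulmonary TB

# Expected number of pre-existing resistant mutants per drug
# (EMB ~1e5, STM and INH ~1e4, RMP ~1e2, matching the text):
resistant = {drug: bacterial_load * rate
             for drug, rate in mutation_rate.items()}

# Resistance mutations arise independently, so probabilities multiply.
p_inh_rmp = mutation_rate["INH"] * mutation_rate["RMP"]  # 1 in 1e18

p_all_four = 1.0
for rate in mutation_rate.values():
    p_all_four *= rate  # 1 in 1e33 for simultaneous four-drug resistance
```

The key point the numbers make is that a 10¹²-organism infection is essentially certain to contain single-drug-resistant mutants, but vanishingly unlikely to contain a mutant resistant to all four drugs at once.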
In 1995, with the recognition that INH resistance was increasing, the British Thoracic Society recommended adding EMB or STM to the regimen: 2HREZ/4HR or 2SHRZ/4HR, which are the regimens currently recommended. The WHO also recommends a six-month continuation phase of HR if the patient is still culture-positive after two months of treatment (approximately 15% of patients with fully sensitive TB) and for those patients who have extensive bilateral cavitation at the start of treatment.

Monitoring, DOTS, and DOTS-Plus

DOTS stands for "Directly Observed Treatment, Short-course" and is a major plank in the World Health Organization (WHO) Global Plan to Stop TB. The DOTS strategy focuses on five main points of action. The first element of DOTS involves creating sustainable financial services and a short- and long-term plan provided by the government, dedicated to eliminating tuberculosis; the WHO helps encourage mobilized funding to reduce the poverty that fosters tuberculosis. The second component of the DOTS strategy is case detection, which involves improving the accuracy of laboratory tests for bacteriology and improving communication from labs to doctors and patients. The third is to provide standard treatment and patient support: adequate treatment means providing pharmaceutical drugs that will help eliminate tuberculosis, together with follow-up check-ups to ensure that tuberculosis does not remain a burden in a patient's life. There are many cultural barriers, as many patients might continue to work under unsanitary living conditions or not have enough money to pay for treatment; programs that provide stipends and incentives to allow citizens to seek treatment are also necessary.
The fourth element of the DOTS approach is to have a management program that supplies a sustainable long-term supply of reliable antibiotics. Lastly, the fifth component is to record and monitor treatment plans to ensure that the DOTS approach is effective. The DOTS approach not only aims to provide structure for tuberculosis programs, but also to ensure that citizens diagnosed with tuberculosis adhere to protocols which will prevent future bacterial infections. These elements include government commitment to control TB, diagnosis based on sputum-smear microscopy tests done on patients who actively report TB symptoms, directly observed short-course chemotherapy treatments, a definite supply of drugs, and standardized reporting and recording of cases and treatment outcomes. The WHO advises that all TB patients should have at least the first two months of their therapy observed (and preferably the whole of it): this means an independent observer watching patients swallow their anti-TB therapy. The independent observer is often not a healthcare worker and may be a shopkeeper, a tribal elder, or a similar senior person within that society. DOTS is used with intermittent dosing (thrice weekly, or 2HREZ/4HR3). Twice-weekly dosing is effective but not recommended by the WHO, because there is no margin for error (accidentally omitting one dose per week results in once-weekly dosing, which is ineffective).

Treatment with properly implemented DOTS has a success rate exceeding 95% and prevents the emergence of further multidrug-resistant strains of tuberculosis. Administering DOTS decreases the possibility of tuberculosis recurring, resulting in a reduction in unsuccessful treatments. This is in part because areas without the DOTS strategy generally provide lower standards of care.
Areas with DOTS administration also see fewer patients seeking help from other facilities, where they would be treated with unknown treatments resulting in unknown outcomes. However, if the DOTS program is not implemented, or is implemented incorrectly, positive results are unlikely. For the program to work efficiently and accurately, health providers must be fully engaged, links must be built between public and private practitioners, health services must be available to all, and global support must be provided to countries trying to reach their TB prevention and treatment aims. Some researchers suggest that, because the DOTS framework has been so successful in the treatment of tuberculosis in sub-Saharan Africa, DOTS should be expanded to non-communicable diseases such as diabetes mellitus, hypertension, and epilepsy.

DOTS-Plus strategy

The WHO extended the DOTS programme in 1998 to include the treatment of MDR-TB (called "DOTS-Plus"). Implementation of DOTS-Plus requires the capacity to perform drug-susceptibility testing (not routinely available even in developed countries) and the availability of second-line agents, in addition to all the requirements for DOTS. DOTS-Plus is therefore much more resource-intensive than DOTS, and requires much greater commitment from countries wishing to implement it. Community engagement is a new approach being initiated alongside DOTS individualized treatment: by creating a community in which health workers give support to patients and hospital staff, the DOTS-Plus model also incorporates psychological and structural support to help patients complete treatment. Treatment under the new strategy has a total duration of 18–24 months.

Monthly surveillance until cultures convert to negative is recommended for DOTS-Plus, but not for DOTS.
If cultures are positive or symptoms do not resolve after three months of treatment, it is necessary to re-evaluate the patient for drug-resistant disease or nonadherence to the drug regimen. If cultures do not convert to negative despite three months of therapy, some physicians may consider admitting the patient to hospital so as to closely monitor therapy.

Extra-pulmonary tuberculosis

Tuberculosis not affecting the lungs is called extra-pulmonary tuberculosis. Disease of the central nervous system is specifically excluded from this classification. The United Kingdom and WHO recommendation is 2HREZ/4HR; the US recommendation is 2HREZ/7HR. There is good evidence from randomised controlled trials that in tuberculous lymphadenitis and in TB of the spine the six-month regimen is equivalent to the nine-month regimen; the US recommendation is therefore not supported by the evidence.

Up to 25% of patients with TB of the lymph nodes (TB lymphadenitis) will get worse on treatment before they get better, and this usually happens in the first few months of treatment. A few weeks after starting treatment, lymph nodes often start to enlarge, and previously solid lymph nodes may soften and develop into tuberculous cervical lymphadenitis. This should not be interpreted as failure of therapy, and is a common reason for patients (and their physicians) to panic unnecessarily. With patience, two to three months into treatment the lymph nodes start to shrink again, and re-aspiration or re-biopsy of the lymph nodes is unnecessary: if repeat microbiological studies are ordered, they will show the continued presence of viable bacteria with the same sensitivity pattern, which further adds to the confusion, and physicians inexperienced in the treatment of TB will then often add second-line drugs in the belief that the treatment is not working. In these situations, all that is required is reassurance.
Steroids may be useful in resolving the swelling, especially if it is painful, but they are unnecessary. Additional antibiotics are unnecessary, and the treatment regimen does not need to be lengthened. There is no evidence that a six-month regimen is inadequate in treating abdominal TB, and no additional benefit of a nine-month regimen in preventing relapse; however, more large-scale studies are needed to confirm this conclusion.

Tuberculosis of the central nervous system

Central nervous system tuberculosis takes two major forms: tuberculous meningitis and tuberculoma. Tuberculosis may affect the central nervous system (meninges, brain or spinal cord), in which case it is called TB meningitis, TB cerebritis, or TB myelitis respectively; the standard treatment is 12 months of drugs (2HREZ/10HR) and steroids are mandatory. Diagnosis is difficult, as CSF culture is positive in less than half of cases, and therefore a large proportion of cases are treated on the basis of clinical suspicion alone. PCR of CSF does not significantly improve the microbiology yield; culture remains the most sensitive method, and a minimum of 5 mL (preferably 20 mL) of CSF should be sent for analysis. TB cerebritis (TB of the brain) may require brain biopsy to make the diagnosis, because the CSF is commonly normal: this is not always available, and even when it is, some clinicians would debate whether it is justified putting a patient through such an invasive and potentially dangerous procedure when a trial of anti-TB therapy may yield the same answer; probably the only justification for brain biopsy is when drug-resistant TB is suspected. It is possible that shorter durations of therapy (e.g., six months) may be sufficient to treat TB meningitis, but no clinical trial has addressed this issue.
The CSF of patients with treated TB meningitis is commonly abnormal even at 12 months; the rate of resolution of the abnormality bears no correlation with clinical progress or outcome, and is not an indication for extending or repeating treatment. Repeated sampling of CSF by lumbar puncture to monitor treatment progress should therefore not be done.

Although TB meningitis and TB cerebritis are classified together, the experience of many clinicians is that their progression and response to treatment are not the same. TB meningitis usually responds well to treatment, but TB cerebritis may require prolonged treatment (up to two years), and the steroid course needed is often also prolonged (up to six months). Unlike TB meningitis, TB cerebritis often requires repeated CT or MRI imaging of the brain to monitor progress. Central nervous system TB may be secondary to blood-borne spread, so some experts advocate the routine sampling of CSF in patients with miliary TB.

The anti-TB drugs most useful for the treatment of central nervous system TB are: INH (CSF penetration 100%); RMP (10–20%); EMB (25–50%, inflamed meninges only); PZA (100%); STM (20%, inflamed meninges only); LZD (20%); cycloserine (80–100%); ethionamide (100%); and PAS (10–50%, inflamed meninges only). The use of steroids is routine in TB meningitis (see section below). There is evidence from one poorly designed trial that aspirin may be beneficial, but further work is required before this can be recommended routinely.

Steroids

The usefulness of corticosteroids (e.g., prednisolone or dexamethasone) in the treatment of TB is proven for TB meningitis and TB pericarditis. The dose for TB meningitis is dexamethasone 8 to 12 mg daily, tapered off over six weeks (those who prefer more precise dosing should refer to Thwaites et al., 2004).
The dose for pericarditis is prednisolone 60 mg daily, tapered off over four to eight weeks. Steroids may be of temporary benefit in pleurisy, extremely advanced TB, and TB in children: in pleurisy, prednisolone 20 to 40 mg daily tapered off over 4 to 8 weeks; in extremely advanced TB, 40 to 60 mg daily tapered off over 4 to 8 weeks; in children with TB, 2 to 5 mg/kg/day for one week, 1 mg/kg/day the next week, then tapered off over 5 weeks.

Steroids may be of benefit in peritonitis, miliary disease, tubercular osteomyelitis, laryngeal TB, lymphadenitis and genitourinary disease, but the evidence is scant and the routine use of steroids cannot be recommended. Steroid treatment in these patients should be considered on a case-by-case basis by the attending physician. The long-term impact of pleural TB on respiratory function is unknown; such impact should therefore be quantified before the need for further clinical trials of corticosteroids in pleural TB is assessed. Thalidomide may be of benefit in TB meningitis and has been used in cases where patients have failed to respond to steroid treatment.

Non-compliance

Patients who take their TB treatment in an irregular and unreliable way are at greatly increased risk of treatment failure, relapse, and the development of drug-resistant TB strains. There are a variety of reasons why patients fail to take their medication. The symptoms of TB commonly resolve within a few weeks of starting TB treatment, and many patients then lose motivation to continue taking their medication. Regular follow-up is important to check on compliance and to identify any problems patients are having with their medication. Patients need to be told of the importance of taking their tablets regularly, and the importance of completing treatment, because of the risk of relapse or drug resistance developing otherwise. One of the main complaints is the bulkiness of the tablets; the main offender is PZA (the tablets being the size of horse tablets).
PZA syrup may be offered as a substitute, or if the size of the tablets is truly an issue and liquid preparations are not available, then PZA can be omitted altogether. If PZA is omitted, the patient should be warned that this results in a significant increase in the duration of treatment (details of regimens omitting PZA are given below). The other complaint is that the medicines must be taken on an empty stomach to facilitate absorption. This can be difficult for patients to follow (for example, shift workers who take their meals at irregular times) and may mean the patient waking up an hour earlier than usual every day just to take medication. The rules are actually less stringent than many physicians and pharmacists realise: the issue is that the absorption of RMP is reduced if taken with fat, but it is unaffected by carbohydrate, protein, or antacids. So the patient can in fact have his or her medication with food as long as the meal does not contain fat or oils (e.g., a cup of black coffee, or toast with jam and no butter). Taking the medicines with food also helps ease the nausea that many patients feel when taking the medicines on an empty stomach. The effect of food on the absorption of INH is not clear: two studies have shown reduced absorption with food, but one study showed no difference. There is a small effect of food on the absorption of PZA and of EMB that is probably not clinically important.

It is possible to test urine for isoniazid and rifampicin levels to check for compliance. The interpretation of urine analysis is based on the fact that isoniazid has a longer half-life than rifampicin:
urine positive for isoniazid and rifampicin – patient probably fully compliant;
urine positive for isoniazid only – patient has taken his medication in the last few days preceding the clinic appointment, but had not yet taken a dose that day;
urine positive for rifampicin only – patient has omitted to take his medication the preceding few days, but did take it just before coming to clinic
urine negative for both isoniazid and rifampicin – patient has not taken either medicine for a number of days

In countries where doctors are unable to compel patients to take their treatment (e.g., the UK), some say that urine testing only results in unhelpful confrontations with patients and does not help increase compliance. In countries where legal measures can be taken to force patients to take their medication (e.g., the US), urine testing can be a useful adjunct in assuring compliance.

RMP colours the urine and all bodily secretions (tears, sweat, etc.) an orange-pink colour, and this can be a useful proxy if urine testing is not available (although this colour fades approximately six to eight hours after each dose).

In a study on cases of extra-pulmonary TB (EPTB), researchers at the University of the Philippines Manila found that the similarity of the symptoms of EPTB to those of other diseases results in delayed identification of the disease and late provision of medication. This ultimately contributes to increasing rates of mortality and incidence of EPTB.

The World Health Organization (WHO) recommends prescription of fixed-dose combination drugs, to improve adherence to treatment by reducing the number of tablets that need to be taken by people, and also possibly reducing prescribing errors. A Cochrane review, published in 2016, found moderate-quality evidence that "there is probably little or no difference in fixed-dose combination drugs compared to single-drug formulations".

Treatment adherence strategies

As stated above, non-compliance with anti-tuberculous treatment can result in treatment failure or the development of drug-resistant tuberculosis. Therefore, overall treatment strategies should be focused on promoting adherence.
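The urine-test interpretation described above (based on isoniazid's longer half-life) reduces to a four-entry lookup table. A minimal sketch; the function name is illustrative, not part of any guideline:

```python
# Hypothetical helper mirroring the urine-test interpretation table in
# the text: isoniazid persists longer in urine than rifampicin.
def interpret_urine(inh_positive: bool, rmp_positive: bool) -> str:
    table = {
        (True, True): "probably fully compliant",
        (True, False): ("took medication in the last few days, "
                        "but no dose yet today"),
        (False, True): ("missed the preceding few days, but took a dose "
                        "just before clinic"),
        (False, False): "no medication taken for a number of days",
    }
    return table[(inh_positive, rmp_positive)]

print(interpret_urine(True, False))
```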
WHO and the Centers for Disease Control and Prevention (CDC) recommend a multifaceted, patient-centered care approach. Public health and private sector practitioners can promote TB treatment adherence by allowing patients to be active partners in making their own treatment decisions; by improving patients' knowledge and understanding of tuberculosis disease, treatment and potential spread; and by discussing expected interim and long-term outcomes with patients. The CDC also recommends the use of incentives and enablers. Incentives are monetary rewards for a healthy behavior (e.g., transport or food vouchers), while enablers function to remove economic burdens impeding healthcare access (e.g., grouping clinic visits, providing after-hours clinic visits, or home visits). However, more research is needed to determine whether incentives and enablers have a significant effect on long-term treatment adherence for TB. Smartphones are considered to have potential to improve compliance.

Individuals with tuberculosis may also benefit from the emotional support of peers and survivors. Advocacy organizations and patient support groups such as STOP TB, TB Alert, Treatment Action Group (TAG) and others work to connect TB survivors.

Adverse effects

For information on adverse effects of individual anti-TB drugs, please refer to the individual articles for each drug. The relative incidence of major adverse effects has been carefully described:

INH 0.49 per hundred patient-months
RMP 0.43
EMB 0.07
PZA 1.48
All drugs 2.47

This works out to an 8.6% risk that any one patient will need to have his drug therapy changed during the course of standard short-course therapy (2HREZ/4HR). The people identified to be most at risk of major adverse side effects in this study were: age over 60, females, HIV-positive patients, and Asians.

It can be extremely difficult to identify which drug is responsible for which side effect, but the relative frequency of each is known.
The offending drugs are given in decreasing order of frequency:

Thrombocytopenia: Rifampicin (RMP)
Neuropathy: Isoniazid (INH)
Vertigo: Streptomycin (STM)
Hepatitis: Pyrazinamide (PZA), RMP, INH
Rash: PZA, RMP, Ethambutol (EMB)

Thrombocytopenia is only caused by RMP and no test dosing need be done. Regimens omitting RMP are discussed below. Please refer to the entry on rifampicin for further details.

The most frequent cause of neuropathy is INH. The peripheral neuropathy of INH is always a pure sensory neuropathy, and finding a motor component to the peripheral neuropathy should always prompt a search for an alternative cause. Once a peripheral neuropathy has occurred, INH must be stopped and pyridoxine should be given at a dose of 50 mg thrice daily. Simply adding high-dose pyridoxine to the regimen once neuropathy has occurred will not stop the neuropathy from progressing. Patients at risk of peripheral neuropathy from other causes (diabetes mellitus, alcoholism, renal failure, malnutrition, pregnancy, etc.) should all be given pyridoxine 10 mg daily at the start of treatment. Please refer to the entry on isoniazid for details on other neurological side effects of INH.

Rashes are most frequently due to PZA, but can occur with any of the TB drugs. Test dosing using the same regimen as detailed below for hepatitis may be necessary to determine which drug is responsible.

Itching

RMP commonly causes itching without a rash in the first two weeks of treatment: treatment should not be stopped and the patient should be advised that the itch usually resolves on its own. Short courses of sedative antihistamines such as chlorpheniramine may be useful in alleviating the itch.

Fever during treatment can be due to a number of causes. It can occur as a natural effect of tuberculosis (in which case it should resolve within three weeks of starting treatment). Fever can be a result of drug resistance (but in that case the organism must be resistant to two or more of the drugs).
Fever may be due to a superadded infection or an additional diagnosis (patients with TB are not exempt from getting influenza and other illnesses during the course of treatment). In a few patients, the fever is due to drug allergy. The clinician must also consider the possibility that the diagnosis of TB is wrong. If the patient has been on treatment for more than two weeks, and the fever had initially settled and then come back, it is reasonable to stop all TB medication for 72 hours. If the fever persists despite stopping all TB medication, then the fever is not due to the drugs. If the fever disappears off treatment, then the drugs need to be tested individually to determine the cause. The same scheme as is used for test dosing for drug-induced hepatitis (described below) may be used. The drug most frequently implicated as causing a drug fever is RMP: details are given in the entry on rifampicin.

Drug-induced hepatitis

Drug-induced hepatitis from TB treatment has a mortality rate of around 5%. Three drugs can induce hepatitis: PZA, INH and RMP (in decreasing order of frequency).[1] It is not possible to distinguish between these three causes based purely on signs and symptoms. Test dosing must be carried out to determine which drug is responsible (this is discussed in detail below).

Liver function tests (LFTs) should be checked at the start of treatment, but, if normal, need not be checked again; the patient need only be warned of the symptoms of hepatitis. Some clinicians insist on regular monitoring of LFTs while on treatment; in this instance, tests need only be done two weeks after starting treatment and then every two months thereafter, unless any problems are detected.

Elevations in bilirubin must be expected with RMP treatment (RMP blocks bilirubin excretion) and usually resolve after 10 days (liver enzyme production increases to compensate). Isolated elevations in bilirubin can be safely ignored.
Elevations in liver transaminases (ALT and AST) are common in the first three weeks of treatment. If the patient is asymptomatic and the elevation is not excessive, then no action need be taken; some experts suggest a cut-off of four times the upper limit of normal, but there is no evidence to support this particular number over any other. Some experts consider that treatment should only be stopped if jaundice becomes clinically evident.

If clinically significant hepatitis occurs while on TB treatment, then all the drugs should be stopped until the liver transaminases return to normal. If the patient is so ill that TB treatment cannot be stopped, then STM and EMB should be given until the liver transaminases return to normal (these two drugs are not associated with hepatitis).

Fulminant hepatitis can occur in the course of TB treatment, but is fortunately rare; emergency liver transplantation may be necessary and deaths do occur.

Test dosing for drug-induced hepatitis

Drugs should be re-introduced individually. This cannot be done in an outpatient setting, and must be done under close observation. A nurse must be present to take the patient's pulse and blood pressure at 15-minute intervals for a minimum of four hours after each test dose is given (most problems will occur within six hours of test dosing, if they are going to occur at all). Patients can become very suddenly unwell, and access to intensive care facilities must be available. The drugs should be given in this order:

Day 1: INH at 1/3 or 1/4 dose
Day 2: INH at 1/2 dose
Day 3: INH at full dose
Day 4: RMP at 1/3 or 1/4 dose
Day 5: RMP at 1/2 dose
Day 6: RMP at full dose
Day 7: EMB at 1/3 or 1/4 dose
Day 8: EMB at 1/2 dose
Day 9: EMB at full dose

No more than one test dose per day should be given, and all other drugs should be stopped while test dosing is being done. So on day 4, for example, the patient receives only RMP and no other drugs are given.
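The nine-day re-introduction schedule above follows a regular pattern (one drug at a time, escalated over three days) and can be generated mechanically. A minimal sketch; the function and its names are illustrative only:

```python
# Sketch: generate the nine-day drug re-introduction schedule described
# above. The drug order (INH, then RMP, then EMB) and the dose steps are
# taken from the text; the function itself is illustrative.
def test_dosing_schedule(drugs=("INH", "RMP", "EMB"),
                         steps=("1/3 or 1/4 dose", "1/2 dose", "full dose")):
    schedule = []
    day = 1
    for drug in drugs:          # one drug at a time, all others stopped
        for step in steps:      # escalate over three consecutive days
            schedule.append((day, drug, step))
            day += 1
    return schedule

for day, drug, step in test_dosing_schedule():
    print(f"Day {day}: {drug} at {step}")
```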
If the patient completes the nine days of test dosing, then it is reasonable to assume that PZA has caused the hepatitis, and no PZA test dosing need be done. The rationale for this testing order is that the two most important drugs for treating TB are INH and RMP, so these are tested first; PZA is the drug most likely to cause hepatitis and is also the drug that can be most easily omitted. EMB is useful when the sensitivity pattern of the TB organism is not known, and it can be omitted if the organism is known to be sensitive to INH. Regimens omitting each of the standard drugs are listed below.

The order in which the drugs are tested can be varied according to the following considerations:

The most useful drugs (INH and RMP) should be tested first, because the absence of these drugs from a treatment regimen severely impairs its efficacy.
The drugs most likely to be causing the reaction should be tested as late as possible (and possibly need not be tested at all). This avoids rechallenging patients with a drug to which they have already had a (possibly) dangerous adverse reaction.

A similar scheme may be used for other adverse effects (such as fever and rash), using similar principles.

Dysbiosis caused by HRZE antibiotic treatment

Tuberculosis treatment results in changes to the structure of the gut microbiome both during and after treatment in mice and humans. It is currently unknown what the long-term effects of this dysbiosis are on systemic immunity.

Deviations from the standard regimen

There is evidence supporting some deviations from the standard regimen when treating pulmonary TB. Sputum culture-positive patients who are smear-negative at the start of treatment do well with only 4 months of treatment (this has not been validated for HIV-positive patients); sputum culture-negative patients do well on only 3 months of treatment (possibly because some of these patients never had TB at all).
It is unwise to treat patients for only three or four months, but all TB physicians will have patients who stop their treatment early (for whatever reason), and it can be reassuring to know that sometimes retreatment is unnecessary. Elderly patients who are already taking a large number of tablets may be offered 9HR, omitting PZA, which is the bulkiest part of the regimen.

It may not always be necessary to treat with four drugs from the beginning. An example might be a close contact of a patient known to have a fully sensitive strain of tuberculosis: in this case, it is acceptable to use 2HRZ/4HR (omitting EMB and STM) in the expectation that their strain will be INH-susceptible also. Indeed, this was previously the recommended standard regimen in many countries until the early 1990s, when isoniazid-resistance rates increased.

TB involving the brain or spinal cord (meningitis, encephalitis, etc.) is currently treated with 2HREZ/10HR (12 months of treatment in total), but there is no evidence to say that this is superior to 2HREZ/4HR. There is no difference in relapse rates between those treated for 6 months and those treated for longer periods. However, more well-designed studies are needed to answer this question.

Regimens omitting isoniazid

Isoniazid resistance accounts for 6.9% of isolates in the UK (2010). Worldwide, it is the most common type of resistance encountered, hence the current recommendation of using HREZ at the beginning of treatment until sensitivities are known. It is useful to know of current reported outbreaks (like the current outbreak of INH-resistant TB in London).

If patients are discovered to be infected with an isoniazid-resistant strain of TB after having completed 2 months of HREZ, then they should be changed to RE for a further 10 months; the same applies if the patient is intolerant of isoniazid (although 2REZ/7RE may be acceptable if the patient is well supervised).
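Regimen shorthand such as 2HREZ/4HR, used throughout this article, encodes each phase as a number of months followed by drug initials (H = isoniazid, R = rifampicin, Z = pyrazinamide, E = ethambutol, S = streptomycin). A minimal sketch of a parser for this notation; the function itself is illustrative, not part of any guideline:

```python
import re

# Drug-letter abbreviations as used in this article's regimen codes.
DRUGS = {"H": "isoniazid", "R": "rifampicin", "Z": "pyrazinamide",
         "E": "ethambutol", "S": "streptomycin"}

def parse_regimen(code):
    """Unpack e.g. '2HREZ/4HR' into [(months, [drug names]), ...]."""
    phases = []
    for phase in code.split("/"):
        months, letters = re.match(r"(\d+)([A-Z]+)", phase).groups()
        phases.append((int(months), [DRUGS[c] for c in letters]))
    return phases

print(parse_regimen("2HREZ/4HR"))
```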
The US recommendation is 6RZE, with the option of adding a quinolone such as moxifloxacin. The level of evidence for all these regimens is poor, and there is little to recommend one over the other.

Regimens omitting rifampicin

The UK prevalence of rifampicin (RMP) resistance is 1.4%. It is rare for TB strains to be resistant to RMP without also being resistant to INH, which means that rifampicin resistance usually means resistance to INH as well (that is, MDR-TB). However, RMP intolerance is not uncommon (hepatitis or thrombocytopaenia being the most common reasons for stopping rifampicin). Of the first-line drugs, rifampicin is also the most expensive, and in the poorest countries, regimens omitting rifampicin are therefore often used. Rifampicin is the most potent sterilising drug available for the treatment of tuberculosis, and all treatment regimens that omit rifampicin are significantly longer than the standard regimen.

The UK recommendation is 18HE or 12HEZ. The US recommendation is 9 to 12HEZ, with the option of adding a quinolone (for example, MXF).

Regimens omitting pyrazinamide

PZA is a common cause of rash, hepatitis and painful arthralgia in the HREZ regimen, and can be safely stopped in those patients who are intolerant of it. Isolated PZA resistance is uncommon in M. tuberculosis, but M. bovis is innately resistant to PZA. PZA is not crucial to the treatment of fully sensitive TB, and its main value is in shortening the total treatment duration from nine months to six.

An alternative regimen is 2HRE/7HR, for which there is excellent clinical trial evidence. The 1994 US CDC guidelines for tuberculosis erroneously cite Slutkin as evidence that a nine-month regimen using only isoniazid and rifampicin is acceptable, but almost all of the patients in that study received ethambutol for the first two to three months (although this is not obvious from the abstract of that article).
This mistake was rectified in the 2003 guidelines.

This regimen (2HRE/7HR) is the first-line regimen used to treat M. bovis, since M. bovis is intrinsically resistant to pyrazinamide.

Regimens omitting ethambutol

EMB intolerance or resistance is rare. If a patient is truly intolerant or is infected with TB that is resistant to EMB, then 2HRZ/4HR is an acceptable regimen. The main motivator for including EMB in the initial two months is the increasing rate of INH resistance.

Tuberculosis and other conditions

Liver disease

People with alcoholic liver disease are at an increased risk of tuberculosis. The incidence of tuberculous peritonitis is particularly high in patients with cirrhosis of the liver.

There are broadly two categories of treatment:

A) Cirrhotic patients with essentially normal baseline liver function tests (Childs A cirrhosis). Such patients may be treated with the standard four-drug regimen for 2 months, followed by 2 drugs for the remaining 4 months (6 months of treatment in total).

B) Cirrhotic patients with altered baseline liver function tests (Childs B and C). According to 2010 WHO guidelines, depending on the severity of the disease and the degree of decompensation, the following regimens can be used, by altering the number of hepatotoxic drugs. One or two hepatotoxic drugs may be used in moderately severe disease (e.g., Childs B cirrhosis), whereas hepatotoxic drugs are completely avoided in decompensated Childs C cirrhosis.

• Two hepatotoxic drugs:
– 9 months of isoniazid, rifampicin and ethambutol (until or unless isoniazid susceptibility is documented)
– 2 months of isoniazid, rifampicin, ethambutol and streptomycin, followed by 6 months of isoniazid and rifampicin
• One hepatotoxic drug:
– 2 months of isoniazid, ethambutol and streptomycin, followed by 10 months of isoniazid and ethambutol
• No hepatotoxic drugs:
– 18–24 months of streptomycin, ethambutol and quinolones

Patients with liver disease should have their liver function tests monitored regularly throughout TB treatment.
Drug-induced hepatitis is discussed in a separate section above.

Pregnancy

Pregnancy itself is not a risk factor for TB.

Rifampicin makes hormonal contraception less effective, so additional precautions need to be taken for birth control during tuberculosis treatment.

Untreated TB in pregnancy is associated with an increased risk of miscarriage and major fetal abnormality, and pregnant women with active TB should therefore be treated. The US guidelines recommend omitting PZA when treating TB in pregnancy; the UK and WHO guidelines make no such recommendation, and PZA is commonly used in pregnancy. There is extensive experience with the treatment of pregnant women with TB, and no toxic effect of PZA in pregnancy has ever been found. High doses of RMP (much higher than used in humans) cause neural tube defects in animals, but no such effect has ever been found in humans. There may be an increased risk of hepatitis in pregnancy and during the puerperium. It is prudent to advise all women of child-bearing age to avoid getting pregnant until TB treatment is completed.

Aminoglycosides (STM, capreomycin, amikacin) should be used with caution in pregnancy, because they may cause deafness in the unborn child. The attending physician must weigh the benefits of treating the mother against the potential harm to the baby, and good outcomes have been reported in children whose mothers were treated with aminoglycosides. Experience in Peru shows that treatment for MDR-TB is not a reason to recommend termination of pregnancy, and that good outcomes are possible.

Kidney disease

People with kidney failure have a 10 to 30-fold increase in the risk of getting TB. People with kidney disease who are being given immunosuppressive medications or are being considered for transplant should be considered for treatment of latent tuberculosis if appropriate.

Aminoglycosides (STM, capreomycin and amikacin) should be avoided in patients with mild to severe kidney problems because of the increased risk of damage to the kidneys.
If the use of aminoglycosides cannot be avoided (e.g., in treating drug-resistant TB), then serum levels must be closely monitored and the patient warned to report any side effects (deafness in particular). If a person has end-stage kidney disease and has no useful remaining kidney function, then aminoglycosides can be used, but only if drug levels can be easily measured (often only amikacin levels can be measured).

In mild kidney impairment, no change needs to be made in dosing any of the other drugs routinely used in the treatment of TB. In severe chronic kidney disease (GFR <30), the EMB dose should be halved (or avoided altogether). The PZA dose is 20 mg/kg/day (UK recommendation) or three-quarters of the normal dose (US recommendation), but not much published evidence is available to support this.

When using 2HRZ/4HR in patients on dialysis, the drugs should be given daily during the initial high-intensity phase. In the continuation phase, the drugs should be given at the end of each haemodialysis session, and no dose should be taken on non-dialysis days.

HIV

In patients with HIV, treatment for the HIV should be delayed until TB treatment is completed, if possible. The current UK guidance (provided by the British HIV Association) is:

CD4 count over 200 – delay HIV treatment until the six months of TB treatment are complete.
CD4 count 100 to 200 – delay HIV treatment until the initial two-month intensive phase of therapy is complete.
CD4 count less than 100 – the situation is unclear and patients should be enrolled in clinical trials examining this question. There is evidence that if these patients are managed by a specialist in both TB and HIV, then outcomes are not compromised for either disease.

If HIV treatment has to be started while a patient is still on TB treatment, then the advice of a specialist HIV pharmacist should be sought. In general, there are no significant interactions with the NRTIs. Nevirapine should not be used with rifampicin.
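The renal dose adjustments described earlier in this section (no change in mild impairment; EMB halved below GFR 30; PZA at 20 mg/kg/day in the UK or three-quarters of the usual dose in the US) can be illustrated with a hypothetical helper. This is a sketch only, not a clinical dosing tool; the function name and the example doses are assumptions:

```python
# Illustrative only; mirrors the adjustments stated in the text and is
# NOT a clinical dosing tool.
def adjust_for_renal_function(drug, usual_dose_mg, gfr, weight_kg=None,
                              guideline="UK"):
    """Return an adjusted daily dose (mg) for severe CKD (GFR < 30)."""
    if gfr >= 30:
        return usual_dose_mg            # mild impairment: no change
    if drug == "EMB":
        return usual_dose_mg / 2        # halve (or avoid altogether)
    if drug == "PZA":
        if guideline == "UK":
            return 20 * weight_kg       # UK: 20 mg/kg/day
        return usual_dose_mg * 0.75     # US: three-quarters of normal
    return usual_dose_mg

# e.g. a hypothetical 1200 mg EMB dose at GFR 25:
print(adjust_for_renal_function("EMB", 1200, 25))
```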
Efavirenz may be used, but the dose depends on the patient's weight (600 mg daily if the weight is less than 50 kg; 800 mg daily if the weight is greater than 50 kg). Efavirenz levels should be checked early after starting treatment (unfortunately, this is not a service routinely offered in the US, but it is readily available in the UK). The protease inhibitors should be avoided if at all possible: patients on rifamycins and protease inhibitors have an increased risk of treatment failure or relapse.

The World Health Organization (WHO) warns against using thioacetazone in patients with HIV, because of the 23% risk of potentially fatal exfoliative dermatitis.

According to the Caprisa 003 (SAPiT) study, mortality in patients who were started on antiretrovirals during TB treatment was 56% lower than in those started after TB treatment was completed (hazard ratio 0.44 (95% CI: 0.25 to 0.79); p = 0.003).

Epilepsy

INH may be associated with an increased risk of seizures. Pyridoxine 10 mg daily should be given to all epileptics taking INH. There is no evidence that INH causes seizures in patients who are not epileptic.

TB treatment involves numerous drug interactions with anti-epileptic drugs, and serum drug levels should be closely monitored. There are serious interactions between rifampicin and carbamazepine, rifampicin and phenytoin, and rifampicin and sodium valproate. The advice of a pharmacist should always be sought.

COVID-19

TB and COVID-19 have been called a "cursed duet" and need immediate attention. TB should be considered a risk factor for severe COVID-19 disease, and patients with TB should be prioritised for COVID-19 preventative efforts, including vaccination.

Drug-resistance

Definitions

Multi-drug-resistant tuberculosis (MDR-TB) is defined as TB that is resistant at least to INH and RMP. Isolates that are multi-resistant to any other combination of anti-TB drugs but not to INH and RMP are not classed as MDR-TB.
As of October 2006, "extensively drug-resistant tuberculosis" (XDR-TB) is defined as MDR-TB that is resistant to quinolones and also to any one of kanamycin, capreomycin, or amikacin. The old case definition of XDR-TB is MDR-TB that is also resistant to three or more of the six classes of second-line drugs. This definition should no longer be used, but is included here because many older publications refer to it.

The principles of treatment for MDR-TB and for XDR-TB are the same. The main difference is that XDR-TB is associated with a much higher mortality rate than MDR-TB, because of a reduced number of effective treatment options. The epidemiology of XDR-TB is currently not well studied, but it is believed that XDR-TB does not transmit easily in healthy populations, yet is capable of causing epidemics in populations which are already stricken by HIV and therefore more susceptible to TB infection.

Epidemiology of drug-resistant TB

A 1997 survey of 35 countries found rates above 2% in about a third of the countries surveyed. The highest rates of drug-resistant TB were in the former USSR, the Baltic states, Argentina, India and China, and were associated with poor or failing national tuberculosis control programmes. Likewise, the appearance of high rates of MDR-TB in New York City in the early 1990s was associated with the dismantling of public health programmes by the Reagan administration.

Paul Farmer points out that the more expensive a treatment, the harder it is for poor countries to get. Farmer sees this as verging on denial of basic human rights. Africa is low in quality of treatment partly because many African cultures lack the concept of time essential to the schedule of administration.

MDR-TB can develop in the course of the treatment of fully sensitive TB, and this is always the result of patients missing doses or failing to complete a course of treatment. Thankfully, MDR-TB strains appear to be less fit and less transmissible.
It has been known for many years that INH-resistant TB is less virulent in guinea pigs, and the epidemiological evidence is that MDR strains of TB do not dominate naturally. A study in Los Angeles found that only 6% of cases of MDR-TB were clustered. This should not be a cause for complacency: it must be remembered that MDR-TB has a mortality rate comparable to that of lung cancer. It must also be remembered that people who have weakened immune systems (because of diseases such as HIV or because of drugs) are more susceptible to catching TB.

Children represent a susceptible population with increasing rates of MDR- and XDR-TB. Since diagnosis in pediatric patients is difficult, a large number of cases are not properly reported. Cases of pediatric XDR-TB have been reported in most countries, including the United States.

In 2006 an outbreak of XDR-TB in South Africa was first reported as a cluster of 53 patients in a rural hospital in KwaZulu-Natal, with all but one dying. What was particularly worrying was that the mean survival from sputum specimen collection to death was only 16 days, and that the majority of patients had never previously received treatment for tuberculosis. This is the epidemic for which the acronym XDR-TB was first used; although TB strains that fulfil the current definition had been identified retrospectively, this was the largest group of linked cases ever found. Since the initial report in September 2006, cases have now been reported in most provinces in South Africa. As of 16 March 2007, there were 314 cases reported, with 215 deaths. It is clear that the spread of this strain of TB is closely associated with a high prevalence of HIV and poor infection control; in other countries where XDR-TB strains have arisen, drug resistance has arisen from mismanagement of cases or poor patient compliance with drug treatment, instead of being transmitted from person to person.
This strain of TB does not respond to any of the drugs currently available in South Africa for first- or second-line treatment. It is now clear that the problem has been around for much longer than health department officials have suggested, and is far more extensive. By 23 November 2006, 303 cases of XDR-TB had been reported, of which 263 were in KwaZulu-Natal. Serious thought has been given to isolation procedures that may deny some patients their human rights, but which may be necessary to prevent further spread of this strain of TB.

Treatment of MDR-TB

The treatment and prognosis of MDR-TB are much more akin to those for cancer than to those for infection. MDR-TB has a mortality rate of up to 80%, which depends on a number of factors, including:

How many drugs the organism is resistant to (the fewer the better)
How many drugs the patient is given (patients treated with five or more drugs do better)
Whether an injectable drug is given or not (it should be given for the first three months at least)
The expertise and experience of the physician responsible
How co-operative the patient is with treatment (treatment is arduous and long, and requires persistence and determination on the part of the patient)
Whether the patient is HIV-positive or not (HIV co-infection is associated with increased mortality)

Treatment courses are a minimum of 18 months and may last years; treatment may require surgery, though death rates remain high despite optimal treatment. That said, good outcomes are still possible. Treatment courses that are at least 18 months long and which have a directly observed component can increase cure rates to 69%.

The treatment of MDR-TB must be undertaken by a physician experienced in the treatment of MDR-TB. Mortality and morbidity in patients treated in non-specialist centres are significantly elevated compared with those in patients treated in specialist centres.
In addition to the obvious risks (e.g., known exposure to a patient with MDR-TB), risk factors for MDR-TB include male sex, HIV infection, previous incarceration, failed TB treatment, failure to respond to standard TB treatment, and relapse following standard TB treatment.

A large proportion of people with MDR-TB are unable to access treatment because of what Paul Farmer describes as an "outcome gap". The majority of people with MDR-TB live in "resource-poor settings" and are denied treatment because international organizations have refused to make technologies available to countries that cannot afford to pay for treatment, the reasoning being that second-line drugs are too expensive and treatment methods for MDR-TB are therefore not sustainable in impoverished nations. Farmer argues that this is social injustice and that we cannot allow people to die simply because they are faced with circumstances in which they cannot afford "effective therapy".

Treatment of MDR-TB must be done on the basis of sensitivity testing: it is impossible to treat such patients without this information. If treating a patient with suspected MDR-TB, the patient should be started on SHREZ+MXF+cycloserine pending the result of laboratory sensitivity testing. A gene probe for rpoB is available in some countries, and this serves as a useful marker for MDR-TB, because isolated RMP resistance is rare (except when patients have a history of being treated with rifampicin alone). If the results of a gene probe (rpoB) are known to be positive, then it is reasonable to omit RMP and to use SHEZ+MXF+cycloserine. The reason for maintaining the patient on INH despite the suspicion of MDR-TB is that INH is so potent in treating TB that it is foolish to omit it until there is microbiological proof that it is ineffective. There are also probes available for isoniazid resistance (katG and mabA-inhA), but these are less widely available.
When sensitivities are known and the isolate is confirmed as resistant to both INH and RMP, five drugs should be chosen in the following order (based on known sensitivities):

an aminoglycoside (e.g., amikacin, kanamycin) or polypeptide antibiotic (e.g., capreomycin)
PZA
EMB
a fluoroquinolone: moxifloxacin is preferred (ciprofloxacin should no longer be used)
rifabutin
cycloserine
a thioamide: prothionamide or ethionamide
PAS
a macrolide: e.g., clarithromycin
linezolid
high-dose INH (if low-level resistance)
interferon-γ
thioridazine
meropenem and clavulanic acid

Drugs are placed nearer the top of the list because they are more effective and less toxic; drugs are placed nearer the bottom of the list because they are less effective, more toxic, or more difficult to obtain.

Resistance to one drug within a class generally means resistance to all drugs within that class, but a notable exception is rifabutin: rifampicin resistance does not always mean rifabutin resistance, and the laboratory should be asked to test for it. It is only possible to use one drug within each drug class. If it is difficult to find five drugs to use, then the clinician can request that high-level INH resistance be looked for. If the strain has only low-level INH resistance (resistant at 0.2 mg/L INH, but sensitive at 1.0 mg/L INH), then high-dose INH can be used as part of the regimen. When counting drugs, PZA and interferon count as zero; that is to say, when adding PZA to a four-drug regimen, you must still choose another drug to make five. It is not possible to use more than one injectable (STM, capreomycin or amikacin), because the toxic effects of these drugs are additive: if possible, the aminoglycoside should be given daily for a minimum of three months (and perhaps thrice weekly thereafter).
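The drug-counting rule above, under which PZA and interferon contribute to the regimen but count as zero towards the five-drug target, can be sketched as follows. The drug names in the example regimen are illustrative, not a treatment recommendation:

```python
# Sketch of the counting rule in the text: PZA and interferon may be
# included in the regimen but do not count towards the five-drug target.
ZERO_COUNT = {"PZA", "interferon-gamma"}

def effective_drug_count(regimen):
    return sum(1 for drug in regimen if drug not in ZERO_COUNT)

# An illustrative four-drug regimen (not a recommendation):
regimen = ["amikacin", "EMB", "moxifloxacin", "cycloserine"]
print(effective_drug_count(regimen))                        # counts 4
print(effective_drug_count(regimen + ["PZA"]))              # still 4
print(effective_drug_count(regimen + ["PZA", "prothionamide"]))  # now 5
```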
Ciprofloxacin should not be used in the treatment of tuberculosis if other fluoroquinolones are available. There is no intermittent regimen validated for use in MDR-TB, but clinical experience is that giving injectable drugs for five days a week (because there is no one available to give the drug at weekends) does not seem to result in inferior outcomes. Directly observed therapy certainly helps to improve outcomes in MDR-TB and should be considered an integral part of its treatment. Response to treatment must be assessed by repeated sputum cultures (monthly if possible). Treatment for MDR-TB must be given for a minimum of 18 months and cannot be stopped until the patient has been culture-negative for a minimum of nine months. It is not unusual for patients with MDR-TB to be on treatment for two years or more. Patients with MDR-TB should be isolated in negative-pressure rooms, if possible. Patients with MDR-TB should not be accommodated on the same ward as immunosuppressed patients (HIV-infected patients, or patients on immunosuppressive drugs). Careful monitoring of compliance with treatment is crucial to the management of MDR-TB (and some physicians insist on hospitalisation if only for this reason). Some physicians will insist that these patients are isolated until their sputum is smear-negative, or even culture-negative (which may take many months, or even years). Keeping these patients in hospital for weeks (or months) on end may be a practical or physical impossibility, and the final decision depends on the clinical judgement of the physician treating that patient. The attending physician should make full use of therapeutic drug monitoring (particularly of the aminoglycosides), both to monitor compliance and to avoid toxic effects.
Some supplements may be useful as adjuncts in the treatment of tuberculosis, but, for the purposes of counting drugs for MDR-TB, they count as zero (if you already have four drugs in the regimen, it may be beneficial to add arginine or vitamin D or both, but you still need another drug to make five): arginine (some clinical evidence; peanuts are a good source) and vitamin D (some in-vitro evidence; see Vitamin D and tuberculosis treatment). The drugs listed below have been used in desperation, and it is uncertain whether they are effective at all. They are used when it is not possible to find five drugs from the list above: imipenem, co-amoxiclav, clofazimine, prochlorperazine, metronidazole. On 28 December 2012 the US Food and Drug Administration (FDA) approved bedaquiline (marketed as Sirturo by Johnson & Johnson) to treat multi-drug-resistant tuberculosis, the first new treatment in 40 years. Sirturo is to be used in a combination therapy for patients who have failed standard treatment and have no other options. Sirturo is an adenosine triphosphate synthase (ATP synthase) inhibitor. The following drug is an experimental compound that is not commercially available, but which may be obtained from the manufacturer as part of a clinical trial or on a compassionate basis. Its efficacy and safety are unknown: Pretomanid (manufactured by Novartis, developed in partnership with TB Alliance). There is increasing evidence for the role of surgery (lobectomy or pneumonectomy) in the treatment of MDR-TB, although whether this should be performed early or late is not yet clearly defined. See Modern surgical management. Management in Asia The Asia-Pacific region carries 58% of the global tuberculosis burden, which includes multi-drug-resistant tuberculosis. Southeast Asia has high burdens of tuberculosis as a result of inefficient and inadequate health infrastructures.
According to the World Health Organization, many Asian countries have high numbers of tuberculosis cases, but their governments will not invest in new technology to treat their patients. Philippines From 2005 to 2009, the IPHO-Maguindanao, a governmental organization in Maguindanao, Philippines, partnered with the Catholic Relief Services (CRS) to increase tuberculosis awareness. CRS implemented a USAID-assisted project to fund tuberculosis testing. Additionally, they launched an "Advocacy, Communication, and Self-Mobilization" project featuring workshops to encourage testing in communities. Citizens attending religious sermons were able to distribute information about tuberculosis and inform their communities on where to seek treatment and how to adhere to treatment protocols. The DOTS-Plus strategy, designed to deliver from within familiar local institutions, was successful at conveying information about tuberculosis prevention and treatment. India In 1906, India opened its first air sanatorium for treatment and isolation of TB patients. However, the World Health Organization reviewed the national program in India, which lacked funding and treatment regimens that could report accurate tuberculosis case management. By 1945, there were successful immunization screenings due to campaigns that helped spread messages about the prevention of disease. In 1993, the World Health Organization declared tuberculosis a global emergency and recommended that countries adopt the DOTS strategy. Bangladesh, Cambodia, Thailand In Bangladesh, Cambodia, and Indonesia, there is a diagnostic treatment for latent tuberculosis in children below 5 years of age. The IGRA approach (interferon-gamma release assay) is used in these countries. IGRA testing and diagnosis are whole-blood tests in which fresh blood samples are mixed with antigens and controls. The blood of a person infected with tuberculosis will release interferon-gamma when mixed with the antigen.
It is a highly accurate but expensive test, and it is technologically complex for immunocompromised patients. These developing countries have been unable to eliminate tuberculosis effectively because their national health policies did not cover screening and testing for tuberculosis. There were also no programs in place to educate citizens and provide training for healthcare workers. Without the mobilization of sufficient resources and the backing of sustainable government funding, these developing countries failed to adequately provide the treatment and resources necessary to combat tuberculosis. Vietnam According to the WHO, Vietnam ranks 13th on the list of 22 countries with the highest tuberculosis burden in the world. Nearly 400 new cases of TB and 55 deaths occur each day in Vietnam. In 1989, the Ministry of Health in Vietnam addressed the tuberculosis burden by establishing the National Institute of Tuberculosis and Lung Diseases and implemented the DOTS strategy as a national priority. Vietnam's health service system consists of four levels: the central level headed by the Ministry of Health (MOH), provincial health services, district health services, and commune health centers. These departments worked with the National Institute of Tuberculosis and Lung Diseases to ensure that there were treatment and prevention plans for long-term reduction of tuberculosis. In 2002, Vietnam also implemented a communication plan to provide accurate educational information to respond to any barriers or misperceptions about tuberculosis treatment. The government worked with the World Health Organization, the Centers for Disease Control and Prevention, and local medical non-profits such as Friends for International Tuberculosis Relief to provide information about the causes of TB, sources of infection, how it is transmitted, symptoms, treatment, and prevention.
The National Tuberculosis Control Program works closely with the primary health care system at the central, provincial, district, and commune levels, which has proven to be an important measure of its success. Tuberculosis non-profits in Asia Friends for International TB Relief is a small non-governmental organization whose mission is to help prevent tuberculosis and its spread. FIT not only diagnoses patients, but also provides preventative tuberculosis detection, piloting a comprehensive patient-centered TB program that aims to stop TB transmission and reduce suffering. The organization focuses on island screening due to the high level of risk and burden the population faces. Through its method of search, treat, prevent, and integrative sustainability, FIT is working closely with most of the population on the island (roughly 2022 patients), and has partnered with the Ho Chi Minh City Public Health Association on a pilot that provides active community outreach, patient-centric care and stakeholder engagement. Located in Ha Noi, the National Institute of Tuberculosis and Lung Diseases is responsible for the direction and management of TB control activities at the central level. The institute supports the MOH in developing TB-related strategies, and in handling management and professional guidelines for the system. The provincial-level centers diagnose, treat, and manage patients, implement TB policies issued by the NTP, and develop action plans under the guidelines of the Provincial Health Bureau and the provincial TB control committees. The districts are capable of detecting TB and treating patients. All districts have physicians specializing in TB, laboratories, and X-ray equipment, and have either a TB department or a TB-communicable diseases department in the district hospital. The district level is also responsible for implementing and monitoring the NTP, and for the supervision and management of TB programs in the communes.
The commune level provides treatment as prescribed by the district level, administers drugs, and vaccinates children. In TB control, village health workers play critically important roles in identifying suspected TB patients, conducting counseling for examination and tests, paying home visits to patients undergoing treatment, and reporting problems in monthly meetings with the CHC. TB Alliance is a non-governmental organization located in South Africa that was founded in the early 2000s. The NGO is a leading non-profit for global tuberculosis research and the development of new TB vaccines. To advance TB development, TB Alliance creates partnerships with private, public, academic, and philanthropic sectors to develop products in underserved communities. In 2019, TB Alliance became the first not-for-profit organization to develop and register an anti-TB drug. TB Alliance also works closely alongside the World Health Organization (WHO), the U.S. FDA, and the European Medicines Agency (EMA) to endorse regulatory policies and treatments that are affordable. FHI 360 is an international tuberculosis non-profit organization funded by USAID to treat and support patients in Myanmar, China, and Thailand. The organization developed an app called DOTsync for healthcare staff to administer antibiotics and monitor the side effects of patients. This is important to eliminating tuberculosis because it allows healthcare workers to have follow-up checkups with patients to ensure that tuberculosis treatments are effective. Operation ASHA is a TB nonprofit organization founded in 2006. Located in India and Cambodia, Operation ASHA focuses on the development of "e-Compliance", a verification and SMS text messaging system where patients can use their fingerprints to access their medical records and be reminded daily via text when to take their medication. According to Operation ASHA, the e-Compliance treatment success rate is 85%.
Treatment failure Patients who fail treatment must be distinguished from patients who relapse. Patients who responded to treatment and appeared to be cured after completing a course of TB treatment are not classed as treatment failures, but as relapses, and are discussed in a separate section below. Patients are said to have failed treatment if they fail to respond to treatment (cough and sputum production persisting throughout the whole of treatment), or only experience a transient response to treatment (the patient gets better at first, but then gets worse again, all the while on treatment). It is very uncommon for patients not to respond to TB treatment at all (even transiently), because this implies resistance at baseline to all of the drugs in the regimen. Patients who fail to get any response at all while on treatment should first of all be questioned very closely about whether or not they have been taking their medicines, and perhaps even be admitted to hospital to be observed taking their treatment. Blood or urine samples may be taken to check for malabsorption of TB drugs. If it can be shown that they are fully compliant with their medication, then the probability that they have another diagnosis (perhaps in addition to the diagnosis of TB) is very high. These patients should have their diagnosis carefully reviewed and specimens obtained for TB culture and sensitivity testing. Patients who get better and then get worse again should likewise be questioned very closely about adherence to treatment. If adherence is confirmed, then they should be investigated for resistant TB (including MDR-TB), even if a specimen has already been obtained for microbiology before commencing treatment. Prescription or dispensing errors will account for a proportion of patients who fail to respond to treatment. Immune defects are a rare cause of non-response. In a tiny proportion of patients, treatment failure is a reflection of extreme biological variation and no cause is found.
Treatment relapse Patients are said to relapse if they improve while on treatment, but become ill again after stopping treatment. Patients who experience only a transient improvement while on treatment, or who never respond to treatment are said to have failed treatment and are discussed above. There is a small relapse rate associated with all treatment regimens, even if the treatment has been taken religiously with 100% compliance (the standard regimen 2HREZ/4HR has a relapse rate of 2 to 3% under trial conditions). The majority of relapses occur within 6 months of finishing treatment. Patients who are more likely to relapse are those who took their medication in an unreliable and irregular fashion. The probability of resistance is higher in those patients who relapse and every effort must be made to obtain a specimen that can be cultured for sensitivities. That said, most patients who relapse do so with a fully sensitive strain and it is possible that these patients have not relapsed, but have instead been re-infected; these patients can be re-treated with the same regimen as before (no drugs need to be added to the regimen and the duration need not be any longer). The WHO recommends a regimen of 2SHREZ/6HRE when microbiology is not available (the majority of countries where TB is highly endemic). This regimen was designed to provide optimal treatment for fully sensitive TB (the most common finding in patients who have relapsed) as well as to cover the possibility of INH-resistant TB (the most common form of resistance found). Because of the lifelong risk of relapse, all patients should be warned of the symptoms of TB relapse upon finishing treatment and given strict instructions to return to their doctor if symptoms recur. Public health and health policy As of 2010, India has more reported cases of TB than any other country. 
This is in part due to severe mismanagement of diagnosis and treatment of TB within the private health care sector of India, which serves about 50% of the population. There are therefore calls for the private sector to engage in the public Revised National Tuberculosis Control Program, which has proved effective in reducing TB amongst patients receiving health care through the government. Additionally, a study by Maurya et al. conducted in 2013 shows evidence that there is a burden of multidrug-resistant tuberculosis in India and that change is needed for testing, surveillance, monitoring and management. During the COVID-19 pandemic, 80% fewer TB cases were reported daily in April 2020 in India, reducing the diagnosis and treatment of TB. Trial of therapy In areas where TB is highly endemic, it is not unusual to encounter a patient with a fever in whom no source of infection is found. The physician may then, after extensive investigation has excluded all other diseases, resort to a trial of TB treatment. The regimen used is HEZ for a minimum of three weeks; RMP and STM are omitted from the regimen because they are broad-spectrum antibiotics, whereas the other three first-line drugs treat only mycobacterial infection. Resolution of the fever after three weeks of treatment is good evidence for occult TB, and the patient should then be changed to conventional TB treatment (2HREZ/4HR). If the fever does not resolve after three weeks of treatment, then it is reasonable to conclude that the patient has another cause for the fever. This approach is not recommended by the WHO and most national guidelines. Surgical treatment Surgery has played an important part in the management of tuberculosis since the 1930s. Historical surgical management The first successful treatments for tuberculosis were all surgical. They were based on the observation that healed tuberculous cavities were all closed.
Surgical management was therefore directed at closing open cavities to encourage healing. These procedures were all used in the pre-antibiotic era. There exists a myth that surgeons believed that the purpose was to deprive the organism of oxygen: it was, however, well known that the organism survives anaerobic conditions. Although these procedures may be considered barbaric by 21st-century standards, it must be remembered that these treatments represented a potential cure for a disease that at the time had a mortality at least as bad as that of lung cancer in the 2000s. Recurrent or persistent pneumothorax The simplest and earliest procedure was to introduce air into the pleural space so as to collapse the affected lung and therefore the open cavity. The pneumothorax always resolved spontaneously, and the procedure had to be repeated every few weeks. Phrenic nerve crush The phrenic nerve (which supplies the diaphragm) was cut or crushed so as to permanently paralyse the diaphragm on that side. The paralysed diaphragm would then rise up and the lung on that side would collapse, thus closing the cavity. Thoracoplasty When the cavity was located in the apex of the lung, thoracoplasty could be performed. Six to eight ribs were broken and pushed into the thoracic cavity to collapse the lung beneath. This was a disfiguring operation, but it avoided the need for repeated procedures. In the Novosibirsk TB Research Institute (Russia), osteoplastic thoracoplasty (a variant of extrapleural thoracoplasty) has been used for the last 50 years for patients with complicated cavitary forms of TB for whom lung resection is contraindicated. Plombage Plombage reduced the need for a disfiguring operation. It involved inserting porcelain balls into the thoracic cavity to collapse the lung underneath. Surgical resections of infected lungs were rarely attempted during the 1930s and 1940s, due to the extremely high perioperative mortality rate.
Modern surgical management In modern times, the surgical treatment of tuberculosis is confined to the management of multi-drug-resistant TB. A patient with MDR-TB who remains culture-positive after many months of treatment may be referred for lobectomy or pneumonectomy with the aim of cutting out the infected tissue. The optimal timing for surgery has not been defined, and surgery still confers significant morbidity. The centre with the largest experience in the US is the National Jewish Medical and Research Center in Denver, Colorado. From 1983 to 2000, they performed 180 operations in 172 patients; of these, 98 were lobectomies and 82 were pneumonectomies. They report a 3.3% operative mortality, with an additional 6.8% dying following the operation; 12% experienced significant morbidity (particularly extreme breathlessness). Of 91 patients who were culture-positive before surgery, only 4 were culture-positive after surgery. Some complications of treated tuberculosis, such as recurrent hemoptysis, destroyed or bronchiectatic lungs, and empyema (a collection of pus in the pleural cavity), are also amenable to surgical therapy. In extrapulmonary TB, surgery is often needed to make a diagnosis (rather than to effect a cure): surgical excision of lymph nodes, drainage of abscesses, and tissue biopsy are all examples of this. Samples taken for TB culture should be sent to the laboratory in a sterile pot with no additive (not even water or saline) and must arrive in the laboratory as soon as possible. Where facilities for liquid culture are available, specimens from sterile sites may be inoculated directly following the procedure: this may improve the yield. In spinal TB, surgery is indicated for spinal instability (when there is extensive bony destruction) or when the spinal cord is threatened. Therapeutic drainage of tuberculous abscesses or collections is not routinely indicated, as they will resolve with adequate treatment.
In TB meningitis, hydrocephalus is a potential complication and may necessitate the insertion of a ventricular shunt or drain. Nutrition It is well known that malnutrition is a strong risk factor for becoming unwell with TB, that TB is itself a risk factor for malnutrition, and that malnourished patients with TB (BMI less than 18.5) are at an increased risk of death even with appropriate antibiotic therapy. Knowledge about the association between malnutrition and TB is prevalent in some cultures, and may reduce diagnostic delay and improve adherence to treatment. Although blood levels of some micronutrients may be low in people starting treatment for active tuberculosis, a Cochrane review of thirty-five included trials concluded that there is insufficient research to know whether the routine provision of free food or energy supplements improves tuberculosis treatment outcomes. However, nutritional supplementation probably improves weight gain in some settings. Vitamin D and tuberculosis epidemiology Vitamin D deficiency is a risk factor for tuberculosis, and vitamin D deficiency appears to impair the body's ability to fight tuberculosis, but there is no clinical evidence to show that treating vitamin D deficiency prevents tuberculosis, although the available evidence suggests that it should. Reduced levels of vitamin D may explain the increased susceptibility of African-Americans to tuberculosis, and may also explain why phototherapy is effective for lupus vulgaris (tuberculosis of the skin), a finding which won Niels Finsen the Nobel Prize in 1903, because skin exposed to sunlight naturally produces more vitamin D.
Concerns that tuberculosis treatment itself decreases vitamin D levels appear not to be an issue in clinical practice. Genetic differences in the vitamin D receptor in West African, Gujarati and Chinese populations have been noted to affect susceptibility to tuberculosis, but there are no data available in any population showing that vitamin D supplementation (that is, giving extra vitamin D to people with normal vitamin D levels) has any effect on susceptibility to TB. Vitamin D and tuberculosis treatment Giving vitamin D to TB patients who are vitamin D deficient may be beneficial in a proportion of patients. When taken as a group, vitamin D supplementation appears to have no benefit when using sputum culture conversion as an endpoint, and giving vitamin D supplements to TB patients who have normal vitamin D levels does not provide any benefit from the point of view of TB. In a subset of patients with the tt genotype of the TaqI vitamin D receptor who are vitamin D deficient, vitamin D supplementation appears to hasten sputum culture conversion. There are no studies of vitamin D using the gold-standard outcome of relapse, so the true benefit of vitamin D is not at present known. It was noted as early as the mid-19th century that cod liver oil (which is rich in vitamin D) improved patients with tuberculosis, and the mechanism for this is probably an enhancement of immune responses to tuberculosis. The addition of vitamin D appears to enhance the ability of monocytes and macrophages to kill M. tuberculosis in vitro, as well as ameliorating potentially harmful effects of the human immune system. Another reason vitamin D may be useful in treating mycobacterial infections such as tuberculosis is its influence on pro- and anti-inflammatory cytokines: vitamin D has an anti-inflammatory effect in tuberculosis. Other Arginine has some clinical evidence as an adjuvant.
Mycobacterium vaccae preparations have completed Phase III trials: an injectable form, Vaccae(TM), by Anhui Zhifei Longcom Biologic Pharmacy Co., Ltd., and an oral tablet, Tubivac (V7), by Immunitor LLC. Latent tuberculosis The treatment of latent tuberculosis infection (LTBI) is essential to controlling and eliminating TB by reducing the risk that TB infection will progress to disease. The terms "preventive therapy" and "chemoprophylaxis" have been used for decades and are preferred in the UK because the medication is given to people who have no active disease and are currently well; the reason for treatment is primarily to prevent people from becoming unwell. The term "latent tuberculosis treatment" is preferred in the US because the medication does not actually prevent infection: it prevents an existing silent infection from becoming active. The feeling in the US is that the term "treatment of LTBI" promotes wider implementation by convincing people that they are receiving treatment for disease. There are no convincing reasons to prefer one term over the other. It is essential that assessment to rule out active TB is carried out before treatment for LTBI is started. To give LTBI treatment to someone with active TB is a serious error: the TB will not be adequately treated and there is a risk of developing drug-resistant strains of TB. There are several treatment regimens available: 9H—Isoniazid for 9 months is the gold standard and is 93% effective. 6H—Isoniazid for 6 months might be adopted by a local TB program based on cost-effectiveness and patient compliance. This is the regimen currently recommended in the UK for routine use. The US guidance excludes this regimen from use in children or persons with radiographic evidence of prior tuberculosis (old fibrotic lesions). (69% effective) 6 to 9H2—A twice-weekly version of the above two treatment regimens is an alternative if administered under directly observed therapy (DOT).
4R—Rifampicin for 4 months is an alternative for those who are unable to take isoniazid or who have had known exposure to isoniazid-resistant TB. 3HR—Isoniazid and rifampicin may be given for 3 months. 2RZ—The 2-month regimen of rifampicin and pyrazinamide is no longer recommended for treatment of LTBI because of the greatly increased risk of drug-induced hepatitis and death. 3RPT/INH—3-month (12-dose) regimen of weekly rifapentine and isoniazid. Evidence for treatment effectiveness: A 2000 Cochrane review containing 11 double-blinded, randomized controlled trials and 73,375 patients examined six- and 12-month courses of isoniazid (INH) for treatment of latent tuberculosis. HIV-positive patients and patients currently or previously treated for tuberculosis were excluded. The main result was a relative risk (RR) of 0.40 (95% confidence interval (CI) 0.31 to 0.52) for development of active tuberculosis over two years or longer for patients treated with INH, with no significant difference between treatment courses of six or 12 months (RR 0.44, 95% CI 0.27 to 0.73 for six months, and RR 0.38, 95% CI 0.28 to 0.50 for 12 months). A 2013 systematic review published by the Cochrane Collaboration compared rifamycins (monotherapy and combination therapy) to INH monotherapy as an alternative in preventing active TB in HIV-negative populations. The evidence suggested that shorter rifampicin regimens (3 or 4 months) had higher treatment completion rates and fewer adverse events when compared to INH. However, the overall quality of evidence as per GRADE criteria was low to moderate. Another meta-analysis came to a similar conclusion, namely that rifamycin-containing regimens taken for 3 months or longer had a better profile in preventing TB reactivation.
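The relative-risk figures quoted above follow the standard epidemiological calculation: the event rate in the treated group divided by the event rate in the control group, with a confidence interval computed on the log scale. A minimal sketch, using invented illustrative counts rather than the actual trial data cited above:

```python
import math

def relative_risk(events_treat, n_treat, events_ctrl, n_ctrl):
    """Relative risk with a 95% CI via the usual normal approximation
    on the log scale (standard error of log RR)."""
    rr = (events_treat / n_treat) / (events_ctrl / n_ctrl)
    se = math.sqrt(1 / events_treat - 1 / n_treat + 1 / events_ctrl - 1 / n_ctrl)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Illustrative counts only (not the Cochrane trial data): 40 cases among
# 1000 treated vs 100 cases among 1000 controls gives RR = 0.04/0.10 = 0.40.
rr, lo, hi = relative_risk(40, 1000, 100, 1000)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # -> RR = 0.40 (95% CI 0.28 to 0.57)
```

An RR below 1 with a CI that excludes 1, as in the review's result of 0.40 (0.31 to 0.52), indicates a protective effect of treatment.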
Research There is some evidence from animal and clinical studies suggesting that moxifloxacin-containing regimens as short as four months may be as effective as six months of conventional therapy. Bayer is currently running a phase II clinical trial in collaboration with the TB Alliance to evaluate shorter treatment regimens for TB; encouragingly, Bayer has also promised that if the trials are successful, it will make moxifloxacin affordable and accessible in countries that need it. Another approach for anti-TB drug development, which does not rely on antibiotics, consists of targeting NAD+ synthase, an essential enzyme in tuberculosis bacteria but not in humans. Low-level laser therapy for treating tuberculosis is not supported by reliable evidence. History Streptomycin and para-aminosalicylic acid were developed by the mid-1940s. In 1960, Edinburgh City Hospital physician Sir John Crofton addressed the Royal College of Physicians in London with a lecture titled "Tuberculosis Undefeated", and proposed that "the disease could be conquered, once and for all". With his colleagues at Edinburgh, he recognised that even mild resistance to a single drug was significant. His team showed that when treating new cases of TB, strict compliance with a combination of three therapies, or triple therapy (streptomycin, para-aminosalicylic acid and isoniazid), could provide a complete cure. It became known as the Edinburgh method and remained standard treatment for at least 15 years. In the 1970s it was recognised that combining isoniazid and rifampin could reduce the duration of treatment from 18 to nine months, and in the 1980s the duration of treatment was further shortened by adding pyrazinamide.
National and international guidelines See also Modern era ATC code J04 Drugs for treatment of TB Mantoux test Heaf test TB Alliance Tuberculosis management in the era before antituberculosis drugs History of tuberculosis Tuberculosis treatment in Colorado Springs (historical) References This article incorporates public domain material from websites or documents of the Centers for Disease Control and Prevention. == Further reading ==
You are a medical educator. Your role is to provide detailed and clear explanations for medical terms, suitable for both professionals and laypersons.
Could you please explain the term 'Pityriasis rotunda' in simple language?
Pityriasis rotunda is a disorder of keratinisation of the skin that manifests as perfectly circular, scaly patches on the torso and proximal portions of the extremities. It may be associated with diseases like hepatocellular carcinoma in racially predisposed groups. See also Skin lesion List of cutaneous conditions References == External links ==
You function as a medical explainer. Your task is to convey the meanings of medical terms in a detailed, yet easy-to-understand manner.
Can you demystify the medical term 'Levosalbutamol' for me?
Levosalbutamol, also known as levalbuterol, is a short-acting β2 adrenergic receptor agonist used in the treatment of asthma and chronic obstructive pulmonary disease (COPD). Evidence is inconclusive regarding the efficacy of levosalbutamol versus salbutamol or salbutamol-levosalbutamol combinations; however, levosalbutamol is believed to have a better safety profile due to its more selective binding to β2 receptors (primarily in the lungs) versus β1 receptors (primarily in heart muscle). The drug is the (R)-(−)-enantiomer of its prototype drug salbutamol. It is available in some countries in generic formulations from pharmaceutical companies including Cipla, Teva, and Dey, among others. Medical use Levosalbutamol's bronchodilator properties give it indications in the treatment of COPD (chronic obstructive pulmonary disease, also known as chronic obstructive lung disease) and asthma. Like other bronchodilators, it acts by relaxing smooth muscle in the bronchial tubes, thus shortening or reversing an acute "attack" of shortness of breath or difficulty breathing. Unlike some slower-acting bronchodilators, it is not indicated as a preventative of chronic bronchial constriction. Comparison to salbutamol A 2013 systematic review of the drug's use as a treatment for acute asthma found that it "was not superior to albuterol regarding efficacy and safety in subjects with acute asthma." The review concluded: "We suggest that levalbuterol should not be used over albuterol for acute asthma." Levalbuterol is notably more costly. Adverse effects Generally, levosalbutamol is well tolerated.
Common mild side-effects include an elevated heart rate, muscle cramps, and gastric upset (including heartburn and diarrhea). Symptoms of overdose in particular include: collapse into a seizure; chest pain (possible precursor of a heart attack); fast, pounding heartbeat, which may cause raised blood pressure (hypertension); irregular heartbeat (cardiac arrhythmia), which may cause paradoxical lowered blood pressure (hypotension); nervousness and tremor; headache; dizziness and nausea/vomiting; weakness or exhaustion (medical fatigue); dry mouth; and insomnia. Rarer side effects may indicate a dangerous allergic reaction. These include: paradoxical bronchospasm (shortness of breath and difficulty breathing); skin itching, rash, or hives (urticaria); swelling (angioedema) of any part of the face or throat (which can lead to voice hoarseness), or swelling of the extremities. Pharmacology Mechanism of action Activation of β2 adrenergic receptors on airway smooth muscle leads to the activation of adenylate cyclase and to an increase in the intracellular concentration of 3′,5′-cyclic adenosine monophosphate (cyclic AMP). The increase in cyclic AMP is associated with the activation of protein kinase A, which, in turn, inhibits the phosphorylation of myosin and lowers intracellular ionic calcium concentrations, resulting in muscle relaxation. Levosalbutamol relaxes the smooth muscles of all airways, from the trachea to the terminal bronchioles. Increased cyclic AMP concentrations are also associated with the inhibition of the release of mediators from mast cells in the airways. Levosalbutamol acts as a functional agonist that relaxes the airway irrespective of the spasmogen involved, thereby protecting against all bronchoconstrictor challenges. While it is recognized that β2 adrenergic receptors are the predominant receptors on bronchial smooth muscle, data indicate that there are beta receptors in the human heart, 10–50% of which are β2 adrenergic receptors. 
The precise function of these receptors has not been established. However, all β adrenergic agonist drugs can produce a significant cardiovascular effect in some patients, as measured by pulse rate, blood pressure, and restlessness symptoms, and/or electrocardiographic (ECG) changes. Approval and names Levosalbutamol is the INN while levalbuterol is the USAN. Levalbuterol was approved in the United States as a solution to be used with a nebulizer device in March 1999 and in March 2015 became available in a formulation with a metered-dose inhaler under the trade name Xopenex HFA (levalbuterol tartrate inhalation aerosol). See also Salbutamol — the racemic mixture containing both (R)-(−)- and (S)-(+)-enantiomers References External links "Levalbuterol". Drug Information Portal. U.S. National Library of Medicine.
You act as a mediator between medical professionals and the public. Provide comprehensive explanations of medical terms, ensuring they are both precise and easily understood.
Can you break down the meaning of the medical term 'Neuroma' for me?
A neuroma (; plural: neuromata or neuromas) is a growth or tumor of nerve tissue. Neuromas tend to be benign (i.e. not cancerous); many nerve tumors, including those that are commonly malignant, are nowadays referred to by other terms. Neuromas can arise from different types of nervous tissue, including the nerve fibers and their myelin sheath, as in the case of genuine neoplasms (growths) like ganglioneuromas and neurinomas. The term is also used to refer to any swelling of a nerve, even in the absence of abnormal cell growth. In particular, traumatic neuroma results from trauma to a nerve, often during a surgical procedure. Morton's neuroma affects the foot. Neuromas can be painful, or sometimes, as in the case of acoustic neuromas, can give rise to other symptoms. Neoplasms Acoustic neuroma - a slow-growing, benign tumor of the acoustic nerve. Symptoms, which most often start after the age of 30, can include dizziness, headache, vertigo, loss of balance, ringing sensations, and numbness. Ganglioneuroma - a tumor of the sympathetic nerve fibers arising from neural crest cells. Pacinian neuroma - a very rare, painful, benign hyperplastic tumor of Pacinian corpuscles (mechanoreceptors responsible for sensitivity to vibration and pressure), sometimes linked to a history of local trauma. Other nerve swellings Some of the benign varieties of neuroma, in the broadest sense of the term, are not neoplasms. Traumatic neuromas follow different forms of nerve injury (often as a result of surgery). They occur at the end of injured nerve fibers as a form of ineffective, unregulated nerve regeneration, most commonly near a scar, either superficially (skin, subcutaneous fat) or deep (e.g., after a cholecystectomy). They are often very painful. Synonyms include scar neuroma, amputation neuroma, or pseudoneuroma. Morton's neuroma (a mononeuropathy of the foot) is another example of the more general usage of the term neuroma. 
Some prefer the term "Morton's metatarsalgia", thus avoiding the term neuroma and its association with tumors. Etymology The stem neuro- originates from the Greek word for nerve (νεῦρον), while the suffix -oma (-ωμα) denotes swelling. The stem does not imply that neuromas necessarily arise from neurons; neuromas generally arise from non-neuronal nerve tissues. The word was originally used to refer to any nerve tumor, but its meaning has evolved. == References ==
You are a resource for medical understanding. Offer detailed explanations of medical terms, making complex concepts clear and comprehensible.
Could you please explain the term 'Hypertonia' in simple language?
Hypertonia is a term sometimes used synonymously with spasticity and rigidity in the literature surrounding damage to the central nervous system, namely upper motor neuron lesions. Impaired ability of damaged motor neurons to regulate descending pathways gives rise to disordered spinal reflexes, increased excitability of muscle spindles, and decreased synaptic inhibition. These consequences result in abnormally increased muscle tone of symptomatic muscles. Some authors suggest that the current definition for spasticity, the velocity-dependent over-activity of the stretch reflex, is not sufficient as it fails to take into account patients exhibiting increased muscle tone in the absence of stretch reflex over-activity. They instead suggest that "reversible hypertonia" is more appropriate and represents a treatable condition that is responsive to various therapy modalities like drug and/or physical therapy. Presentation Symptoms associated with central nervous system disorders are classified into positive and negative categories. Positive symptoms include those that increase muscle activity through hyper-excitability of the stretch reflex (i.e., rigidity and spasticity), whereas negative symptoms include those of insufficient muscle activity (i.e. weakness) and reduced motor function. Often the two classifications are thought to be separate entities of a disorder; however, some authors propose that they may be closely related. Pathophysiology Hypertonia is caused by upper motor neuron lesions which may result from injury, disease, or conditions that involve damage to the central nervous system. The lack of or decrease in upper motor neuron function leads to loss of inhibition with resultant hyperactivity of lower motor neurons. 
Different patterns of muscle weakness or hyperactivity can occur based on the location of the lesion, causing a multitude of neurological symptoms, including spasticity, rigidity, or dystonia. Spastic hypertonia involves uncontrollable muscle spasms, stiffening or straightening out of muscles, shock-like contractions of all or part of a group of muscles, and abnormal muscle tone. It is seen in disorders such as cerebral palsy, stroke, and spinal cord injury. Rigidity is a severe state of hypertonia where muscle resistance occurs throughout the entire range of motion of the affected joint independent of velocity. It is frequently associated with lesions of the basal ganglia. Individuals with rigidity present with stiffness, decreased range of motion and loss of motor control. Dystonic hypertonia refers to muscle resistance to passive stretching (in which a therapist gently stretches the inactive contracted muscle to a comfortable length at very low speeds of movement) and a tendency of a limb to return to a fixed involuntary (and sometimes abnormal) posture following movement. Management Therapeutic interventions are best individualized to particular patients. Basic principles of treatment for hypertonia are to avoid noxious stimuli and provide frequent range of motion exercise. Physical interventions Physiotherapy has been shown to be effective in controlling hypertonia through the use of stretching aimed to reduce motor neuron excitability. The aim of a physical therapy session could be to inhibit excessive tone as far as possible, give the patient a sensation of normal position and movement, and to facilitate normal movement patterns. While static stretch has been the classical means to increase range of motion, PNF stretching has been used in many clinical settings to effectively reduce muscle spasticity. Icing and other topical anesthetics may decrease the reflexive activity for a short period of time in order to facilitate motor function. 
Inhibitory pressure (applying firm pressure over a muscle tendon) and promoting body heat retention and rhythmic rotation (slow repeated rotation of the affected body part to stimulate relaxation) have also been proposed as potential methods to decrease hypertonia. Aside from static stretch casting, splinting techniques are extremely valuable to extend joint range of motion lost to hypertonicity. A more unconventional method for limiting tone is to deploy quick repeated passive movements to an involved joint in cyclical fashion; this has also been demonstrated to show results on persons without physical disabilities. For a more permanent state of improvement, exercise and patient education are imperative. Isokinetic, aerobic, and strength training exercises should be performed as prescribed by a physiotherapist, and stressful situations that may cause increased tone should be minimized or avoided. Pharmaceutical interventions Baclofen, diazepam and dantrolene remain the three most commonly used pharmacologic agents in the treatment of spastic hypertonia. Baclofen is generally the drug of choice for spinal cord types of spasticity, while sodium dantrolene is the only agent which acts directly on muscle tissue. Tizanidine is also available. Phenytoin with chlorpromazine may be potentially useful if sedation does not limit their use. Ketazolam, not yet available in the United States, may be a significant addition to the pharmacologic set of options. Intrathecal administration of antispastic medications allows for high concentrations of drug near the site of action, which limits side effects. See also Dystonia Hypotonia Spasticity Clasp-knife response References == External links ==
You function as a medical informant. Please provide in-depth yet accessible descriptions of medical terms, suitable for a broad audience.
What does the medical term 'Dipylidium caninum' encompass?
Dipylidium caninum, also called the flea tapeworm, double-pored tapeworm, or cucumber tapeworm (in reference to the shape of its cucumber-seed-like proglottids, though these also resemble grains of rice or sesame seeds), is a cyclophyllid cestode that infects organisms afflicted with fleas and canine chewing lice, including dogs, cats, and sometimes human pet-owners, especially children. Adult morphology The adult worm is about 18 inches (46 cm) long. Gravid proglottids containing the worm's microscopic eggs are either passed in the definitive host's feces or may leave their host spontaneously and are then ingested by microscopic flea larvae (the intermediate hosts) in the surrounding environment. As in all members of family Dipylidiidae, proglottids of the adult worm have genital pores on both sides (hence the name double-pore tapeworm). Each side has a set of male and female reproductive organs. The uterus is paired with 16 to 20 radial branches each. The scolex has a retractable rostellum with four rows of hooks, along with the four suckers that all cyclophyllid cestodes have. Life cycle The definitive host within this life cycle is primarily canines, and occasionally felines, and in rare cases young children. The intermediate hosts include fleas (Ctenocephalides spp.) and chewing lice. The first stage in the life cycle is when the gravid proglottids are either passed out through faecal matter, or actively crawl out of the anus of the host. The gravid proglottids, once out of the definitive host, release eggs. Then, an intermediate host (the larval stage of a flea or chewing louse) will ingest an egg, which develops into a cysticercoid larva. The cysticercoid larva remains viable, but is not infective to carnivores until the flea hatches to an adult and begins feeding on a host (e.g. a dog). Approximately 36 hours after the flea has consumed a blood meal, the infective metacestode develops inside the flea. 
The metacestode larva must be ingested in a flea by the dog or cat during grooming in order to develop. Humans can also become infected by D. caninum by accidentally ingesting an infected flea. In the small intestine of the definitive host, the metacestode develops into an adult tapeworm, which reaches maturity 4–6 weeks after ingestion. This adult tapeworm produces proglottids, and over time, the proglottids mature and become gravid and eventually detach from the tapeworm and the life cycle starts all over again. Geographic Distribution This parasite occurs worldwide in animals, such as dogs and cats, as well as in humans, though to a significantly lesser degree. It is the most common tapeworm of dogs and is relatively common in cats. Despite human dipylidiasis being rare, instances have been reported from every inhabited continent. Human instances of dipylidiasis are reported globally, and unsurprisingly roughly one third of the cases occur in children less than 6 months of age. The most at-risk age group is those that range from 2 months to 4 years old. Pet infections Tapeworm infection usually does not cause pathology in the dog or cat, and most pets show no adverse reaction to infection other than increased appetite. The bulk of infections are asymptomatic, and the infections that do result in symptoms are generally mild. Pets' behavior may reflect the presence of anal discomfort and itching, or pruritus. This could result in "butt-scooching" across the floor, grass or carpeting. It may be accompanied by slight gastrointestinal disturbances, as this is the region where the worms inhabit. Though not a pathology of dipylidiasis itself, the most unnerving sign of the infection is the presence of proglottids in the animal's, or child's, feces. These proglottids can also be found near the perianal region, in the feces, and in diapers (children). 
The motile proglottids can actively crawl out of the anus of the infected animal/person and migrate small distances, thus potentially covering this array of neighboring surfaces. It is from these locations that the larval stage of the flea will come along and ingest them. Then the metacestode stage, a cysticercoid, develops in the coelomic cavity (abdominal cavity; main body cavity) of the flea larva and remains there as the flea matures into an adult. These freshly passed proglottids are motile, allowing them to also be found on the floor and furniture after migrating out of a pet's anus, and could be compared to fly larvae, or maggots. The other tapeworm infecting cats is Taenia taeniaeformis, though this form is much less commonly encountered than D. caninum. A recent (2018) study using genetic analysis and experimental infections and life-cycles showed that two distinct genotypes of D. caninum occur respectively in dogs and in cats, and suggested that two different species might be involved. Human infections A human infection with D. caninum is rare, but if an infection does occur, it is more likely to occur in young children. As of the early 1960s, the number of cases of D. caninum in the U.S. was a mere 21. Therefore, human infection with Dipylidium caninum, or dipylidiasis, is a rare occurrence. It is largely agreed across the parasitology community that despite the reports of this disease occurring, there are very likely numerous cases that have gone unnoticed and unreported because of its subtle and minor pathology in humans, in addition to its scarceness in clinical records. The adult tapeworm grows within the host for 3–4 weeks after initial infection. The number of parasites the host is initially infected with is directly related to the number of cysticercoid juveniles present in the flea's coelom. 
Fortunately, the parasite load in humans is lower, as the life cycle is not occurring in its ideal conditions or species, humans not being the definitive host. Many cases have an uneventful course of infection, as can be seen in two cases occurring in the 1960s. The first case occurred in a 9-month-old female. The mother found motile proglottids in the child's diaper, later identified as D. caninum. The child had no apparent signs or symptoms. The presumed source of infection was one of the family's four Labrador retrievers, two of which were found to already have been infected with D. caninum. The second case occurred in an 18-month-old male. The mother found motile proglottids in the child's diaper and, again, the child was symptom-free. A puppy in the household was found to be infected and thus was the source of the child's infection. Young children and toddlers are at a greater risk of infection because of how they interact with their pets. A human may acquire an infection by accidentally ingesting an infected flea through food contamination or through the saliva of pets. Most infections are asymptomatic, but sometimes these symptoms may be identified in an infected individual: mild diarrhea, abdominal colic, anorexia, restlessness, constipation, rectal itching, and pain due to emerging proglottids through the anal cavity. Treatment and prevention As with most tapeworm infections, the drugs of choice to kill adult tapeworms are praziquantel or niclosamide. Pets can be prevented from becoming infested with tapeworm if they are treated prophylactically with a product which kills the intermediate host (the flea) before the infective metacestode can develop. Some isoxazoline products are registered to prevent flea tapeworm infestations using this method. Gallery References External links tapeworms from The Pet Health Library
You are an expert in medical terminology. Explain medical terms in a way that is both precise and easy to understand for non-medical audiences.
The term 'Polycythemia' keeps coming up in medical discussions. What does it stand for?
Polycythemia (also known as polycythaemia, polyglobulia, and erythrocytosis) is a disease state in which the hematocrit (the volume percentage of red blood cells in the blood) and/or hemoglobin concentration are elevated in peripheral blood. It can be due to an increase in the number of red blood cells ("absolute polycythemia") or to a decrease in the volume of plasma ("relative polycythemia"). Polycythemia is sometimes called erythrocytosis, but the terms are not synonymous, because polycythemia describes any increase in red blood mass (whether due to an erythrocytosis or not), whereas erythrocytosis is a documented increase of red cell count. The emergency treatment of polycythemia (e.g., in hyperviscosity or thrombosis) is by phlebotomy (removal of blood from the circulation). Depending on the underlying cause, phlebotomy may also be used on a regular basis to reduce the hematocrit. Myelosuppressive medications such as hydroxyurea are sometimes used for long-term management of polycythemia. Absolute polycythemia The overproduction of red blood cells may be due to a primary process in the bone marrow (a so-called myeloproliferative syndrome), or it may be a reaction to chronically low oxygen levels or, rarely, a malignancy. Alternatively, additional red blood cells may have been received through another process—for example, being over-transfused (either accidentally or, as blood doping, deliberately) or being the recipient twin in a pregnancy, undergoing twin-to-twin transfusion syndrome. Primary polycythemia Primary polycythemias are due to factors intrinsic to red cell precursors. Polycythemia vera (PCV), polycythemia rubra vera (PRV), or erythremia, occurs when excess red blood cells are produced as a result of an abnormality of the bone marrow. Often, excess white blood cells and platelets are also produced. PCV is classified as a myeloproliferative disease. 
Symptoms include headaches and vertigo, and signs on physical examination include an abnormally enlarged spleen and/or liver. In some cases, affected individuals may have associated conditions including high blood pressure or formation of blood clots. Transformation to acute leukemia is rare. Phlebotomy is the mainstay of treatment. A hallmark of polycythemia is an elevated hematocrit, with Hct > 55% seen in 83% of cases. A somatic (non-hereditary) mutation (V617F) in the JAK2 gene, also present in other myeloproliferative disorders, is found in 95% of cases. Primary familial polycythemia, also known as primary familial and congenital polycythemia (PFCP), exists as a benign hereditary condition, in contrast with the myeloproliferative changes associated with acquired PCV. In many families, PFCP is due to an autosomal dominant mutation in the EPOR erythropoietin receptor gene. PFCP can cause an increase of up to 50% in the oxygen-carrying capacity of the blood; skier Eero Mäntyranta had PFCP, which is considered to have given him a large advantage in endurance events. Secondary polycythemia Secondary polycythemia is caused by either natural or artificial increases in the production of erythropoietin, hence an increased production of erythrocytes. In secondary polycythemia, 6 to 8 million and occasionally 9 million erythrocytes may occur per cubic millimeter of blood. Secondary polycythemia resolves when the underlying cause is treated. Secondary polycythemia in which the production of erythropoietin increases appropriately is called physiologic polycythemia. Conditions which may result in a physiologically appropriate polycythemia include: Altitude related – This physiologic polycythemia is a normal adaptation to living at high altitudes (see altitude sickness). Many athletes train at high altitude to take advantage of this effect, which can be considered a legal form of blood doping. 
Some individuals believe athletes with primary polycythemia may have a competitive advantage due to greater stamina. However, this has yet to be proven due to the multifaceted complications associated with this condition. Hypoxic disease-associated – for example in cyanotic heart disease where blood oxygen levels are reduced significantly; may also occur as a result of hypoxic lung disease such as COPD and as a result of chronic obstructive sleep apnea. Iatrogenic – Secondary polycythemia can be induced directly by phlebotomy (blood letting) to withdraw some blood, concentrate the erythrocytes, and return them to the body. Genetic – Heritable causes of secondary polycythemia also exist and are associated with abnormalities in hemoglobin oxygen release. This includes patients who have a special form of hemoglobin known as Hb Chesapeake, which has a greater inherent affinity for oxygen than normal adult hemoglobin. This reduces oxygen delivery to the kidneys, causing increased erythropoietin production and a resultant polycythemia. Hemoglobin Kempsey also produces a similar clinical picture. These conditions are relatively uncommon. Conditions where the secondary polycythemia is not caused by physiologic adaptation, and occurs irrespective of body needs, include: Neoplasms – Renal cell carcinoma or liver tumors, von Hippel-Lindau disease, and endocrine abnormalities including pheochromocytoma and adrenal adenoma with Cushing's syndrome. People whose testosterone levels are high because of the use of anabolic steroids, including athletes who abuse steroids, or people on testosterone replacement for hypogonadism or transgender hormone replacement therapy, may develop secondary polycythemia. Blood doping – Secondary polycythemia can be induced by endurance athletes who take erythropoietin-stimulating agents, receive blood transfusions, and take other measures to increase their red blood cell mass in an effort to increase oxygen transport capacity. 
Altered oxygen sensing Inherited mutations in three genes which all result in increased stability of hypoxia-inducible factors, leading to increased erythropoietin production, have been shown to cause erythrocytosis: Chuvash erythrocytosis is an autosomal recessive form of erythrocytosis which is endemic in patients from the Chuvash Republic in Russia. Chuvash erythrocytosis is associated with homozygosity for a C598T mutation in the von Hippel-Lindau gene (VHL), which is needed for the destruction of hypoxia-inducible factors in the presence of oxygen. Clusters of patients with Chuvash erythrocytosis have been found in other populations, such as on the Italian island of Ischia, located in the Bay of Naples. PHD2 erythrocytosis: Heterozygosity for loss-of-function mutations of the PHD2 gene is associated with autosomal dominant erythrocytosis and increased hypoxia-inducible factor activity. HIF2α erythrocytosis: Gain-of-function mutations in HIF2α are associated with autosomal dominant erythrocytosis and pulmonary hypertension. Relative polycythemia Relative polycythemia is an apparent rise of the erythrocyte level in the blood; however, the underlying cause is reduced blood plasma (hypovolemia, cf. dehydration). Relative polycythemia is often caused by loss of body fluids, such as through burns, dehydration, and stress. A specific type of relative polycythemia is Gaisböck syndrome. In this syndrome, primarily occurring in obese men, hypertension causes a reduction in plasma volume, resulting in (amongst other changes) a relative increase in red blood cell count. Symptoms Doctors say that patients may not experience any notable symptoms of PV until the late stages. Although vague, these symptoms might help patients get help in the early years of the progression. 
Severe headache Dizziness, fatigue, and tiredness Unusual bleeding, nosebleeds Pain Itchiness Numbness or tingling in different body parts Notable people Polycythemia is linked to increased performance in endurance sports due to the blood being able to carry more oxygen. It can also be linked to damage from smoking. Eero Mäntyranta, Finnish cross-country skier Bob Newhart, American comedian See also Cytopenia, a decrease in blood cell count Capillary leak syndrome, another cause of hemoconcentration References == External links ==
You are a guide to medical language. Please provide in-depth and understandable explanations of medical terms, highlighting their significance in healthcare.
I've come across the term 'Periapical periodontitis' in a medical context, but I'm not sure what it means. Can you clarify?
Periapical periodontitis or apical periodontitis (AP) is an acute or chronic inflammatory lesion around the apex of a tooth root, most commonly caused by bacterial invasion of the pulp of the tooth. It is a likely outcome of untreated dental caries (tooth decay), and in such cases it can be considered a sequela in the natural history of tooth decay, irreversible pulpitis and pulpal necrosis. Other causes can include occlusal trauma due to high spots after restoration work, extrusion from the tooth of root filling material, or bacterial invasion and infection from the gums. Periapical periodontitis may develop into a periapical abscess, where a collection of pus forms at the end of the root, the consequence of spread of infection from the tooth pulp (odontogenic infection), or into a periapical cyst, where an epithelial lined, fluid-filled structure forms. Etymologically, the name refers to inflammation (Latin, -itis) around (peri-) the root tip or apex (-apical) of the tooth (-odont-). Periradicular periodontitis is an alternative term. Diagnosis The radiographic features of periapical inflammatory lesions vary depending on the time course of the lesion. Because very early lesions may not show any radiographic changes, diagnosis of these lesions relies solely on the clinical symptoms. More chronic lesions may show lytic (radiolucent) or sclerotic (radiopaque) changes, or both. Classification Classification of periapical periodontitis is usually based on whether the process is acute/symptomatic or chronic/asymptomatic. (Note: alternative names for periapical periodontitis include apical periodontitis and periradicular periodontitis.) Acute periapical periodontitis Acute (or symptomatic) periapical periodontitis may present with malaise and throbbing pain due to inflammation within the periodontal ligament. Chronic periapical periodontitis Chronic (or asymptomatic) periapical periodontitis. 
Related lesions In addition to periapical abscesses, periapical periodontitis can give rise to various related lesions, including periapical granulomas and cysts. A periapical granuloma (also referred to as an apical granuloma or radicular granuloma) is a mass of chronically inflamed granulation tissue that forms at the apex of the root of a nonvital (dead) tooth. (Although not a true granuloma, given the absence of granulomatous inflammation, the term periapical granuloma is widely accepted.) Treatment Treatment options may include antibiotic therapy (in the short term), root canal therapy, or extraction. Epidemiology Periapical periodontitis of some form is a very common condition. The prevalence of periapical periodontitis is generally reported to vary according to age group, e.g. 33% in those aged 20–30, 40% in 30- to 40-year-olds, 48% in 40- to 50-year-olds, 57% in 50- to 60-year-olds and 62% in those over the age of 60. Most epidemiologic data has been generated in European countries, especially Scandinavia. While millions of root canal treatments are carried out in the United States alone each year, total numbers of such cases do not provide reliable indicators of frequency, even for symptomatic periapical periodontitis (given that root canal treatment is not always indicated or complied with, and may also be performed in the absence of periapical periodontitis). References == External links ==
You act as a mediator between medical professionals and the public. Provide comprehensive explanations of medical terms, ensuring they are both precise and easily understood.
I've encountered the term 'Gorham's disease' while reading about medical topics. What does it refer to exactly?
Gorham's disease (pronounced GOR-amz), also known as Gorham vanishing bone disease and phantom bone disease, is a very rare skeletal condition of unknown cause, characterized by the uncontrolled proliferation of distended, thin-walled vascular or lymphatic channels within bone, which leads to resorption and replacement of bone with angiomas and/or fibrosis. Signs and symptoms The symptoms of Gorham's disease vary depending on the bones involved. It may affect any part of the skeleton, but the most common sites of disease are the shoulder, skull, pelvic girdle, jaw, ribs, and spine. In some cases, no symptoms are seen until a fracture occurs either spontaneously or following minor trauma, such as a fall. An acute onset of localized pain and swelling may occur. More commonly, pain of no apparent cause increases in frequency and intensity over time and may eventually be accompanied by weakness and noticeable deformity of the area. The rate of progression is unpredictable, and the prognosis can be difficult to predict. The disease may stabilize after a number of years, go into spontaneous remission, or, in cases involving the chest and upper spine, prove fatal. Recurrence of the disease following remission can also occur. Involvement of the spine and skull base may cause a poor outcome from neurological complications. In many cases, the result of Gorham's disease is severe deformity and functional disability. Symptoms such as difficulty breathing and chest pain may be present if the disease is present in the ribs, scapula, or thoracic vertebrae. These may indicate that the disease has spread from the bone into the chest cavity. The breathing problems may be misdiagnosed as asthma, because the damage done to the lungs can cause the same types of changes to lung function testing as seen in asthma. Extension of the lesions into the chest may lead to the development of chylous pleural and pericardial effusions. 
Chyle is rich in protein and white blood cells that are important in fighting infection. The loss of chyle into the chest can have serious consequences, including infection, malnutrition, and respiratory distress and failure. These complications or their symptoms, such as difficulty breathing, chest pain, poor growth or weight loss, and infection, have sometimes been the first indications of the condition. Causes The specific cause of Gorham's disease remains unknown. Bone mass and strength are obtained and maintained through a process of bone destruction and replacement that occurs at the cellular level throughout a person's life. Cells called osteoclasts secrete enzymes that dissolve old bone, allowing another type of cells, called osteoblasts, to form new bone. Except in growing bone, the rate of breakdown equals the rate of building, thereby maintaining bone mass. In Gorham's disease, that process is disrupted. Gorham and Stout found that vascular anomalies always occupied space that normally would be filled with new bone and speculated that the presence of angiomatosis might lead to chemical changes in the bone. Gorham and others speculated that such a change in bone chemistry might cause an imbalance in the rates of osteoclast and osteoblast activity, such that more bone is dissolved than is replaced. Beginning in the 1990s, elevated levels of a protein called interleukin-6 (IL-6) were reported in people with the disease, leading some to suggest that increased levels of IL-6 and vascular endothelial growth factor (VEGF) may contribute to the chemical changes Gorham and others believed were the cause of this type of osteolysis. In 1999, Möller and colleagues concluded, "The Gorham-Stout syndrome may be, essentially, a monocentric bone disease with a focally increased bone resorption due to an increased number of paracrine – or autocrine – stimulated hyperactive osteoclasts. The resorbed bone is replaced by a markedly vascularized fibrous tissue.
The apparent contradiction concerning the presence or absence or the number of osteoclasts may be explained by the different phases of the syndrome." They further stated that their histopathological study provided good evidence that the osteolytic changes seen in Gorham's disease are the result of hyperactive osteoclastic bone resorption. However, others have concluded that lymphangiomatosis and Gorham's disease should be considered a spectrum of disease rather than separate diseases. While a consensus exists that Gorham's disease is caused by deranged osteoclastic activity, no conclusive evidence has been found as to what causes this behavior to begin. Diagnosis In 1983, Heffez and colleagues published a case report in which they suggested eight criteria for a definitive diagnosis of Gorham's disease: (1) positive biopsy with the presence of angiomatous tissue; (2) absence of cellular atypia; (3) minimal or no osteoblastic response or dystrophic calcifications; (4) evidence of local progressive osseous resorption; (5) nonexpansile, nonulcerative lesions; (6) no involvement of viscera; (7) an osteolytic radiographic pattern; and (8) negative hereditary, metabolic, neoplastic, immunologic, or infectious etiology. In the early stages of the disease, X-rays reveal changes resembling patchy osteoporosis. As the disease progresses, bone deformity occurs with further loss of bone mass, and in the tubular bones (the long bones of the arms and legs), a concentric shrinkage is often seen, which has been described as having a "sucked candy" appearance. Once the cortex (the outer shell) of the bone has been disrupted, vascular channels may invade adjacent soft tissues and joints. Eventually, complete or near-complete resorption of the bone occurs and may extend to adjacent bones, though spontaneous arrest of bone loss has been reported on occasion.
Throughout this process, as the bone is destroyed, it is replaced by angiomatous and/or fibrous tissue. Often, Gorham's disease is not recognized until a fracture occurs, with subsequent improper bone healing. The diagnosis essentially is one of exclusion and must be based on combined clinical, radiological, and histopathological findings. X-rays, CT scans, MRIs, ultrasounds, and nuclear medicine (bone scans) are all important tools in the diagnostic workup and surgical planning, but none alone can produce a definitive diagnosis. Surgical biopsy with histological identification of the vascular or lymphatic proliferation within a generous section of the affected bone is an essential component of the diagnostic process. Recognition of the disease requires a high index of suspicion and an extensive workup. Because of its serious morbidity, Gorham's disease must always be considered in the differential diagnosis of osteolytic lesions. Management Treatment of Gorham's disease is for the most part palliative and limited to symptom management. Sometimes, the bone destruction spontaneously ceases and no treatment is required, but when the disease is progressive, aggressive intervention may be necessary. Duffy and colleagues reported that around 17% of people with Gorham's disease in the ribs, shoulder, or upper spine experience extension of the disease into the chest, leading to chylothorax with its serious consequences, and that the mortality rate in this group can reach as high as 64% without surgical intervention. A search of the medical literature reveals multiple case reports of cardiothoracic (heart and lung) and skeletal interventions with varying rates of success. To date, no known interventions are consistently effective for Gorham's disease, and all reported interventions are considered experimental treatments, though many are routine for other conditions. Some people may require a combination of these approaches.
Unfortunately, some people will not respond to any intervention. Epidemiology Gorham's disease is extremely rare and may occur at any age, though it is most often recognized in children and young adults. It strikes males and females of all races and exhibits no inheritance pattern. The medical literature contains case reports from every continent. Because it is so rare, and commonly misdiagnosed, exactly how many people are affected by this disease is not known. The literature frequently cites that fewer than 200 cases have been reported, though a consensus is building that many more cases occur around the world than have been reported. History The first known report of the condition came in 1838 in an article titled "A Boneless Arm" in what was then The Boston Medical and Surgical Journal (now The New England Journal of Medicine). It is a brief report chronicling the case of a Mr. Brown, who had, in 1819 at age 18 years, broken his right upper arm in an accident. The person had two subsequent accidents, which fractured the arm twice more "before the curative process had been completed." At the time of the report in 1838, the person was reported as having remarkable use of the arm, in spite of the humerus bone having apparently disappeared – X-rays did not yet exist. Thirty-four years later, a follow-up report was published in the same journal, following Mr. Brown's death from pneumonia at the age of 70 years. The person had requested that the arm "be dissected and preserved for the benefit of medical science", and this report contains a detailed pathological description of the arm and shoulder. Abnormalities of the remaining bones of the arm and shoulder are noted, and the authors report that the arteries, veins, and nerves appeared normal. No mention was made of lymphatic vessels.
Though several reports of similar cases were published in the interim, more than 80 years passed before another significant report of the condition appeared in the medical literature. Both born in 1885, Lemuel Whittington Gorham, MD, and Arthur Purdy Stout, MD, had long, distinguished careers in medicine and shared a lifelong interest in pathology. Dr. Gorham practiced and taught medicine and oncology, and from the mid-1950s through the early 1960s conducted and reported the classical clinicopathological investigations of pulmonary embolism. During this time, he also authored several case series on osteolysis of bone. Dr. Stout began his career as a surgeon and became a pioneer in tumor pathology, publishing Human Cancer in 1932. This work became the model for the Atlas of Tumor Pathology project, which Stout oversaw as chairman of the National Research Council in the 1950s. In his later years, Dr. Stout embarked on a systematic study of soft tissue tumors in children and was among the first to link cigarette smoking to lung cancer. In 1954, Gorham and three others published a two-case series, with a brief review of 16 similar cases from the medical literature, that advanced the hypothesis that angiomatosis was responsible for this unusual form of massive osteolysis. That same year, Gorham and Stout presented to the American Association of Physicians their paper (in abstract form), "Massive Osteolysis (Acute Spontaneous Absorption of Bone, Phantom Bone, Disappearing Bone): Its Relation to Hemangiomatosis". The paper was published in its entirety in October 1955 in The Journal of Bone and Joint Surgery, concluding that: There now exists the basis for a new syndrome which is supported by a remarkable similarity of clinical and [x-ray] findings in twenty-four cases, and by an equally convincing similarity of the histological picture in eight of these, which we have personally studied.
However it is accomplished, the progressive osteolysis is always associated with an angiomatosis of blood and sometimes of lymphatic vessels, which seemingly are responsible for it. The most typical presentation is that of osteolysis of a single bone or of the bones connected by a shared joint, such as the shoulder. Although the disease can attack any bone, the shoulder is one of the most commonly involved areas, along with the skull and pelvic girdle. Spontaneous fractures are common and may be the first sign of the disease. A hallmark of the disease is the lack of bone healing following fracture. References Further reading Dellinger MT, Garg N, Olsen BR. Viewpoints on vessels and vanishing bones in Gorham-Stout disease. Bone. 2014;63C:47–52. Lala S, Mulliken JB, Alomari AI, Fishman SJ, Kozakewich HP, Chaudry G. Gorham-Stout disease and generalized lymphatic anomaly – clinical, radiologic, and histologic differentiation. Skeletal Radiol. 2013 Jul;42(7):917–24. doi: 10.1007/s00256-012-1565-4. Trenor C, Chaudry G. Complex lymphatic anomalies. Semin Pediatr Surg. 2014 Aug;23(4):186–90. doi: 10.1053/j.sempedsurg.2014.07.006.
You are a conduit for medical education. Your task is to offer detailed explanations of medical terms, ensuring they are understandable and precise.
I'm seeking clarification on the medical term 'Spigelian hernia.' Could you explain it?
A Spigelian hernia is a type of ventral hernia in which tissue protrudes through a defect in the aponeurotic fascia at the junction of the linea semilunaris and the arcuate line, creating a bulge. It appears in the lower quadrant of the abdomen, between an area of dense fibrous tissue (the Spigelian aponeurosis) and the abdominal wall muscles. It is the protrusion of omentum, adipose tissue, or bowel through that weak space between the abdominal wall muscles that ultimately pushes the intestines or superficial fatty tissue through the defect. As a result, an organ or a loop of intestine moves into a weakened body space in which it is not supposed to be. It is at this separation in the aponeurosis, in the ventral abdominal region, that herniation most commonly occurs. Spigelian hernias are rare compared to other types of hernias because they do not develop under abdominal layers of fat but between fascial tissue that connects to muscle. The Spigelian hernia is generally smaller in diameter, typically measuring 1–2 cm, and the risk of tissue becoming strangulated is high. Signs and symptoms Individuals typically present with either intermittent pain, a lump, or a mass, all of which are classic signs of a bowel obstruction. The patient may have a protuberance when standing in an upright position, although, because of its anatomical region, the discomfort can sometimes be confused with peptic ulceration. The bulge may be painful when the patient strains but then goes away when the patient is lying down in a resting position. However, a number of patients present with no obvious symptoms, only vague tenderness along the area in which the Spigelian fascia is located. Diagnosis Ultrasound imaging or a CT scan will provide better imaging for the detection of a hernia than an X-ray. The ultrasound probe should be moved from lateral to medial; a hypoechoic mass should appear anteriorly and medially to the inferior epigastric artery during the Valsalva maneuver.
The diagnosis of a Spigelian hernia is traditionally difficult on history and physical examination alone. People who are good candidates for elective Spigelian hernia surgery will, after receiving an initial diagnostic consultation by a licensed medical professional, be advised to see a physician to schedule surgery. Treatment The Spigelian hernia can be repaired by either an open procedure or laparoscopic surgery. Because of the high risk of strangulation, surgery is generally indicated; the repair is straightforward, with only larger defects requiring a mesh prosthesis. Compared with the laparoscopic intraperitoneal onlay mesh approach, open repair is associated with significantly higher rates of complications and recurrence in the period following the operation. A Spigelian hernia becomes an immediate indication for surgery once the risk of incarceration is confirmed. Today, a Spigelian hernia can be repaired by robotic laparoscopy, and most patients can go home the same day. Mesh-free laparoscopic suture repair is a novel, uncomplicated approach to small Spigelian hernias that combines the benefits of laparoscopic localization, reduction, and closure without the morbidity and cost associated with foreign material. Eponym Adriaan van den Spiegel, born in Brussels, was an anatomist at the University of Padua during the 17th century. In 1619 he became a professor of surgery. Spiegel was the first to describe this rare hernia, in 1627. The hernia became acknowledged in 1645, twenty years after Spiegel's death. In 1764, almost a century later, the Flemish anatomist Josef Klinkosch was credited with recognizing and describing a hernia located in the Spigelian fascia, and he coined the term Spigelian hernia.
Raveenthiran syndrome Raveenthiran described a new syndrome in which Spigelian hernia and cryptorchidism (undescended testis) occur together. Common complications of the cryptorchidism in this distinct syndrome are testicular torsion and an increased risk of testicular cancer. References External links Adriaan van den Spiegel at Who Named It?; Laparoscopic Repair of Spigelian Hernia (Medtube)
You function as a medical dictionary. Offer comprehensive and accurate descriptions of medical terms, ensuring clarity and depth in your explanations.
I've encountered the term 'Baboon syndrome' while reading about medical topics. What does it refer to exactly?
Symmetrical drug-related intertriginous and flexural exanthema (SDRIFE), more popularly known as baboon syndrome because of its resemblance to the distinctive red buttocks displayed by female baboons, is a systemic contact dermatitis characterized by well-demarcated patches of erythema distributed symmetrically on the buttocks. The cause of the syndrome may be drug-related, i.e. induced by systemic administration of hydroxyzine, penicillin, iodinated radiocontrast media, and other agents. Symptoms and signs The typical rash commonly appears on the buttocks, where it resembles the colour of a baboon's buttocks. Other areas, such as the upper inner thighs and armpits, may also be affected by the rash. The rashes are red and well-defined. The presentation is typically symmetrical and not associated with systemic symptoms. Treatment Treatment of symmetrical drug-related intertriginous and flexural exanthema involves identifying and stopping the causative agent. Topical steroids can help to reduce the redness. Epidemiology Baboon syndrome affects both sexes equally, and can occur at any age, but seems to be more common in childhood than in adulthood. See also Airbag dermatitis List of cutaneous conditions References
You are a medical educator. Your role is to provide detailed and clear explanations for medical terms, suitable for both professionals and laypersons.
I've come across the term 'Squamous cell skin cancer' in a medical context, but I'm not sure what it means. Can you clarify?
Squamous-cell skin cancer, also known as cutaneous squamous-cell carcinoma (cSCC), is one of the main types of skin cancer, along with basal-cell cancer and melanoma. It usually presents as a hard lump with a scaly top but can also form an ulcer. Onset is often over months. Squamous-cell skin cancer is more likely to spread to distant areas than basal-cell cancer. When confined to the outermost layer of the skin, a precancerous or in situ form of cSCC is known as Bowen's disease. The greatest risk factor is high total exposure to ultraviolet radiation from the sun. Other risks include prior scars, chronic wounds, actinic keratosis, lighter skin, Bowen's disease, arsenic exposure, radiation therapy, tobacco smoking, poor immune system function, previous basal-cell carcinoma, and HPV infection. Risk from UV radiation is related to total exposure, rather than early exposure. Tanning beds are becoming another common source of ultraviolet radiation. Risk is also elevated in certain genetic skin disorders, such as xeroderma pigmentosum and certain forms of epidermolysis bullosa. It begins from squamous cells found within the skin. Diagnosis is often based on skin examination and confirmed by tissue biopsy. New in vivo and in vitro studies have shown that upregulation of FGFR2, a member of the fibroblast growth factor receptor (FGFR) immunoglobulin family, has a critical role to play in the progression of cSCC cells. Mutation in the TPL2 gene causes overexpression of FGFR2, which activates the mTORC1 and AKT pathways in both primary and metastatic cSCC cell lines.
Only by using the FGFR pan-inhibitor AZD4547 could cell migration and cell proliferation in cSCC be attenuated. Additionally, research has identified common driver mutations in cSCC, with alterations in the NOTCH and p53 pathways occurring at high frequency and mutations in the Hippo and Ras/MAPK/PI3K pathways occurring at lower frequency. Decreasing exposure to ultraviolet radiation and the use of sunscreen appear to be effective methods of preventing squamous-cell skin cancer. Treatment is typically by surgical removal. This can be by simple excision if the cancer is small; otherwise, Mohs surgery is generally recommended. Other options may include cryotherapy (application of cold) and radiation therapy. In cases in which distant spread has occurred, chemotherapy or biologic therapy may be used. As of 2015, about 2.2 million people have cSCC at any given time. It makes up about 20% of all skin cancer cases. About 12% of males and 7% of females in the United States developed cSCC at some point in time. While prognosis is usually good, if distant spread occurs five-year survival is ~34%. In 2015 it resulted in about 51,900 deaths globally. The usual age at diagnosis is around 66. Following the successful treatment of one case of cSCC, people are at high risk of developing further cases. Signs and symptoms SCC of the skin begins as a small nodule; as it enlarges, the center becomes necrotic and sloughs, and the nodule turns into an ulcer. It generally develops from an actinic keratosis. Once keratinocytes begin to grow uncontrollably, they have the potential to become cancerous and produce squamous cell carcinoma.
The lesion caused by SCC is often asymptomatic. Typical features include: an ulcer or reddish skin plaque that is slow growing; intermittent bleeding from the tumor, especially on the lip; a highly variable clinical appearance, usually an ulcerated lesion with hard, raised edges; a tumor in the form of a hard plaque or a papule, often with an opalescent quality and tiny blood vessels; a tumor that can lie below the level of the surrounding skin and eventually ulcerates and invades the underlying tissue; a location commonly on sun-exposed areas (e.g. the back of the hand, scalp, lip, and superior surface of the pinna); on the lip, a small ulcer that fails to heal and bleeds intermittently; evidence of chronic skin photodamage, such as multiple actinic keratoses (solar keratoses); and relatively slow growth. Spread Unlike basal-cell carcinoma (BCC), squamous cell carcinoma (SCC) has a higher risk of metastasis. The risk of metastasis is higher clinically in SCC arising in scars, on the lower lips or mucosa, and occurring in immunosuppressed and solid organ transplant patients. The risk of metastasis is also higher in SCCs that are more than 2 cm in diameter, grow into the fat layer or along nerves, or are more than 6 mm thick. Causes Squamous cell carcinoma is the second-most common cancer of the skin (after basal-cell carcinoma but more common than melanoma). It usually occurs in areas exposed to the sun. Sunlight exposure and immunosuppression are risk factors for SCC of the skin, with chronic sun exposure being the strongest environmental risk factor. There is a risk of metastasis starting more than 10 years after the diagnosable appearance of squamous cell carcinoma, but the risk is low, though much higher than with basal-cell carcinoma. Squamous cell cancers of the lip and ears have high rates of local recurrence and distant metastasis.
In a recent study, it has also been shown that the deletion or severe down-regulation of a gene called Tpl2 (tumor progression locus 2) may be involved in the progression of normal keratinocytes into squamous cell carcinoma. SCCs represent about 20% of the non-melanoma skin cancers, but 80–90% of those with metastatic potential are located on the head and neck. Tobacco smoking also increases the risk for cutaneous squamous cell carcinoma. The vast majority of SCCs are those of the skin, and are often the result of ultraviolet exposure. SCCs usually occur on portions of the body commonly exposed to the sun: the face, ears, neck, hands, or arms. The main symptom is a growing bump that may have a rough, scaly surface and flat reddish patches. Unlike basal-cell carcinomas, SCCs carry a higher risk of metastasis and may spread to the regional lymph nodes. Erythroplasia of Queyrat (SCC in situ of the glans or prepuce in males, or of the vulva in females) may be induced by human papillomavirus. It is reported to occur at the corneoscleral limbus. Erythroplasia of Queyrat may also occur on the anal mucosa or the oral mucosa. Some sources state that this condition is synonymous with Bowen's disease; however, generally speaking, Bowen's disease refers to carcinoma in situ of any location on the skin, such as the lower leg. Genetically, SCC tumors harbor high frequencies of NOTCH and p53 mutations, as well as less frequent alterations in the histone acetyltransferase EP300, the SWI/SNF chromatin remodeling complex subunit PBRM1, the DNA-repair deubiquitinase USP28, and the NF-κB signaling regulator CHUK. Immunosuppression People who have received solid organ transplants are at a significantly increased risk of developing squamous cell carcinoma due to the use of chronic immunosuppressive medication.
While the risk of developing all skin cancers increases with these medications, this effect is particularly severe for SCC, with hazard ratios as high as 250 being reported, versus 40 for basal cell carcinoma. The incidence of SCC development increases with time post-transplant. Heart and lung transplant recipients are at the highest risk of developing SCC, due to the more intensive immunosuppressive medications used. Squamous cell cancers of the skin in individuals on immunosuppressive therapy or who have lymphoproliferative disorders (e.g. leukemia) tend to be much more aggressive, regardless of their location. The risk of SCC, and of non-melanoma skin cancers generally, varies with the immunosuppressive drug regimen chosen. The risk is greatest with calcineurin inhibitors, like cyclosporine and tacrolimus, and least with mTOR inhibitors, such as sirolimus and everolimus. The antimetabolites azathioprine and mycophenolic acid have an intermediate risk profile. Diagnosis Diagnosis is confirmed via biopsy of the tissue or tissues suspected to be affected by SCC. For the skin, see skin biopsy. The pathological appearance of a squamous cell cancer varies with the depth of the biopsy. For that reason, a biopsy including the subcutaneous tissue and the basilar epithelium up to the surface is necessary for correct diagnosis. The performance of a shave biopsy (see skin biopsy) might not acquire enough information for a diagnosis. An inadequate biopsy might be read as actinic keratosis with follicular involvement. A deeper biopsy down to the dermis or subcutaneous tissue might reveal the true cancer. An excisional biopsy is ideal, but not practical in most cases; an incisional or punch biopsy is preferred. A shave biopsy is least ideal, especially if only the superficial portion is acquired. Characteristics Histopathologically, the epidermis in SCC in situ (Bowen's disease) will show hyperkeratosis and parakeratosis.
There will also be marked acanthosis with elongation and thickening of the rete ridges. These changes will overlie keratinocytic cells, which are often highly atypical and may in fact have a more unusual appearance than those of invasive SCC. The atypia spans the full thickness of the epidermis, with the keratinocytes demonstrating intense mitotic activity, pleomorphism, and greatly enlarged nuclei. They will also show a loss of maturity and polarity, giving the epidermis a disordered or "windblown" appearance. Two types of multinucleated cells may be seen: the first presents as a multinucleated giant cell, and the second appears as a dyskeratotic cell engulfed in the cytoplasm of a keratinocyte. Occasionally, cells of the upper epidermis will undergo vacuolization, demonstrating an abundant and strongly eosinophilic cytoplasm. There may be a mild to moderate lymphohistiocytic infiltrate in the upper dermis. In situ disease Bowen's disease is essentially equivalent to and used interchangeably with SCC in situ, when the tumor has not invaded through the basement membrane. Depending on the source, it is classified as precancerous or as SCC in situ (technically cancerous but non-invasive). In SCC in situ (Bowen's disease), atypical squamous cells proliferate through the whole thickness of the epidermis. The entire tumor is confined to the epidermis and does not invade into the dermis. The cells are often highly atypical under the microscope, and may in fact look more unusual than the cells of some invasive squamous cell carcinomas. Erythroplasia of Queyrat is a particular type of Bowen's disease that can arise on the glans or prepuce in males and on the vulva in females. It mainly occurs in uncircumcised males over the age of 40. It is named for the French dermatologist Louis Queyrat (1856–1933), who was head of the dermatology service of l'Hôpital Ricord, a venereal hospital in Paris, now Hôpital Cochin.
Invasive disease In invasive SCC, tumor cells infiltrate through the basement membrane. The infiltrate can be somewhat difficult to detect in the early stages of invasion; however, additional indicators, such as full-thickness epidermal atypia and the involvement of hair follicles, can be used to facilitate the diagnosis. Later stages of invasion are characterized by the formation of nests of atypical tumor cells in the dermis, often with a corresponding inflammatory infiltrate. Prevention Appropriate sun-protective clothing, use of broad-spectrum (UVA/UVB) sunscreen with at least SPF 50, and avoidance of intense sun exposure may prevent skin cancer. A 2016 review of sunscreen for preventing squamous cell skin cancer found insufficient evidence to demonstrate whether it was effective. Management Most squamous cell carcinomas are removed with surgery. A few selected cases are treated with topical medication. Surgical excision with a free margin of healthy tissue is a frequent treatment modality. Radiotherapy, given as external beam radiotherapy or as brachytherapy (internal radiotherapy), can also be used to treat squamous cell carcinomas. There is little evidence comparing the effectiveness of different treatments for non-metastatic SCC of the skin. Mohs surgery is frequently utilized; it is considered the treatment of choice for squamous cell carcinoma of the skin, and physicians have also utilized the method for the treatment of squamous cell carcinoma of the mouth, throat, and neck. An equivalent method meeting the CCPDMA standards can be utilized by a pathologist in the absence of a Mohs-trained physician. Radiation therapy is often used afterward in high-risk cancers or patient types. Radiotherapy can also be a standalone option in treating SCCs. As a non-invasive option, brachytherapy offers a painless possibility of treating areas that are difficult to operate on, such as the earlobes or genitals, among other locations.
An example of this kind of therapy is high-dose brachytherapy with the rhenium-SCT, which makes use of the beta-emitting properties of rhenium-188. The radiation source is enclosed in a compound that is applied to a thin protective foil placed directly over the lesion. This way, the radiation source can be applied to complex locations while minimizing radiation to healthy tissue. After removal of the cancer, closure of the skin for patients with a decreased amount of skin laxity involves a split-thickness skin graft. A donor site is chosen and enough skin is removed so that the donor site can heal on its own. Only the epidermis and a partial amount of dermis are taken from the donor site, which allows the donor site to heal. Skin can be harvested using either a mechanical dermatome or a Humby knife. Electrodesiccation and curettage (EDC) can be performed on selected squamous cell carcinomas of the skin. In areas where SCCs are known to be non-aggressive, and where the patient is not immunosuppressed, EDC can be performed with good to adequate cure rates. Treatment options for SCC in situ (Bowen's disease) include photodynamic therapy with 5-aminolevulinic acid, cryotherapy, topical 5-fluorouracil or imiquimod, and excision. A meta-analysis showed evidence that PDT is more effective than cryotherapy and has better cosmetic outcomes. There is generally a lack of evidence comparing the effectiveness of all treatment options. High-risk squamous cell carcinoma, defined by occurrence around the eye, ear, or nose, large size, poor differentiation, and rapid growth, requires more aggressive, multidisciplinary management. Nodal spread: surgical block dissection if there are palpable nodes or in cases of Marjolin's ulcers, though the benefit of prophylactic block lymph node dissection with Marjolin's ulcers is not proven. Radiotherapy: adjuvant therapy may be considered in those with high-risk SCC, even in the absence of evidence of local metastasis.
Imiquimod (Aldara) has been used with success for squamous cell carcinoma in situ of the skin and the penis, but the morbidity and discomfort of the treatment are severe. An advantage is the cosmetic result: after treatment, the skin resembles normal skin, without the usual scarring and morbidity associated with standard excision. Imiquimod is not FDA-approved for any squamous cell carcinoma. In general, squamous cell carcinomas have a high risk of local recurrence, and up to 50% do recur. Frequent skin exams with a dermatologist are recommended after treatment. Prognosis The long-term outcome of squamous cell carcinomas is dependent upon several factors: the sub-type of the carcinoma, available treatments, location and severity, and various patient health-related variables (accompanying diseases, age, etc.). Generally, the long-term outcome is positive, as less than 4% of squamous cell carcinoma cases are at risk of metastasis. When it does metastasize, the most commonly involved organs are the lungs, brain, bone, and other skin locations. One study found that squamous cell carcinoma of the penis had a much greater rate of mortality than some other forms of squamous cell carcinoma, about 23%, although this relatively high mortality rate may be associated with possibly latent diagnosis of the disease, due to patients avoiding genital exams until the symptoms are debilitating, or refusing to submit to a possibly scarring operation upon the genitalia. Squamous cell carcinoma occurring in the organ transplant population is also associated with a higher risk of mortality. Epidemiology The incidence of squamous cell carcinoma continues to rise around the world. One study estimated that there were between 180,000 and 400,000 cases of SCC in the United States in 2013. Risk factors for squamous cell carcinoma vary with age, gender, race, geography, and genetics. The incidence of SCC increases with age, and the peak incidence is usually around 60 years old.
Males are affected by SCC at a ratio of 2:1 in comparison to females. Caucasians are more likely to be affected, especially those with fair skin or those chronically exposed to UV radiation. Squamous cell carcinoma of the skin can be found on all areas of the body but is most common on frequently sun-exposed areas, such as the face, legs, and arms. Solid organ transplant recipients (heart, lung, liver, pancreas, among others) are also at a heightened risk of developing aggressive, high-risk SCC. There are also a few rare congenital diseases that predispose to cutaneous malignancy. In certain geographic locations, exposure to arsenic in well water or from industrial sources may significantly increase the risk of SCC. Additional images See also List of cutaneous conditions associated with increased risk of nonmelanoma skin cancer References External links DermNet NZ: Squamous cell carcinoma
You are a facilitator of medical knowledge. Provide thorough and accessible explanations of medical terms, catering to both specialists and non-specialists.
Can you demystify the medical term 'Salmonellosis' for me?
Salmonellosis is a symptomatic infection caused by bacteria of the Salmonella type. It is also a food-borne disease; food-borne diseases are defined as diseases, usually either infectious or toxic in nature, caused by agents that enter the body through the ingestion of food. In humans, the most common symptoms are diarrhea, fever, abdominal cramps, and vomiting. Symptoms typically occur between 12 and 36 hours after exposure and last from two to seven days. Occasionally, more significant disease can result in dehydration. The old, the young, and others with a weakened immune system are more likely to develop severe disease. Specific types of Salmonella can result in typhoid fever or paratyphoid fever. There are two species of Salmonella: Salmonella bongori and Salmonella enterica, the latter with many subspecies. However, subgroups and serovars within a species may differ substantially in their ability to cause disease. This suggests that epidemiologic classification of organisms at the subspecies level may improve management of Salmonella and similar pathogens. Both vegetarian and non-vegetarian populations are susceptible to Salmonella infections through the consumption of contaminated foods such as milk and meat. Infection is usually spread by eating contaminated meat, eggs, water, or milk. Other foods may spread the disease if they have come into contact with manure. A number of pets, including cats, dogs, and reptiles, can also carry and spread the infection. Diagnosis is by a stool test or blood tests. Efforts to prevent the disease include the proper washing, preparation, and cooking of food to an appropriate temperature. Mild disease typically does not require specific treatment. More significant cases may require treatment of electrolyte problems and intravenous fluid replacement. In those at high risk, or in whom the disease has spread outside the intestines, antibiotics are recommended. Salmonellosis is one of the most common causes of diarrhea globally. 
In 2015, 90,300 deaths occurred from nontyphoidal salmonellosis, and 178,000 deaths from typhoidal salmonellosis. In the United States, about 1.35 million cases and 450 deaths occur from non-typhoidal salmonellosis a year. In Europe, it is the second most common foodborne disease after campylobacteriosis. Signs and symptoms Enteritis After a short incubation period of a few hours to one day, the bacteria multiply in the small intestine, causing an intestinal inflammation (enteritis). Most people with salmonellosis develop diarrhea, fever, vomiting, and abdominal cramps 12 to 72 hours after infection. Diarrhea is often watery and non-bloody, but may be mucoid and bloody. In most cases, the illness lasts four to seven days and does not require treatment. In some cases, though, the diarrhea may be so severe that the patient becomes dangerously dehydrated and must be hospitalized. At the hospital, the patient may receive fluids intravenously to treat the dehydration and may be given medications to provide symptomatic relief, such as fever reduction. In severe cases, the Salmonella infection may spread from the intestines to the bloodstream, and then to other body sites, and can cause death unless the person is treated promptly with antibiotics. In otherwise healthy adults, the symptoms can be mild. Normally, no sepsis occurs, but it can occur exceptionally as a complication in the immunocompromised. However, in people at risk, such as infants, small children, and the elderly, Salmonella infections can become very serious, leading to complications. In infants, dehydration can cause a state of severe toxicity. Extraintestinal localizations are possible, especially Salmonella meningitis in children, as well as osteitis. Children with sickle-cell anemia who are infected with Salmonella may develop osteomyelitis. 
Treatment of osteomyelitis in this case uses fluoroquinolones (e.g., ciprofloxacin or levofloxacin) or nalidixic acid. Those whose only symptom is diarrhea usually recover completely, but their bowel habits may not return to normal for several months. Typhoid fever Typhoid fever occurs when Salmonella bacteria enter the lymphatic system and cause a systemic form of salmonellosis. Endotoxins first act on the vascular and nervous apparatus, resulting in increased permeability and decreased tone of the vessels, upset thermal regulation, vomiting, and diarrhea. In severe forms of the disease, enough liquid and electrolytes are lost to upset the fluid balance, cause an electrolyte imbalance, decrease the circulating blood volume and arterial pressure, and cause hypovolemic shock. Septic shock may also develop. Shock of mixed character (with signs of both hypovolemic and septic shock) is more common in severe salmonellosis. Oliguria and azotemia develop in severe cases as a result of renal involvement due to hypoxia and toxemia. Long-term Salmonellosis is associated with later irritable bowel syndrome and inflammatory bowel disease; the evidence, however, does not support it being a direct cause of the latter. A small number of people afflicted with salmonellosis experience reactive arthritis, which can last months or years and can lead to chronic arthritis. In sickle-cell anemia, osteomyelitis due to Salmonella infection is much more common than in the general population. Though Salmonella infection is frequently the cause of osteomyelitis in people with sickle-cell anemia, it is not the most common cause; Staphylococcus infection is. Those infected may become asymptomatic carriers, but this is relatively uncommon, with shedding observed in only 0.2 to 0.6% of cases after a year. 
Causes Contaminated food, often having no unusual look or smell; poor kitchen hygiene, especially problematic in institutional kitchens and restaurants, where it can lead to a significant outbreak; excretions from either sick or infected but apparently clinically healthy people and animals (caregivers and animals are especially dangerous); polluted surface water and standing water (such as in shower hoses or unused water dispensers); unhygienically thawed poultry (the meltwater contains many bacteria); an association with reptiles (pet tortoises, snakes, iguanas, and aquatic turtles), which is well described; and amphibians such as frogs. Salmonella bacteria can survive for some time without a host; they are frequently found in polluted water, with contamination from the excrement of carrier animals being particularly important. The European Food Safety Authority highly recommends that, when handling raw turkey meat, consumers and people involved in the food supply chain pay attention to personal and food hygiene. An estimated 142,000 Americans are infected each year with Salmonella Enteritidis from chicken eggs, and about 30 die. The shell of the egg may be contaminated with Salmonella by feces or the environment, or its interior (yolk) may be contaminated by penetration of the bacteria through the porous shell or from a hen whose infected ovaries contaminate the egg during egg formation. Nevertheless, such interior egg yolk contamination is thought to be unlikely. Even under natural conditions, the rate of infection was very small (0.6% in a study of naturally contaminated eggs and 3.0% among artificially and heavily infected hens). Prevention The US Food and Drug Administration (FDA) has published guidelines to help reduce the chance of food-borne salmonellosis. Food must be cooked to 145–165 °F (63–74 °C), and liquids such as soups or gravies should be boiled when reheating. 
Freezing kills some Salmonella, but it is not sufficient to reliably reduce them below infectious levels. While Salmonella is usually heat-sensitive, it acquires heat resistance in high-fat environments such as peanut butter. Vaccine Antibodies against nontyphoidal Salmonella were first found in Malawian children in research published in 2008. The Malawian researchers identified an antibody that protects children against bacterial infections of the blood caused by nontyphoidal Salmonella. A study at Queen Elizabeth Hospital in Blantyre found that children up to two years old develop antibodies that aid in killing the bacteria. This could lead to a possible Salmonella vaccine for humans. A 2014 study tested a vaccine on chickens which offered efficient protection against salmonellosis. Vaccination of chickens against Salmonella essentially wiped out the disease in the United Kingdom. A similar approach was considered in the United States, but the Food and Drug Administration decided not to mandate vaccination of hens. Industrial hygiene Since 2011, Denmark has had three cases of human salmonella poisoning. The country eradicated salmonella without vaccines or antibiotics by focusing on eliminating the infection from "breeder stocks", implementing various measures to prevent infection, and taking a zero-tolerance policy towards salmonella in chickens. Treatment Electrolytes may be replenished with oral rehydration supplements (typically containing the salts sodium chloride and potassium chloride). Appropriate antibiotics, such as ceftriaxone, may be given to kill the bacteria, but are not necessary in most cases. Azithromycin has been suggested to be better at treating typhoid in resistant populations than both fluoroquinolone drugs and ceftriaxone. There are recommendations on the choice of antibiotic to avoid promoting antibiotic resistance. There is no evidence of benefit in treating healthy people with diarrhea due to non-typhoidal salmonellosis. 
However, the evidence for the very young, the very old, or people with severe disease is uncertain. Epidemiology United States Per CDC estimates, salmonellosis causes about 1.2 million illnesses, 23,000 hospitalizations, and 450 deaths in the United States every year. About 142,000 people in the United States are infected each year with Salmonella Enteritidis specifically from chicken eggs, and about 30 die. In 2010, an analysis of death certificates in the United States identified a total of 1,316 Salmonella-related deaths from 1990 to 2006. These were predominantly among older adults and those who were immunocompromised. The U.S. government reported that as many as 20% of all chickens were contaminated with Salmonella in the late 1990s, and 16.3% were contaminated in 2005. The United States has struggled to control salmonella infections, with the rate of infection rising from 2001 to 2011. In 1998, the USDA moved to close plants if salmonella was found in excess of 20 percent, which was the industry's average at the time, for three consecutive tests. Texas-based Supreme Beef Processors, Inc. sued on the argument that Salmonella is naturally occurring, and ultimately prevailed when a federal appeals court affirmed a lower court's ruling. These issues were highlighted in the proposed Kevin's Law (formally, the Meat and Poultry Pathogen Reduction and Enforcement Act of 2003), components of which were included in the Food Safety Modernization Act passed in 2011, but that law applies only to the FDA and not the USDA. The USDA proposed a regulatory initiative in 2011 to the Office of Management and Budget. Salmonella is found in 8% of the chicken parts tested by the USDA and 25% of ground chicken. Europe An outbreak of salmonellosis started in Northern Europe in July 2012, caused by Salmonella Thompson. The infections were linked to smoked salmon from the manufacturer Foppen, where the contamination had occurred. 
Most infections were reported in the Netherlands; over 1,060 infections with this subspecies and four fatalities were confirmed. A case of widespread infection was detected in mid-2012 in seven EU countries. Over 400 people had been infected with Salmonella enterica serovar Stanley (S. Stanley), which usually appears in the regions of Southeast Asia. After several DNA analyses seemed to point to a specific Belgian strain, the "Joint ECDC/EFSA Rapid Risk Assessment" report identified turkey production as the source of infection. In Germany, food poisoning infections must be reported. Between 1990 and 2005, the number of officially recorded cases decreased from about 200,000 to about 50,000. Elsewhere In March 2007, around 150 people were diagnosed with salmonellosis after eating tainted food at a governor's reception in Krasnoyarsk, Russia. Over 1,500 people attended the ball on March 1; those who fell ill had ingested Salmonella-tainted sandwiches. About 150 people were sickened by Salmonella-tainted chocolate cake produced by a major bakery chain in Singapore in December 2007. South Africa reported contamination of its poultry carcasses by Salmonella. Egypt showed that Salmonella was predominant in poultry, along with other non-typhoid strains. In Indonesia, the isolation of Salmonella Typhi was the main focus, while other serovars were also included from poultry. In India, Salmonella was predominant in poultry. Romania reported Salmonella serovars in poultry that affect humans. History Both salmonellosis and the microorganism genus Salmonella derive their names from a modern Latin coining after Daniel E. Salmon (1850–1914), an American veterinary surgeon. He had help from Theobald Smith, and together they found the bacterium in pigs. Salmonella enterica was possibly the cause of the 1576 cocoliztli epidemic in New Spain. Four-inch regulation The "Four-inch regulation" or "Four-inch law" is a colloquial name for a regulation issued by the U.S. 
FDA in 1975, restricting the sale of turtles with a carapace length of less than four inches (10 cm). The regulation was introduced, according to the FDA, "because of the public health impact of turtle-associated salmonellosis". Cases had been reported of young children placing small turtles in their mouths, which led to the size-based restriction. Regulation elsewhere FSSAI regulation The FSSAI was established under the Food Safety and Standards Act, 2006, a consolidating statute related to food safety and regulation in India. The FSSAI is responsible for protecting and promoting public health through the regulation and supervision of food safety. The major importance of the FSSAI license is that it ensures that food has been chemically verified and is therefore safe to consume; because food safety bears directly on public health, such verification is treated as a matter of great sensitivity. See also 1984 Rajneeshee bioterror attack 2012 salmonella outbreak 2018 outbreak of Salmonella List of foodborne illness outbreaks References External links CDC website, Division of Bacterial and Mycotic Diseases, Disease Listing: Salmonellosis
You are a medical interpreter. Your duty is to translate medical terms into easily digestible information, maintaining accuracy and detail.
I need a basic explanation for the medical term 'Hypokinesia.'
Hypokinesia is one of the classifications of movement disorders, and refers to decreased bodily movement. It is characterized by a partial or complete loss of muscle movement due to a disruption in the basal ganglia. Hypokinesia is a symptom of Parkinson's disease, shown as muscle rigidity and an inability to produce movement. It is also associated with mental health disorders and prolonged inactivity due to illness, amongst other conditions. The other category of movement disorder is hyperkinesia, which features an exaggeration of unwanted movement, such as twitching or writhing in Huntington's disease or Tourette syndrome. Spectrum of disorders Hypokinesia describes a variety of more specific disorders: Causes The most common cause of hypokinesia is Parkinson's disease and conditions related to Parkinson's disease. Other conditions, including hypothyroidism and severe depression, may also cause slowness of movement. These conditions need to be carefully ruled out before a diagnosis of parkinsonism is made. The remainder of this article describes hypokinesia associated with Parkinson's disease and related conditions. Pathophysiology Associated neurotransmitters Dopamine The main neurotransmitter thought to be involved in hypokinesia is dopamine. Dopamine is essential to the basal ganglionic-thalamocortical loop, which processes motor function, and depletion of dopamine in these areas is common in hypokinesic patients. Bradykinesia is correlated with lateralized dopaminergic depletion in the substantia nigra. The dopamine pathway in the substantia nigra is essential to motor function, and a lesion in this area commonly correlates with displayed hypokinesia. Tremor and rigidity, however, seem to be only partially due to dopamine deficits in the substantia nigra, suggesting other processes are involved in motor control. 
Treatments for hypokinesia often either attempt to prevent dopamine degradation by MAO-B or increase the amount of neurotransmitter present in the system. GABA and glutamate The inhibitory neurotransmitter GABA and the excitatory glutamate are found in many parts of the central nervous system, including in the motor pathways involved in hypokinesia. In one pathway, glutamate in the substantia nigra excites the release of GABA into the thalamus, which then inhibits the release of glutamate in the cortex and thereby reduces motor activity. If too much glutamate is initially present in the substantia nigra, then through interaction with GABA in the thalamus and glutamate in the cortex, movements will be reduced or will not occur at all. Another direct pathway from the basal ganglia sends GABA inhibitory messages to the globus pallidus and substantia nigra, which then send GABA to the thalamus. In the indirect pathway, the basal ganglia send GABA to the globus pallidus, which then sends it to the subthalamic nucleus; the subthalamic nucleus, thus disinhibited, sends glutamate to the output structures of the basal ganglia. Inhibition of GABA release could disrupt the feedback loop to the basal ganglia and produce hypokinesic movements. GABA and glutamate often interact with each other and with dopamine directly. In the basal ganglia, the nigrostriatal pathway is where GABA and dopamine are housed in the same neurons and released together. Neurobiology Hypokinetic symptoms arise from damage to the basal ganglia, which play a role in producing force and computing the effort necessary to make a movement. Two possible neural pathways enable the basal ganglia to produce movement. When activated, the direct pathway sends sensory and motor information from the cerebral cortex to the first structure of the basal ganglia, the putamen. That information directly inhibits the internal globus pallidus and allows free movement. 
The indirect pathway, traveling through the putamen, external globus pallidus, and subthalamic nucleus, activates the internal globus pallidus, which inhibits the thalamus from communicating with the motor cortex, producing hypokinetic symptoms. When levels of dopamine decrease, the normal wave-firing pattern of basal ganglia neural oscillations changes and the tendency for oscillations increases, particularly in the beta band of the basal ganglia. Recent research indicates that when oscillations fire simultaneously, processing is disrupted at the thalamus and cortex, affecting activities such as motor planning and sequence learning, as well as causing hypokinetic tremors. Treatments Dopaminergic drugs Dopaminergic drugs are commonly used in the early stages of hypokinesia to treat patients. With increased intake, though, they can become ineffective because of the development of noradrenergic lesions. While the dopaminergic drugs may initially be effective, these noradrenergic lesions are associated with the later development of hypokinesic gait disorder. Some Parkinson's patients are unable to move during sleep, prompting the diagnosis of "nocturnal hypokinesia". Physicians have had success treating this sleep disorder with slow-release or night-time dopaminergic drugs, and in some cases, continuous stimulation by the dopamine agonist rotigotine. Despite improved mobility during sleep, many Parkinson's patients report an extremely uncomfortable sleeping experience even after dopaminergic treatments. Deep brain stimulation Once the response to dopaminergic drugs begins to fluctuate in Parkinson's patients, deep brain stimulation (DBS) of the subthalamic nucleus and internal globus pallidus is often used to treat hypokinesia. DBS, like dopaminergic drugs, initially provides relief, but chronic use causes worse hypokinesia and freezing of gait. Lower-frequency DBS in irregular patterns has been shown to be more effective and less detrimental in treatment. 
Posteroventral pallidotomy (PVP) is a related surgical procedure that destroys a small part of the globus pallidus by scarring the neural tissue, reducing brain activity and therefore tremors and rigidity. PVP is suspected to recalibrate basal ganglia activity in the thalamocortical pathway. PVP in the dominant hemisphere has been reported to disrupt executive function and verbal processing abilities, and bilateral PVP may disturb processes of focused attention. Many akinesia patients also develop a linguistic akinesia in which their ability to produce verbal movements mirrors their physical akinesia symptoms, especially after unsuccessful PVP. Patients are usually able to maintain normal levels of fluency, but often stop mid-sentence, unable to remember or produce a desired word. According to a study of Parkinson's patients with articulatory hypokinesia, subjects with faster rates of speech experienced more problems trying to produce conversational language than those who normally spoke at slower rates. Methylphenidate Methylphenidate, commonly used to treat ADHD, has been used in conjunction with levodopa to treat hypokinesia in the short term. The two work together to increase dopamine levels in the striatum and prefrontal cortex. Methylphenidate mainly inhibits dopamine and noradrenaline reuptake by blocking presynaptic transporters, and levodopa increases the amount of dopamine, generally improving hypokinesic gait. Some patients, however, have adverse reactions of nausea and headache to the treatment, and the long-term effects of the drug treatment still need to be assessed. Stem cells Newer treatments include increasing the number of dopamine cells by transplanting stem cells into the basal ganglia or stimulating endogenous stem cell production and migration to the basal ganglia. The successful integration of stem cells can relieve hypokinetic symptoms and decrease the necessary dose of dopaminergic drugs. 
However, a variety of complications are possible, including tumor formation, inappropriate cell migration, rejection of cells by the immune system, and cerebral hemorrhage, causing many physicians to believe the risks outweigh the possible benefits. NOP receptor antagonists Another treatment, still in an experimental stage, is the administration of nociceptin/orphanin FQ peptide (NOP) receptor antagonists. This treatment has been shown to reduce hypokinesia in animal studies, in which nociceptin/orphanin FQ is increased in the substantia nigra and subthalamic nucleus. Low doses can be taken with dopaminergic treatment to decrease the amount of L-dopa needed, which can reduce its long-term side effects and improve motor performance. Dance therapy Dance therapy has also been shown to reduce hypokinesic movements and rigidity, though it is targeted more at the muscular aspects of the disorder than at the neural activity. Associations Cognitive impairment Bradykinesia has been shown to precede impairment of executive functions, working memory, and attention. These cognitive deficiencies can be tied to dysfunction of the basal ganglia and prefrontal cortex, which is also linked to the motor dysfunction of hypokinesia. Tremor and rigidity have not shown observable connections to cognitive impairments, supporting the idea that they are not as involved in the dopamine pathway in the basal ganglionic-thalamocortical loop. Dopaminergic treatments have shown improvement in cognitive functions associated with hypokinesia, suggesting these functions are also dependent on dopamine levels in the system. Motor motivation Often debated is whether the efficiency, vigor, and speed of movements in patients with hypokinesia are tied to motivation toward rewarding and away from punishing stimuli. The basal ganglia have been tied to the incentives behind movement, suggesting that the cost/benefit analysis of planned movement could be affected in hypokinesia. 
Rewards have not been shown to change the aspects of a hypokinesic individual's movement. In fact, the motor planning and control of a patient with hypokinesia is already as efficient as possible (as shown by slightly faster, but generally the same, movement after deep brain stimulation of the subthalamic nucleus). This suggests that hypokinetic individuals simply have a narrower range of movement that does not increase relative to motivation. Other studies have come to the same conclusion about rewards and hypokinesia, but have shown that aversive stimuli can, in fact, reduce hypokinesic movement. Dopamine is either less involved or plays a more complex role in the response to punishment than it does in the response to rewards, as the hypodopaminergic striatum allows more movement in response to aversive stimuli. Demographic differentiation Gender More men than women typically develop hypokinesia, which is reflected in young and middle-aged populations, where females display higher levels of nigrostriatal dopamine than males. In the elderly, however, this differentiation is not present. Typically, women exhibit more tremor at the beginning of the development of hypokinesia. In the disorder, men tend to display more rigidity and women more bradykinesic motor behavior. Age of onset Hypokinesia presents in the brain and outwardly slightly differently depending on when an individual is first affected. In young-onset hypokinesia (younger than 45 years of age), typically slightly more cell loss occurs in the substantia nigra, with more displayed dystonia and muscle stiffness. In old-onset hypokinesia (older than 70 years of age), typically more of a hypokinesic gait and difficulty walking are seen, with no dystonia. Both onsets can display resting tremor, although it is more generally found in old-onset cases. 
Symptoms Stress causes alterations of cerebral circulation, increasing blood flow in the supramarginal gyrus and angular gyrus of the parietal lobe, the frontal lobe, and the superior temporal gyrus of the left hemisphere. An increase in cardiac activity and a change in the tonus of the heart vessels also occur, which are elementary indications of stress development. In people under normal stress, an adaptive fight-or-flight response is usually triggered by sympathetic nervous system activation. Hypokinesia patients experience these typical stress symptoms on a regular basis because of damage to the basal ganglia system. Therefore, when a person with hypokinesia is under stress, he or she does not display a typical fight-or-flight response, placing the patient in greater danger from potentially harmful stimuli. Low-impact exercise, elimination of drug and alcohol use, and regular meditation can help to restore normal stress responses in hypokinesia patients. Connections to other medical conditions Though it is most often associated with Parkinson's disease, hypokinesia can be present in a wide variety of other conditions. See also Akinetic mutism Hyperkinesia == References ==
You function as a medical dictionary. Offer comprehensive and accurate descriptions of medical terms, ensuring clarity and depth in your explanations.
Could you please explain the term 'Subdural effusion' in simple language?
Subdural effusion refers to an effusion in the subdural space, usually of cerebrospinal fluid. It is sometimes treated with surgery. See also Cerebrospinal fluid leak References == External links ==
You function as a medical informant. Please provide in-depth yet accessible descriptions of medical terms, suitable for a broad audience.
I'm looking for a concise explanation of the medical term 'Emotional and behavioral disorders.'
Emotional and behavioral disorders (EBD; also known as behavioral and emotional disorders) refer to a disability classification used in educational settings that allows educational institutions to provide special education and related services to students who have displayed poor social and/or academic progress. The classification is often given to students after a Functional Behavior Analysis has been conducted. These students need individualized behavior supports, such as a Behavior Intervention Plan, to receive a free and appropriate public education. Students with EBD may be eligible for an Individualized Education Plan (IEP) and/or accommodations in the classroom through a 504 Plan. History Early history Before any studies were done on the subject, mental illnesses were often thought to be a form of demonic possession or witchcraft. Since much was unknown, there was little to no distinction between the different types of mental illness and developmental disorders that we refer to today. Most often, they were dealt with by performing an exorcism on the person exhibiting signs of mental illness. In the early to mid-1800s, asylums were introduced to America and Europe. There, patients were treated cruelly and often referred to as lunatics by the doctors in the professional fields. The main focus of asylums was to shun people with mental illnesses from the public. In 1963, the Community Mental Health Centers Construction Act (Public Law 88–164) was passed by Congress and signed by John F. Kennedy; it provided federal funding to community mental health centers. This legislation changed the way that mental health services were handled and also led to the closure of many large asylums. Many laws soon followed, assisting more and more people with EBDs. In 1975, Public Law 94-142 was passed, requiring free public education for all disabled children, including those with EBDs. 
An extension of PL 94-142, PL 99-457, was enacted, providing services to all disabled children from ages 3 to 5 by the 1990–91 school year. PL 94-142 has since been renamed the Individuals with Disabilities Education Act (IDEA). Use and development of the term Various terms have been used to describe irregular emotional and behavioral disorders. Many of the terms, such as mental illness and psychopathology, were used to describe adults with such conditions. Mental illness was a label for most people with any type of disorder, and it was common for people with emotional and behavioral disorders to be labeled with a mental illness. However, those terms were avoided when describing children, as they seemed too stigmatizing. In the late 1900s, the term "behaviorally disordered" appeared. Some professionals in the field of special education accepted the term, while others felt it ignored emotional issues. In order to create a more uniform terminology, the National Mental Health and Special Education Coalition, which consists of over thirty professional and advocacy groups, coined the term "emotional and behavioral disorders" in 1988. Criteria According to the Individuals with Disabilities Education Act, an EBD classification is required if one or more of the following characteristics is excessively observed in a student over a significant amount of time: learning challenges that cannot be explained by intellectual, sensory, or health factors; trouble keeping up or building satisfactory relationships with peers and teachers; inappropriate behavior (against self or others) or emotions (expressing the need to harm others or self, low self-worth) in normal conditions; an overall attitude of unhappiness or depression; and a tendency to develop physical symptoms or fears related to individual or school issues. The term "EBD" includes students diagnosed with schizophrenia. 
However, it does not apply to students who are socially maladjusted unless they also meet the above criteria. Criticisms Providing or failing to provide an EBD classification to a student may be controversial, as the IDEA does not clarify which children would be considered "socially maladjusted". Students with a psychiatric diagnosis of conduct disorder are not guaranteed to receive additional educational services under an EBD classification. Students with an EBD classification who meet the diagnostic criteria for various disruptive behavior disorders, including attention-deficit hyperactivity disorder (ADHD), oppositional defiant disorder (ODD), or conduct disorder (CD), are not automatically eligible to receive an IEP or 504 Plan. Students considered "socially maladjusted" but ineligible for an EBD classification (i.e., students diagnosed with conduct disorder) often receive better educational services in special education classrooms or alternative schools with high structure, clear rules, and consistent consequences. Student characteristics Students with EBD are a diverse population with a wide range of intellectual and academic abilities. Males, African-Americans, and economically disadvantaged students are over-represented in the EBD population, and students with EBD are more likely to live in single-parent homes, foster homes, or other non-traditional living situations. These students also tend to have low rates of positive social interactions with peers in educational contexts. Students with EBD are often categorized as "internalizers" (e.g., have poor self-esteem, or are diagnosed with an anxiety disorder or mood disorder) or "externalizers" (e.g., disrupt classroom instruction, or are diagnosed with disruptive behavior disorders such as oppositional defiant disorder and conduct disorder). 
Male students may be over-represented in the EBD population because they appear to be more likely to exhibit disruptive externalizing behavior that interferes with classroom instruction. Females may be more likely to exhibit internalizing behavior that does not interfere with classroom instruction, though to what extent this perception is due to social expectations of differences in male and female behavior is unclear. In any case, it is important to note that both internalizing and externalizing behavior can and do occur in either sex. Students with EBD are also at an increased risk for learning disabilities, school dropout, substance abuse, and juvenile delinquency. Internalizing and externalizing behavior A person with EBD with "internalizing" behavior may have poor self-esteem, suffer from depression, experience loss of interest in social, academic, and other life activities, and may exhibit non-suicidal self-injury or substance abuse. Students with internalizing behavior may also have a diagnosis of separation anxiety or another anxiety disorder, post-traumatic stress disorder (PTSD), specific or social phobia, obsessive–compulsive disorder (OCD), panic disorder, and/or an eating disorder. Teachers are more likely to write referrals for students who are overly disruptive. Screening tools used to detect students with high levels of "internalizing" behavior are not sensitive and are rarely used in practice. Students with EBD with "externalizing" behavior may be aggressive, non-compliant, extroverted, or disruptive. Students with EBD who show externalizing behavior are often diagnosed with attention deficit hyperactivity disorder (ADHD), oppositional defiant disorder (ODD), conduct disorder, and/or bipolar disorder; however, this population can also include typically developing children who have learned to exhibit externalizing behavior for various reasons (e.g., escape from academic demands or access to attention). 
These students often have difficulty inhibiting emotional responses resulting from anger, frustration, and disappointment. Students who "externalize" exhibit behaviors such as insulting, provoking, threatening, bullying, cursing, and fighting, along with other forms of aggression. Male students with EBD exhibit externalizing behavior more often than their female counterparts. Children and adolescents with ADD or ADHD may display different types of externalizing behavior and may be treated with medication, behavioral treatment, or both. Adolescents with severe ADHD would likely benefit most from both medication and behavioral treatment, while younger children should go through behavioral treatment before being treated with medication. Another recommended form of treatment for children and adolescents diagnosed with ADHD is counseling from a mental health professional. Treatment can improve the performance of children and adolescents on emotion recognition tasks, particularly response time, as these children have no difficulty recognizing human emotions as such. The degree of treatment required varies depending on the severity of the individual's ADD or ADHD. Treatment for these types of behaviors should include the parents, as it is evident that their parenting skills affect how their child deals with their symptoms, especially at a younger age. Parents who went through a parenting skills training program reported a decrease in internalizing and externalizing behavior in their children after the program. The program included learning how to give positive attention, how to increase good behavior with small frequent rewards and specific praise, and how to decrease attention when the child behaved poorly. Effect on cognition In recent years, many researchers have been interested in exploring the relationship between emotional disorders and cognition, and evidence has revealed that there is a relationship between the two. 
Strauman (1989) investigated how emotional disorders shape a person's cognitive structure, that is, the mental processes people use to make sense of the world around them. He recruited three groups of individuals: those with social phobias, those with depression, and controls with no emotional disorder diagnosis. He wanted to determine whether these groups had a cognitive structure showing an actual/ideal (AI) discrepancy (referring to an individual not believing that they have achieved their personal desires) or an actual/own/other (AOO) discrepancy (referring to an individual's actions not living up to what their significant other believes they need to be). He found that depressed individuals had the highest AI discrepancy and social phobics had the greatest AOO discrepancy, while the controls were lower than or in between the two for both discrepancies. Specific cognitive processes (e.g., attention) may also differ in those with emotional disorders. MacLeod, Mathews, and Tata (1986) tested the reaction times of 32 participants, some of whom were diagnosed with generalized anxiety disorder, when presented with threatening words. They found that when threatening words were presented, people with greater anxiety tended to show increased selective attention, meaning that they reacted more quickly to a stimulus in an area where a threatening word had just been presented (32-59 ms faster). In the control group, subjects reacted more slowly when a threatening word preceded the stimulus (16-32 ms slower). Emotional disorders can also alter the way people regulate their emotions. Joormann and Gotlib (2010) conducted a study with depressed, or previously depressed, individuals to test this. They found that, compared to individuals who have never had a depressive episode, previously and currently depressed individuals tended to make greater use of maladaptive emotion regulation strategies (such as rumination or brooding). 
They also found that when depressed individuals displayed cognitive inhibition (slowing of response to a variable that had previously been ignored) when asked to describe a negative word (where the ignored variable was a positive word), they were less likely to ruminate or brood. When they displayed cognitive inhibition when asked to describe a positive word (where the ignored variable was a negative word), they were more likely to reflect. Services in the United States There are many types of services available to EBD students, referenced below. One service is one-on-one support (an aide) who assists in everyday activities and academics. Others are foundations that offer behavior services as well as counseling support. Some services include classrooms that are dedicated to educational foundations and work on building the student up positively. States also offer dedicated schools with multiple resources that help students with EBD excel and transition (back) into local schools. Texas The state of Texas has the Texas Behavior Support Initiative (TBSI), authorized by Senate Bill 1196 and Texas Administrative Code §89.1053. It is designed to provide knowledge about the use of constructive behavior interventions and to aid students, including students with disabilities. TBSI meets the legislative requirements for the use of restraint and time-out, along with providing the groundwork for behavior strategies and prevention in each environment. New York The state of New York has Foundations Behavioral Health, an approved out-of-state education and residential provider with the New York State Education Department. Foundations offers academic and behavioral health services to students between the ages of 14 and 21. This program builds strategic interventions into students' educational experience to aid their social and behavioral functioning. 
Some of the program's highlights include Functional Behavioral Assessment (FBA), Behavioral Intervention Plan (BIP), and Community Based Instruction (CBI). California The state of California has Spectrum Center classrooms in Los Angeles and the San Francisco area, which provide emotional disability and behavioral services. They provide academic classrooms for students who are actively working to meet grade-level standards and to earn their high school diploma. The main practice is the use of Positive Behavior Interventions and Supports (PBIS). PBIS instructional practices help students determine their skill level and progress, restore their skills through direct instruction, learn the standards at their grade level, and take part in small-group counseling. Michigan The state of Michigan has a Behavioral Education Center (BEC) in Bangor. Its purpose is to aid local school districts with students between the ages of 5 and 26 with EBDs, and to have students use appropriate behaviors and skills so that they can successfully return to their local school setting. Classroom programs, consultation, coaching, and professional development services are available within the school districts. Florida The state of Florida has the Students with Emotional/Behavioral Disabilities Network (SEDNET). SEDNET projects across the state aid local school districts in working with those at risk of EBDs. "Dealing with adverse behavior in the educational environment," it serves students who function poorly at home, school, or in the community due to drug and substance abuse or mental health issues. SEDNET 2A services include: the Family Services Planning Team (FSPT), in which agencies, school officials, and SEDNET meet with parents to address the child's poor performance at school and home; Positive Behavior Support, providing technical assistance to promote positive behavior; and Classroom Observation/Teacher Consultation, working with EBD children using successful strategies and tips in a classroom environment. 
References External links "What is an emotional or behavioral disorder?" (PDF). Children's mental health and emotional or behavioral disorders project. PACER: Minnesota Parent Training and Information Center. Archived from the original (PDF) on 2 October 2006. Retrieved 29 June 2022. Behaviour Management (EBD) Review Group: Published reviews
You are a medical advisor. Your role is to break down medical terminology into clear, detailed explanations for a general audience.
What does the medical term 'Asenapine' encompass?
Asenapine, sold under the brand name Saphris among others, is an atypical antipsychotic medication used to treat schizophrenia and acute mania associated with bipolar disorder. It was derived by altering the chemical structure of the tetracyclic (atypical) antidepressant mianserin. It was initially approved in the United States in 2009 and approved as a generic medication in 2020. Medical uses Asenapine has been approved by the FDA for the acute treatment of adults with schizophrenia and the acute treatment of manic or mixed episodes associated with bipolar I disorder, with or without psychotic features, in adults. In Australia, asenapine's approved indications (also listed on the PBS) include the following: Schizophrenia. Treatment, for up to 6 months, of an episode of acute mania or mixed episodes associated with bipolar I disorder. Maintenance treatment, as monotherapy, of bipolar I disorder. In the European Union and the United Kingdom, asenapine is only licensed for use as a treatment for acute mania in bipolar I disorder. Asenapine is absorbed readily if administered sublingually but is poorly absorbed when swallowed. A transdermal formulation of asenapine was approved in the United States in October 2019 under the brand name Secuado. Schizophrenia A Cochrane systematic review found preliminary evidence that asenapine improves positive, negative, and depressive symptoms, but concluded that there is not enough research to firmly recommend asenapine for the treatment of schizophrenia. Acute mania As for its efficacy in the treatment of acute mania, a recent meta-analysis showed that it produces comparatively smaller improvements in manic symptoms in patients with acute mania and mixed episodes than most other antipsychotic drugs (with the exception of ziprasidone), such as risperidone and olanzapine. Drop-out rates in clinical trials were also unusually high with asenapine. 
According to a post-hoc analysis of two 3-week clinical trials, it may possess some antidepressant effects in patients with acute mania or mixed episodes. Adverse effects Very common (>10% incidence) adverse effects include: somnolence. Common (1-10% incidence) adverse effects include: weight gain; increased appetite; extrapyramidal side effects (EPS, such as dystonia, akathisia, dyskinesia, muscle rigidity, and parkinsonism); sedation; dizziness; dysgeusia; oral hypoaesthesia; increased alanine aminotransferase; and fatigue. Uncommon (0.1-1% incidence) adverse effects include: hyperglycaemia (elevated blood glucose); syncope; seizure; dysarthria; sinus bradycardia; bundle branch block; QTc interval prolongation (asenapine has a relatively low risk of causing QTc interval prolongation); sinus tachycardia; orthostatic hypotension; hypotension; swollen tongue; dysphagia (difficulty swallowing); glossodynia; and oral paraesthesia. Rare (0.01-0.1% incidence) adverse effects include: neuroleptic malignant syndrome (a combination of fever, muscle stiffness, faster breathing, sweating, reduced consciousness, and sudden changes in blood pressure and heart rate); tardive dyskinesia; speech disturbance; rhabdomyolysis; angioedema; blood dyscrasias such as agranulocytosis, leukopenia, and neutropenia; accommodation disorder; pulmonary embolism; gynaecomastia; and galactorrhoea. Adverse effects of unknown incidence include: allergic reaction; restless legs syndrome; nausea; oral mucosal lesions (ulcerations, blistering, and inflammation); salivary hypersecretion; and hyperprolactinaemia. Asenapine seems to have a relatively low weight gain liability for an atypical antipsychotic (a class notorious for its metabolic side effects), and a 2013 meta-analysis found significantly less weight gain (SMD [standard mean difference in weight gained in those on placebo vs. 
active drug]: 0.23; 95% CI: 0.07 to 0.39) than paliperidone (SMD: 0.38; 95% CI: 0.27 to 0.48), risperidone (SMD: 0.42; 95% CI: 0.33 to 0.50), quetiapine (SMD: 0.43; 95% CI: 0.34 to 0.53), sertindole (SMD: 0.53; 95% CI: 0.38 to 0.68), chlorpromazine (SMD: 0.55; 95% CI: 0.34 to 0.76), iloperidone (SMD: 0.62; 95% CI: 0.49 to 0.74), clozapine (SMD: 0.65; 95% CI: 0.31 to 0.99), zotepine (SMD: 0.71; 95% CI: 0.47 to 0.96), and olanzapine (SMD: 0.74; 95% CI: 0.67 to 0.81), and approximately as much weight gain (that is, no statistically significant difference at the p=0.05 level) as aripiprazole (SMD: 0.17; 95% CI: 0.05 to 0.28), lurasidone (SMD: 0.10; 95% CI: -0.02 to 0.21), amisulpride (SMD: 0.20; 95% CI: 0.05 to 0.35), haloperidol (SMD: 0.09; 95% CI: 0.00 to 0.17), and ziprasidone (SMD: 0.10; 95% CI: -0.02 to 0.22). Its potential for elevating plasma prolactin levels also seems relatively limited according to this meta-analysis. The same meta-analysis found that asenapine has approximately the same odds ratio for causing sedation, compared to placebo-treated patients (3.28; 95% CI: 1.37 to 6.69), as olanzapine (3.34; 95% CI: 2.46 to 4.50) and haloperidol (2.76; 95% CI: 2.04 to 3.66), and a higher (although not significantly higher) odds ratio for sedation than, for example, aripiprazole (1.84; 95% CI: 1.05 to 3.05), paliperidone (1.40; 95% CI: 0.85 to 2.19), and amisulpride (1.42; 95% CI: 0.72 to 2.51); it is hence a mildly to moderately sedating antipsychotic. The same meta-analysis suggested that asenapine had a relatively high risk of extrapyramidal symptoms compared to other atypical antipsychotics, but a lower risk than first-generation (typical) antipsychotics. Discontinuation For all antipsychotics, the British National Formulary recommends a gradual dose reduction when discontinuing, to avoid acute withdrawal syndrome or rapid relapse. Symptoms of withdrawal commonly include nausea, vomiting, and loss of appetite. Other symptoms may include restlessness, increased sweating, and trouble sleeping. 
Less commonly there may be a feeling of the world spinning, numbness, or muscle pains. Symptoms generally resolve after a short period of time. There is tentative evidence that discontinuation of antipsychotics can result in psychosis. It may also result in recurrence of the condition that is being treated. Rarely, tardive dyskinesia can occur when the medication is stopped. Pharmacology Pharmacodynamics Asenapine shows high affinity (pKi) for numerous receptors, including the serotonin 5-HT1A (8.6), 5-HT1B (8.4), 5-HT2A (10.2), 5-HT2B (9.8), 5-HT2C (10.5), 5-HT5A (8.8), 5-HT6 (9.5), and 5-HT7 (9.9) receptors, the adrenergic α1 (8.9), α2A (8.9), α2B (9.5), and α2C (8.9) receptors, the dopamine D1 (8.9), D2 (8.9), D3 (9.4), and D4 (9.0) receptors, and the histamine H1 (9.0) and H2 (8.2) receptors. It has much lower affinity (pKi < 5) for the muscarinic acetylcholine receptors. Asenapine behaves as a partial agonist at the 5-HT1A receptors; at all other targets it is an antagonist. As of November 2010, asenapine was also in clinical trials at UC Irvine as a treatment for stuttering. Even relative to other atypical antipsychotics, asenapine has unusually high affinity for the 5-HT2A, 5-HT2C, 5-HT6, and 5-HT7 receptors, and very high affinity for the α2 and H1 receptors. References External links "Asenapine". Drug Information Portal. U.S. National Library of Medicine. "Asenapine maleate". Drug Information Portal. U.S. National Library of Medicine.
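The pKi values listed in the pharmacodynamics section above are negative base-10 logarithms of the binding constant in molar units, so they can be converted back to concentrations. A minimal sketch of that arithmetic (the helper name is illustrative, not from any pharmacology library):

```python
def pki_to_ki_nm(pki: float) -> float:
    """Convert a pKi (-log10 of Ki in molar) to a Ki in nanomolar."""
    return 10 ** (-pki) * 1e9  # molar -> nanomolar

# Two of the affinities quoted above:
print(round(pki_to_ki_nm(10.5), 3))  # 5-HT2C: 0.032 nM (sub-nanomolar)
print(round(pki_to_ki_nm(8.9), 2))   # D2: 1.26 nM
```

Higher pKi therefore means tighter binding: each unit corresponds to a ten-fold lower Ki.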
You are an assistant in medical learning. Your role is to clarify medical terminology, ensuring the information is both accurate and accessible.
I'm trying to expand my medical knowledge. Can you elucidate the term 'Hypervolemia'?
Hypervolemia, also known as fluid overload, is the medical condition where there is too much fluid in the blood. The opposite condition is hypovolemia, which is too little fluid volume in the blood. Fluid volume excess in the intravascular compartment occurs due to an increase in total body sodium content and a consequent increase in extracellular body water. The mechanism usually stems from compromised regulatory mechanisms for sodium handling, as seen in congestive heart failure (CHF), kidney failure, and liver failure. It may also be caused by excessive intake of sodium from foods, intravenous (IV) solutions and blood transfusions, medications, or diagnostic contrast dyes. Treatment typically includes administration of diuretics and limiting the intake of water, fluids, sodium, and salt. Signs and symptoms The excess fluid, primarily salt and water, builds up in various locations in the body and leads to an increase in weight, swelling in the legs and arms (peripheral edema), and/or fluid in the abdomen (ascites). Eventually, the fluid may enter the air spaces in the lungs (pulmonary edema), reducing the amount of oxygen that can enter the blood and causing shortness of breath (dyspnea), or it may enter the pleural space by transudation (pleural effusion, which also causes dyspnea); these findings generally accompany an increased central venous pressure. It can also cause swelling of the face. Fluid can also collect in the lungs when lying down at night, possibly making nighttime breathing and sleeping difficult (paroxysmal nocturnal dyspnea). Complications Congestive heart failure is the most common result of fluid overload. It may also be associated with hyponatremia (hypervolemic hyponatremia). Causes Excessive sodium and/or fluid intake: IV therapy containing sodium; a transfusion reaction to a rapid blood transfusion; 
High intake of sodium. Sodium and water retention: Heart failure; Liver cirrhosis; Nephrotic syndrome; Corticosteroid therapy; Hyperaldosteronism; Low protein intake. Fluid shift into the intravascular space: Fluid remobilization after burn treatment; Administration of hypertonic fluids, e.g. mannitol or hypertonic saline solution; Administration of plasma proteins, such as albumin. Treatment Treatment includes diuretics, particularly loop diuretics. See also Volume status; Volume overload; Fluid balance; Edema; Anasarca (swelling of skin); Pleural effusion (excess fluid in the pleural cavity); Diuretics; Aquapheresis; Low pressure receptor zones; Apsia
You are a medical advisor. Your role is to break down medical terminology into clear, detailed explanations for a general audience.
I've come across the term 'Nelfinavir' in a medical context, but I'm not sure what it means. Can you clarify?
Nelfinavir, sold under the brand name Viracept, is an antiretroviral medication used in the treatment of HIV/AIDS. Nelfinavir belongs to the class of drugs known as protease inhibitors (PIs) and, like other PIs, is almost always used in combination with other antiretroviral drugs. Nelfinavir is an orally bioavailable human immunodeficiency virus type 1 (HIV-1) protease inhibitor (Ki = 2 nM) and is widely prescribed in combination with HIV reverse transcriptase inhibitors for the treatment of HIV infection. It was patented in 1992 and approved for medical use in 1997. Toxicity Common (>1%) side effects include insulin resistance, hyperglycemia, and lipodystrophy. Nelfinavir can produce a range of adverse side effects. Flatulence, diarrhea, or abdominal pain are common (i.e. experienced by more than one in one hundred patients). Fatigue, urination, rash, mouth ulcers, or hepatitis are less frequent effects (experienced by one in one thousand to one in one hundred patients). Nephrolithiasis, arthralgia, leukopenia, pancreatitis, or allergic reactions may occur, but are rare (less than one in one thousand patients). Other bioactivity Antiviral Nelfinavir inhibits maturation and export of the herpes simplex 1 virus and Kaposi's sarcoma virus. Anti-virulence activity Nelfinavir and simple derivatives have been found to inhibit the production of the virulence factor streptolysin S, a cytolysin produced by the human pathogen Streptococcus pyogenes. Nelfinavir and these related molecules did not exhibit detectable antibiotic activity, but did also inhibit the production of other biologically active molecules, including plantazolicin (an antibiotic), listeriolysin S (a cytolysin), and clostridiolysin S (a cytolysin), by other bacteria. Interactions Nelfinavir's interaction profile is similar to that of other protease inhibitors. Most interactions occur at the level of the cytochrome P450 isozymes CYP3A4 and CYP2C19, by which nelfinavir is metabolised. 
Pharmacology Nelfinavir should be taken with food. Taking the drug with food decreases the risk of diarrhea as a side effect. Mechanism of action Nelfinavir is a protease inhibitor: it inhibits the HIV-1 and HIV-2 proteases. HIV protease is an aspartate protease which splits viral protein molecules into smaller fragments, and it is vital both to the replication of the virus within the cell and to the release of mature viral particles from an infected cell. Nelfinavir is a competitive inhibitor (Ki = 2 nM) designed to bind tightly and resist cleavage, owing to the presence of a hydroxyl group, as opposed to a keto group, in its central amino acid residue mimic (which would otherwise be S-phenylcysteine). All protease inhibitors bind to the protease; the precise mode of binding determines how the molecule inhibits it. The way nelfinavir binds the enzyme may be sufficiently unique to reduce cross-resistance between it and other PIs. Also, not all PIs inhibit both the HIV-1 and HIV-2 proteases. History Nelfinavir was developed by Agouron Pharmaceuticals as part of a joint venture with Eli Lilly and Company. Agouron Pharmaceuticals was acquired by Warner Lambert in 1999 and is now a subsidiary of Pfizer. It is marketed in Europe by Hoffmann-La Roche and elsewhere by ViiV Healthcare. The U.S. Food and Drug Administration (FDA) approved it for therapeutic use on March 14, 1997, making it the twelfth approved antiretroviral. The initial product launch proved to be the largest "biotech launch" in the history of the pharmaceutical industry, achieving first-full-year sales exceeding US$335 million. Agouron's patent on the drug expired in 2014. On 6 June 2007, both the Medicines and Healthcare products Regulatory Agency and the European Medicines Agency put out an alert requesting the recall of any of the drug in circulation, because some batches may have been contaminated with potentially cancer-causing chemicals. 
Research Since 2009, nelfinavir has been under investigation for potential use as an anti-cancer agent. When applied to cancer cells in culture (in vitro), it can inhibit the growth of a variety of cancer types and can trigger cell death (apoptosis). When nelfinavir was given to laboratory mice with tumors of the prostate or of the brain, it suppressed tumor growth in these animals. At the cellular level, nelfinavir exerts multiple effects to inhibit cancer growth; the two main ones appear to be inhibition of the Akt/PKB signaling pathway and activation of endoplasmic reticulum stress with a subsequent unfolded protein response. In the United States, about three dozen clinical trials are being conducted (or have been completed) to determine whether nelfinavir is effective as a cancer therapeutic agent in humans. In some of these trials, nelfinavir is used alone in monotherapy fashion, whereas in others it is combined with other modes of cancer therapy, such as well-established chemotherapeutic agents or radiation therapy. As of April 2022, nelfinavir is being studied as a radiosensitizing agent as part of the treatment of advanced cervical cancer. References Further reading Pai, VB; Nahata, MC (March 1999). "Nelfinavir mesylate: a protease inhibitor". The Annals of Pharmacotherapy. 33 (3): 325–39. doi:10.1345/aph.18089. PMID 10200859. S2CID 24066955. Bardsley-Elliot, A; Plosker, GL (March 2000). "Nelfinavir: an update on its use in HIV infection". Drugs. 59 (3): 581–620. doi:10.2165/00003495-200059030-00014. PMID 10776836. External links "Nelfinavir". Drug Information Portal. U.S. National Library of Medicine.
You act as a medical communicator. Your goal is to translate medical terms into comprehensive, yet straightforward language for all readers.
What does the medical term 'Reslizumab' encompass?
Reslizumab is a humanized monoclonal antibody against human interleukin-5 (IL-5). Reslizumab binds specifically to IL-5, a key cytokine responsible for the differentiation, maturation, recruitment, and activation of human eosinophils. By binding to human IL-5, it blocks its biological function; consequently, the survival and activity of eosinophils are reduced. The benefits of reslizumab are its ability to reduce the exacerbation rate and improve lung function and asthma-related quality of life in patients with severe eosinophilic asthma (with a blood eosinophil count ≥ 400 cells/μL) and with at least one previous asthma exacerbation in the preceding year. The most common side effects are increased blood creatine phosphokinase, myalgia, and anaphylactic reactions. The FDA approved reslizumab (US trade name Cinqair) for use with other asthma medicines for the maintenance treatment of severe asthma in patients aged 18 years and older on 23 March 2016. Cinqair is approved for patients who have a history of severe asthma attacks (exacerbations) despite receiving their current asthma medicines. The European Medicines Agency recommended the granting of a marketing authorisation for reslizumab (EU trade name Cinqaero), intended as an add-on treatment in adult patients with severe eosinophilic asthma, on 23 June 2016. Reslizumab is supplied as a refrigerated, sterile, single-use, preservative-free solution for intravenous infusion. The reslizumab solution is a slightly hazy/opalescent, slightly yellow liquid and is supplied as 100 mg in a 10 mL glass vial. Each single-use vial of reslizumab is formulated as 10 mg/mL reslizumab in an aqueous solution containing 2.45 mg/mL sodium acetate trihydrate, 0.12 mg/mL glacial acetic acid, and 70 mg/mL sucrose, with a pH of 5.5. Medical uses Eosinophilic asthma Reslizumab was first used for eosinophilic asthma in 2008. In a 106-patient phase II clinical trial, researchers showed reslizumab was effective in reducing sputum eosinophils. 
Furthermore, the patients receiving reslizumab showed improvements in airway function, and a general trend toward greater asthma control than in those receiving placebo was observed. A large, 981-patient phase III clinical trial showed that reslizumab was effective at improving lung function, asthma control, and quality of life in comparison to placebo. These results led to the FDA approval, on March 23, 2016, for the maintenance treatment of severe asthma in patients aged 18 years and older with an eosinophilic phenotype. Adverse effects Common adverse effects include: oropharyngeal pain. Less common adverse effects include: musculoskeletal pain; neck pain; muscle spasms; extremity pain; muscle fatigue; anaphylaxis; malignancy. The most common adverse effect of reslizumab was oropharyngeal (mouth and throat) pain. According to the phase III clinical trial data, oropharyngeal pain occurred in ≥2% of individuals, along with elevated baseline creatine phosphokinase (CPK), which was more common in patients treated with reslizumab than with placebo. Myalgia was also reported more often in patients in the reslizumab 3 mg/kg group than in the placebo group, as were some musculoskeletal adverse reactions. Lastly, some serious adverse reactions that occurred in subjects treated with reslizumab but not in those treated with placebo included anaphylaxis and malignancy. Pharmacology Mechanism of action Reslizumab is an interleukin-5 antagonist monoclonal antibody. IL-5 is the major cytokine responsible for the growth and differentiation, recruitment, activation, and survival of eosinophils. Eosinophils play a role in the mediation of inflammation in the airways. Eosinophilic asthma is a phenotype of asthma that is characterized by a higher than normal presence of eosinophils in the lung and sputum. It has been shown that the number of eosinophils in the blood and bronchial fluid can correlate with asthma severity. 
Reslizumab binds to IL-5 with a dissociation constant of 81 pM, inhibiting IL-5 signaling and thereby reducing the production and survival of eosinophils. However, the mechanism of reslizumab action in asthma has not been definitively established. Pharmacodynamics Reductions in blood eosinophil counts were observed following the first dose of reslizumab and maintained through 52 weeks of treatment. In phase III clinical trials, mean eosinophil counts were 696 cells/µL (n=245) and 624 cells/µL (n=244) at baseline. Following 52 weeks of treatment, eosinophil counts were 55 cells/µL (a 92% reduction, n=212) in the reslizumab group and 496 cells/µL (a 21% reduction, n=212) in the placebo group. Furthermore, eosinophil counts returned towards baseline in those reslizumab-treated patients who completed a follow-up assessment (n=35, 480 cells/µL) approximately 120 days after the last dose of reslizumab. Reductions in blood eosinophils were therefore related to reslizumab serum levels. Pharmacokinetics The pharmacokinetic characteristics of reslizumab are similar across children and adults. Peak serum concentrations are observed at the end of infusion and decline in a biphasic manner. The mean observed accumulation ratio of reslizumab following multiple doses ranged from 1.5- to 1.9-fold. Reslizumab has a volume of distribution of approximately 5 L, a clearance of approximately 7 mL/hour, and a half-life of about 24 days. Reslizumab is degraded by enzymatic proteolysis into small peptides and amino acids, as are other monoclonal antibodies. History Reslizumab was initially developed by Chuan-Chu Chou at Schering-Plough and was previously known as SCH-55700. In 1993, Chou and his group at Schering-Plough were granted the patent for the design, cloning, and expression of reslizumab. Ception Therapeutics acquired the drug and continued its development under the name CTx55700. 
In 2010, Ception Therapeutics was acquired by Cephalon for $250 million, and development of the drug continued under the codename CEP-38072. In 2011, Teva Pharmaceuticals acquired Cephalon for $6.8 billion and continued the development of reslizumab.

References

External links

"Reslizumab". Drug Information Portal. U.S. National Library of Medicine.
You function as a medical informant. Please provide in-depth yet accessible descriptions of medical terms, suitable for a broad audience.
Could you please explain the term 'Hallucination' in simple language?
A hallucination is a perception in the absence of an external stimulus that has the qualities of a real perception. Hallucinations are vivid, substantial, and perceived to be located in external objective space. Hallucination has been described as a combination of two conscious brain states: wakefulness and REM sleep. Hallucinations are distinguishable from several related phenomena, such as dreaming (REM sleep), which does not involve wakefulness; pseudohallucination, which does not mimic real perception and is accurately perceived as unreal; illusion, which involves distorted or misinterpreted real perception; and mental imagery, which does not mimic real perception and is under voluntary control. Hallucinations also differ from "delusional perceptions", in which a correctly sensed and interpreted stimulus (i.e., a real perception) is given some additional significance. Many hallucinations also occur during sleep paralysis.

Hallucinations can occur in any sensory modality: visual, auditory, olfactory, gustatory, tactile, proprioceptive, equilibrioceptive, nociceptive, thermoceptive, and chronoceptive. Hallucinations are referred to as multimodal if multiple sensory modalities occur.

A mild form of hallucination is known as a disturbance, and can occur in most of the senses above. These may be things like seeing movement in peripheral vision, or hearing faint noises or voices. Auditory hallucinations are very common in schizophrenia. They may be benevolent (telling the subject good things about themselves) or malicious, cursing the subject. 55% of auditory hallucinations are malicious in content; for example, people talking about the subject rather than speaking to them directly. Like auditory hallucinations, the source of the visual counterpart can also be behind the subject. This can produce a feeling of being watched or stared at, usually with malicious intent.
Frequently, auditory hallucinations and their visual counterpart are experienced by the subject together.

Hypnagogic hallucinations and hypnopompic hallucinations are considered normal phenomena. Hypnagogic hallucinations can occur as one is falling asleep, and hypnopompic hallucinations occur when one is waking up. Hallucinations can be associated with drug use (particularly deliriants), sleep deprivation, psychosis, neurological disorders, and delirium tremens.

The word "hallucination" itself was introduced into the English language by the 17th-century physician Sir Thomas Browne in 1646, derived from the Latin word alucinari, meaning to wander in the mind. For Browne, hallucination means a sort of vision that is "depraved and receive[s] its objects erroneously".

Classification

Hallucinations may be manifested in a variety of forms. Various forms of hallucinations affect different senses, sometimes occurring simultaneously, creating multiple sensory hallucinations for those experiencing them.

Auditory

Auditory hallucinations (also known as paracusia) are the perception of sound without outside stimulus. Auditory hallucinations can be divided into elementary and complex, along with verbal and nonverbal. These hallucinations are the most common type of hallucination, with auditory verbal hallucinations being more common than nonverbal. Elementary hallucinations are the perception of sounds such as hissing, whistling, an extended tone, and more. In many cases, tinnitus is an elementary auditory hallucination. However, some people who experience certain types of tinnitus, especially pulsatile tinnitus, are actually hearing the blood rushing through vessels near the ear. Because an auditory stimulus is present in this situation, it does not qualify as a hallucination. Complex hallucinations are those of voices, music, or other sounds that may or may not be clear, may or may not be familiar, and may be friendly or aggressive, among other possibilities.
A hallucination of one or more talking voices is particularly associated with psychotic disorders such as schizophrenia, and holds special significance in diagnosing these conditions. In schizophrenia, voices are normally perceived as coming from outside the person, but in dissociative disorders they are perceived as originating from within the person, commenting in their head instead of behind their back. Differential diagnosis between schizophrenia and dissociative disorders is challenging due to many overlapping symptoms, especially Schneiderian first-rank symptoms such as hallucinations. However, many people who do not have a diagnosable mental illness may sometimes hear voices as well. One important example to consider when forming a differential diagnosis for a patient with paracusia is lateral temporal lobe epilepsy. Despite the tendency to associate hearing voices, or otherwise hallucinating, and psychosis with schizophrenia or other psychiatric illnesses, it is crucial to consider that, even if a person does exhibit psychotic features, they do not necessarily have a psychiatric disorder. Disorders such as Wilson's disease, various endocrine diseases, numerous metabolic disturbances, multiple sclerosis, systemic lupus erythematosus, porphyria, sarcoidosis, and many others can present with psychosis. Musical hallucinations are also relatively common in terms of complex auditory hallucinations and may be the result of a wide range of causes, ranging from hearing loss (such as in musical ear syndrome, the auditory version of Charles Bonnet syndrome), lateral temporal lobe epilepsy, arteriovenous malformation, stroke, lesion, abscess, or tumor.

The Hearing Voices Movement is a support and advocacy group for people who hallucinate voices but do not otherwise show signs of mental illness or impairment. High caffeine consumption has been linked to an increase in the likelihood of experiencing auditory hallucinations.
A study conducted by the La Trobe University School of Psychological Sciences revealed that as few as five cups of coffee a day (approximately 500 mg of caffeine) could trigger the phenomenon.

Visual

A visual hallucination is "the perception of an external visual stimulus where none exists". A separate but related phenomenon is a visual illusion, which is a distortion of a real external stimulus. Visual hallucinations are classified as simple or complex:

Simple visual hallucinations (SVH) are also referred to as non-formed visual hallucinations and elementary visual hallucinations. These terms refer to lights, colors, geometric shapes, and indiscrete objects. They can be further subdivided into phosphenes, which are SVH without structure, and photopsias, which are SVH with geometric structures.
Complex visual hallucinations (CVH) are also referred to as formed visual hallucinations. CVHs are clear, lifelike images or scenes such as people, animals, objects, places, etc.

For example, one may report hallucinating a giraffe. A simple visual hallucination is an amorphous figure that may have a similar shape or color to a giraffe (looks like a giraffe), while a complex visual hallucination is a discrete, lifelike image that is, unmistakably, a giraffe.

Command

Command hallucinations are hallucinations in the form of commands; they appear to be from an external source or can appear to come from the subject's head. The contents of the hallucinations can range from the innocuous to commands to cause harm to the self or others. Command hallucinations are often associated with schizophrenia. People experiencing command hallucinations may or may not comply with the hallucinated commands, depending on the circumstances. Compliance is more common for non-violent commands.

Command hallucinations are sometimes used in defense of a crime that has been committed, often homicide. In essence, it is a voice that one hears, and it tells the listener what to do.
Sometimes the commands are quite benign directives such as "Stand up" or "Shut the door." Whether it is a command for something simple or something that is a threat, it is still considered a "command hallucination." Some helpful questions that can assist in determining whether someone may be experiencing this include: "What are the voices telling you to do?", "When did your voices first start telling you to do things?", "Do you recognize the person who is telling you to harm yourself (or others)?", "Do you think you can resist doing what the voices are telling you to do?"

Olfactory

Phantosmia (olfactory hallucination), smelling an odor that is not actually there, and parosmia (olfactory illusion), inhaling a real odor but perceiving it as a different scent than remembered, are distortions of the sense of smell (olfactory system), and in most cases are not caused by anything serious and will usually go away on their own in time. They can result from a range of conditions such as nasal infections, nasal polyps, dental problems, migraines, head injuries, seizures, strokes, or brain tumors. Environmental exposures can sometimes cause them as well, such as smoking, exposure to certain types of chemicals (e.g., insecticides or solvents), or radiation treatment for head or neck cancer. They can also be a symptom of certain mental disorders such as depression, bipolar disorder, intoxication, substance withdrawal, or psychotic disorders (e.g., schizophrenia). The perceived odors are usually unpleasant and commonly described as smelling burned, foul, spoiled, or rotten.

Tactile

Tactile hallucinations are the illusion of tactile sensory input, simulating various types of pressure to the skin or other organs. One subtype of tactile hallucination, formication, is the sensation of insects crawling underneath the skin and is frequently associated with prolonged cocaine use.
However, formication may also be the result of normal hormonal changes such as menopause, or of disorders such as peripheral neuropathy, high fevers, Lyme disease, skin cancer, and more.

Gustatory

This type of hallucination is the perception of taste without a stimulus. These hallucinations, which are typically strange or unpleasant, are relatively common among individuals who have certain types of focal epilepsy, especially temporal lobe epilepsy. The regions of the brain responsible for gustatory hallucination in this case are the insula and the superior bank of the sylvian fissure.

General somatic sensations

General somatic sensations of a hallucinatory nature are experienced when an individual feels that their body is being mutilated, i.e., twisted, torn, or disemboweled. Other reported cases involve invasion by animals of the person's internal organs, such as snakes in the stomach or frogs in the rectum. The general feeling that one's flesh is decomposing is also classified under this type of hallucination.

Multimodal

A hallucination involving multiple sensory modalities is called multimodal, in contrast to unimodal hallucinations, which involve only one sensory modality. The multiple sensory modalities can occur at the same time (simultaneously) or with a delay (serially), be related or unrelated to each other, and be consistent with reality (congruent) or not (incongruent). For example, a person talking in a hallucination would be congruent with reality, but a cat talking would not be. Multimodal hallucinations are correlated with poorer mental health outcomes and are often experienced as feeling more real.

Cause

Hallucinations can be caused by a number of factors.

Hypnagogic hallucination

These hallucinations occur just before falling asleep and affect a high proportion of the population: in one survey, 37% of the respondents experienced them twice a week.
The hallucinations can last from seconds to minutes; all the while, the subject usually remains aware of the true nature of the images. These may be associated with narcolepsy. Hypnagogic hallucinations are sometimes associated with brainstem abnormalities, but this is rare.

Peduncular hallucinosis

Peduncular means pertaining to the peduncle, a neural tract running to and from the pons on the brain stem. These hallucinations usually occur in the evenings, but not during drowsiness, as in the case of hypnagogic hallucination. The subject is usually fully conscious and can interact with the hallucinatory characters for extended periods of time. As in the case of hypnagogic hallucinations, insight into the nature of the images remains intact. The false images can occur in any part of the visual field and are rarely polymodal.

Delirium tremens

One of the more enigmatic forms of visual hallucination is the highly variable, possibly polymodal delirium tremens. Individuals with delirium tremens may be agitated and confused, especially in the later stages of the disease. Insight is gradually reduced with the progression of the disorder. Sleep is disturbed and occurs for a shorter period of time, with rapid eye movement sleep.

Parkinson's disease and Lewy body dementia

Parkinson's disease is linked with Lewy body dementia for their similar hallucinatory symptoms. The symptoms strike during the evening in any part of the visual field and are rarely polymodal. The segue into hallucination may begin with illusions, in which sensory perception is greatly distorted but no novel sensory information is present. These typically last for several minutes, during which time the subject may be either conscious and normal or drowsy/inaccessible. Insight into these hallucinations is usually preserved, and REM sleep is usually reduced.
Parkinson's disease is usually associated with a degraded substantia nigra pars compacta, but recent evidence suggests that PD affects a number of sites in the brain. Some places of noted degradation include the median raphe nuclei, the noradrenergic parts of the locus coeruleus, and the cholinergic neurons in the parabrachial area and pedunculopontine nuclei of the tegmentum.

Migraine coma

This type of hallucination is usually experienced during the recovery from a comatose state. The migraine coma can last for up to two days, and a state of depression is sometimes comorbid. The hallucinations occur during states of full consciousness, and insight into the hallucinatory nature of the images is preserved. It has been noted that ataxic lesions accompany the migraine coma.

Charles Bonnet syndrome

Charles Bonnet syndrome is the name given to visual hallucinations experienced by a partially or severely sight-impaired person. The hallucinations can occur at any time and can distress people of any age, as they may not initially be aware that they are hallucinating. They may fear for their own mental health initially, which may delay their sharing with carers until they start to understand it themselves. The hallucinations can frighten and disconcert as to what is real and what is not. The hallucinations can sometimes be dispersed by eye movements, or by reasoned logic such as, "I can see fire but there is no smoke and there is no heat from it" or perhaps, "We have an infestation of rats but they have pink ribbons with a bell tied on their necks." Over elapsed months and years, the hallucinations may become more or less frequent with changes in the ability to see. The length of time that the sight-impaired person can have these hallucinations varies according to the underlying speed of eye deterioration. A differential diagnosis is ophthalmopathic hallucinations.
Focal epilepsy

Visual hallucinations due to focal seizures differ depending on the region of the brain where the seizure occurs. For example, visual hallucinations during occipital lobe seizures are typically visions of brightly colored, geometric shapes that may move across the visual field, multiply, or form concentric rings, and they generally persist from a few seconds to a few minutes. They are usually unilateral and localized to one part of the visual field on the contralateral side of the seizure focus, typically the temporal field. However, unilateral visions moving horizontally across the visual field begin on the contralateral side and move toward the ipsilateral side.

Temporal lobe seizures, on the other hand, can produce complex visual hallucinations of people, scenes, animals, and more, as well as distortions of visual perception. Complex hallucinations may appear to be real or unreal, may or may not be distorted with respect to size, and may seem disturbing or affable, among other variables. One rare but notable type of hallucination is heautoscopy, a hallucination of a mirror image of one's self. These "other selves" may be perfectly still or performing complex tasks, may be an image of a younger self or the present self, and tend to be briefly present. Complex hallucinations are a relatively uncommon finding in temporal lobe epilepsy patients. Rarely, they may occur during occipital focal seizures or in parietal lobe seizures.

Distortions in visual perception during a temporal lobe seizure may include size distortion (micropsia or macropsia), distorted perception of movement (where moving objects may appear to be moving very slowly or to be perfectly still), a sense that surfaces such as ceilings and even entire horizons are moving farther away in a fashion similar to the dolly zoom effect, and other illusions. Even when consciousness is impaired, insight into the hallucination or illusion is typically preserved.
Drug-induced hallucination

Drug-induced hallucinations are caused by hallucinogens, dissociatives, and deliriants, including many drugs with anticholinergic actions and certain stimulants, which are known to cause visual and auditory hallucinations. Some psychedelics, such as lysergic acid diethylamide (LSD) and psilocybin, can cause hallucinations that range from mild to intense. Hallucinations, pseudohallucinations, or intensification of pareidolia, particularly auditory, are known side effects of opioids to different degrees. This may be associated with the absolute degree of agonism or antagonism of especially the kappa opioid receptor, the sigma receptors, the delta opioid receptor, and the NMDA receptors, or with the overall receptor activation profile: synthetic opioids such as those of the pentazocine, levorphanol, fentanyl, pethidine, and methadone families are more associated with this side effect than natural opioids like morphine and codeine or semi-synthetics like hydromorphone, among which there also appears to be a stronger correlation with relative analgesic strength. Three opioids, cyclazocine (a benzomorphan relative of pentazocine) and the two levorphanol-related morphinan opioids cyclorphan and dextrorphan, are classified as hallucinogens, and dextromethorphan as a dissociative. These drugs can also induce sleep (relating to hypnagogic hallucinations), and the pethidines in particular have atropine-like anticholinergic activity, which was possibly also a limiting factor in the psychotomimetic side effects seen when potentiating morphine, oxycodone, and other opioids with scopolamine, as in the Twilight Sleep technique and the combination drug Skophedal, which was eukodal (oxycodone), scopolamine, and ephedrine, called the "wonder drug of the 1930s" after its invention in Germany in 1928, but only rarely specially compounded today.
Sensory deprivation hallucination

Hallucinations can be caused by sensory deprivation when it occurs for prolonged periods of time, and they almost always occur in the modality being deprived (visual for blindfolded conditions or darkness, auditory for muffled conditions, etc.).

Experimentally-induced hallucinations

Anomalous experiences, such as so-called benign hallucinations, may occur in a person in a state of good mental and physical health, even in the apparent absence of a transient trigger factor such as fatigue, intoxication, or sensory deprivation. The evidence for this statement has been accumulating for more than a century. Studies of benign hallucinatory experiences go back to 1886 and the early work of the Society for Psychical Research, which suggested approximately 10% of the population had experienced at least one hallucinatory episode in the course of their life. More recent studies have validated these findings; the precise incidence found varies with the nature of the episode and the criteria of "hallucination" adopted, but the basic finding is now well supported.

Non-celiac gluten sensitivity

There is tentative evidence of a relationship with non-celiac gluten sensitivity, the so-called "gluten psychosis".

Pathophysiology

Dopaminergic and serotonergic hallucinations

It has been reported that in serotonergic hallucinations, the person maintains an awareness that they are hallucinating, unlike in dopaminergic hallucinations.

Neuroanatomy

Hallucinations are associated with structural and functional abnormalities in primary and secondary sensory cortices. Reduced grey matter in regions of the superior temporal gyrus/middle temporal gyrus, including Broca's area, is associated with auditory hallucinations as a trait, while acute hallucinations are associated with increased activity in the same regions along with the hippocampus, parahippocampus, and the right-hemispheric homologue of Broca's area in the inferior frontal gyrus.
Grey and white matter abnormalities in visual regions are associated with visual hallucinations in diseases such as Alzheimer's disease, further supporting the notion of dysfunction in sensory regions underlying hallucinations. One proposed model of hallucinations posits that over-activity in sensory regions, which is normally attributed to internal sources via feedforward networks to the inferior frontal gyrus, is interpreted as originating externally due to abnormal connectivity or functionality of the feedforward network. This is supported by cognitive studies of those with hallucinations, who have demonstrated abnormal attribution of self-generated stimuli.

Disruptions in thalamocortical circuitry may underlie the observed top-down and bottom-up dysfunction. Thalamocortical circuits, composed of projections between thalamic and cortical neurons and adjacent interneurons, underlie certain electrophysiological characteristics (gamma oscillations) that underlie sensory processing. Cortical inputs to thalamic neurons enable attentional modulation of sensory neurons. Dysfunction in sensory afferents and abnormal cortical input may result in pre-existing expectations modulating sensory experience, potentially resulting in the generation of hallucinations. Hallucinations are associated with less accurate sensory processing; more intense stimuli with less interference are necessary for accurate processing and the appearance of gamma oscillations (called "gamma synchrony"). Hallucinations are also associated with the absence of a reduction in P50 amplitude in response to the presentation of a second stimulus after an initial stimulus; this is thought to represent a failure to gate sensory stimuli, and it can be exacerbated by dopamine release agents.

Abnormal assignment of salience to stimuli may be one mechanism of hallucinations. Dysfunctional dopamine signaling may lead to abnormal top-down regulation of sensory processing, allowing expectations to distort sensory input.
Treatments

There are few treatments for many types of hallucinations. However, for those hallucinations caused by mental disease, a psychologist or psychiatrist should be consulted, and treatment will be based on the observations of those doctors. Antipsychotic and atypical antipsychotic medication may also be utilized to treat the illness if the symptoms are severe and cause significant distress. For other causes of hallucinations, there is no scientifically tested and proven treatment; however, abstaining from hallucinogenic and stimulant drugs, managing stress levels, living healthily, and getting plenty of sleep can help reduce the prevalence of hallucinations. In all cases of hallucinations, medical attention should be sought, and the provider informed of one's specific symptoms. Meta-analyses show that cognitive behavioral therapy and metacognitive training can also reduce the severity of hallucinations.

Epidemiology

Prevalence of hallucinations varies depending on the underlying medical condition, which sensory modalities are affected, age, and culture. As of 2022, auditory hallucinations are the most well studied and most common sensory modality of hallucinations, with an estimated lifetime prevalence of 9.6%. Children and adolescents have been found to experience similar rates (12.7% and 12.4%, respectively), which occur mostly during late childhood and adolescence. This compares with adults and those over 60 (with rates of 5.8% and 4.8%, respectively). For those with schizophrenia, the lifetime prevalence of hallucinations is 80%, and the estimated prevalence of visual hallucinations is 27%, compared to 79% for auditory hallucinations. A 2019 study suggested 16.2% of adults with hearing impairment experience hallucinations, with prevalence rising to 24% in the most hearing-impaired group.

A risk factor for multimodal hallucinations is prior experience of unimodal hallucinations.
In 90% of cases of psychosis, a visual hallucination occurs in combination with another sensory modality, most often auditory or somatic. In schizophrenia, multimodal hallucinations are twice as common as unimodal ones.

A 2015 review of 55 publications from 1962 to 2014 found that 16–28.6% of those experiencing hallucinations report at least some religious content in them, along with 20–60% reporting some religious content in delusions. There is some evidence for delusions being a risk factor for religious hallucinations, with 61.7% of people having experienced any delusion and 75.9% of those having experienced a religious delusion found to also experience hallucinations.

See also

References

Further reading

Johnson FH (1978). The Anatomy of Hallucinations. Chicago: Nelson-Hall Co. ISBN 0-88229-155-6.
Bentall RP, Slade PD (1988). Sensory Deception: A Scientific Analysis of Hallucination. London: Croom Helm. ISBN 0-7099-3961-2.
Aleman A, Larøi F (2008). Hallucinations: The Science of Idiosyncratic Perception. American Psychological Association (APA). ISBN 1-4338-0311-9.
Sacks O (2012). Hallucinations. New York: Alfred A. Knopf. ISBN 978-0307957245.

External links

Hearing Voices Network
"Anthropology and Hallucinations; chapter from The Making of Religion". psychanalyse-paris.com. November 4, 2006. Archived from the original on May 29, 2016. Retrieved October 4, 2016.
"The voice inside: A practical guide to coping with hearing voices"
Psychology Terms
Hallucination: A Normal Phenomenon?
Fasting-induced hallucination
Geometric visual hallucinations, Euclidean symmetry and the functional architecture of striate cortex
You act as a bridge between medical jargon and everyday language. Explain medical terms in a manner that is comprehensive yet approachable for non-experts.
I'm looking for a concise explanation of the medical term 'Hypersalivation.'
Hypersalivation, or ptyalism, also known as sialorrhea or hypersialosis, is the excessive production of saliva. It has also been defined as an increased amount of saliva in the mouth, which may also be caused by decreased clearance of saliva.

Hypersalivation can contribute to drooling if there is an inability to keep the mouth closed or difficulty in swallowing (dysphagia) the excess saliva, which can lead to excessive spitting. Hypersalivation also often precedes emesis (vomiting), where it accompanies nausea (a feeling of needing to vomit).

Causes

Excessive production

Conditions that can cause saliva overproduction include:

Rabies
Pellagra (niacin or vitamin B3 deficiency)
Gastroesophageal reflux disease, in such cases specifically called water brash (a loosely defined lay term), characterized by a sour fluid or almost tasteless saliva in the mouth
Gastroparesis (main symptoms are nausea, vomiting, and reflux)
Pregnancy
Fluoride therapy
Excessive starch intake
Anxiety (a common sign of separation anxiety in dogs)
Pancreatitis
Liver disease
Serotonin syndrome
Mouth ulcers
Oral infections
Sjögren syndrome (an early symptom in some patients)

Medications that can cause overproduction of saliva include:

aripiprazole
clozapine
pilocarpine
ketamine
potassium chlorate
risperidone
pyridostigmine

Substances that can cause hypersalivation include:

mercury
copper
organophosphates (insecticides)
arsenic
nicotine
thallium

Decreased clearance

Causes of decreased clearance of saliva include:

Infections such as tonsillitis, retropharyngeal and peritonsillar abscesses, epiglottitis, and mumps
Problems with the jaw, e.g., fracture or dislocation
Radiation therapy
Neurologic disorders such as amyotrophic lateral sclerosis, myasthenia gravis, Parkinson's disease, multiple system atrophy, rabies, bulbar paralysis, bilateral facial nerve palsy, and hypoglossal nerve palsy

Treatment

Hypersalivation is optimally treated by treating or avoiding the underlying cause.
Mouthwash and tooth brushing may have drying effects. In the palliative care setting, anticholinergics and similar drugs that would normally reduce the production of saliva, causing a dry mouth, could be considered for symptom management: scopolamine, atropine, propantheline, hyoscine, amitriptyline, glycopyrrolate. As of 2008, it is unclear whether medication is useful for people who have too much saliva due to clozapine treatment. References == External links ==
You serve as a medical tutor. Your objective is to demystify medical terms, providing thorough explanations that cater to various levels of medical knowledge.
I'm not familiar with the medical term 'Hennekam syndrome.' Could you provide some insights?
Hennekam syndrome, also known as intestinal lymphangiectasia–lymphedema–mental retardation syndrome, is an autosomal recessive disorder consisting of intestinal lymphangiectasia, facial anomalies, peripheral lymphedema, and mild to moderate levels of growth and intellectual disability. It is also known as "lymphedema–lymphangiectasia–mental retardation syndrome".

Hennekam syndrome is subdivided according to the causative genetic lesion, most (or all) of which affect the VEGF-C/VEGFR-3 signaling pathway:

Type 1 (mutations in CCBE1)
Type 2 (mutations in FAT4)
Type 3 (mutations in ADAMTS3)

The first recognition of a genetic association was with CCBE1, published by the syndrome's namesake, Raoul Hennekam. The molecular mechanism of the lymphedema phenotype in CCBE1-associated cases was identified as a diminished ability of the mutated CCBE1 to accelerate and focus the activation of the primary lymphangiogenic growth factor VEGF-C. Mutations in the FAT4 gene had previously been associated only with van Maldergem syndrome, and the pathogenetic molecular mechanism and the function of FAT4 within lymphangiogenesis are still unknown. References == External links ==
You are a resource for medical understanding. Offer detailed explanations of medical terms, making complex concepts clear and comprehensible.
I'm curious about the meaning of the medical term 'Binge eating disorder.' Can you give me some insights?
Binge eating disorder (BED) is an eating disorder characterized by frequent and recurrent binge eating episodes with associated negative psychological and social problems, but without the compensatory behaviors common to bulimia nervosa, OSFED, or the binge-purge subtype of anorexia nervosa. BED is a recently described condition, introduced to distinguish binge eating similar to that seen in bulimia nervosa but without the characteristic purging. Individuals diagnosed with bulimia nervosa and binge eating disorder exhibit similar patterns of compulsive overeating, neurobiological features of dysfunctional cognitive control and food addiction, and biological and environmental risk factors. Some professionals consider BED to be a milder form of bulimia, with the two conditions on the same spectrum.

Binge eating is one of the most prevalent eating disorders among adults, though there tends to be less media coverage and research about the disorder in comparison to anorexia nervosa and bulimia nervosa.

Signs and symptoms

Binge eating is the core symptom of BED; however, not everyone who binge eats has BED. An individual may occasionally binge eat without experiencing many of the negative physical, psychological, or social effects of BED. This may be considered disordered eating rather than a clinical disorder.
Precisely defining binge eating can be problematic; however, binge eating episodes in BED are generally described as having the following potential features: eating much faster than normal, perhaps in a short space of time; eating until feeling uncomfortably full; eating a large amount when not hungry; a subjective loss of control over how much or what is eaten; binges planned in advance, involving the purchase of special binge foods and the allocation of specific time for binging, sometimes at night; eating alone or secretly due to embarrassment over the amount of food consumed; a dazed mental state during the binge; not being able to remember what was eaten after the binge; feelings of guilt, shame or disgust following a food binge; and body image disturbance. In contrast to bulimia nervosa, binge eating episodes are not regularly followed by activities intended to compensate for the amount of food consumed, such as self-induced vomiting, laxative or enema misuse, or strenuous exercise. BED is characterized more by overeating than dietary restriction. Those with BED often have poor body image and frequently diet, but are unsuccessful due to the severity of their binge eating. Obesity is common in persons with BED, as is depression, low self-esteem, stress and boredom. Those with BED are also at risk of non-alcoholic fatty liver disease, menstrual irregularities such as amenorrhea, and gastrointestinal problems such as acid reflux and heartburn. Causes As with other eating disorders, binge eating is an "expressive disorder"—a disorder that is an expression of deeper psychological problems. People who have binge eating disorder have been found to have higher weight bias internalization, which includes low self-esteem, unhealthy eating patterns, and general body dissatisfaction. 
Binge eating disorder commonly develops as a result or side effect of depression, as it is common for people to turn to comfort foods when they are feeling down. There was resistance to giving binge eating disorder the status of a fully fledged eating disorder because many perceived it to be caused by individual choices. Previous research has focused on the relationship between body image and eating disorders, and concludes that disordered eating might be linked to rigid dieting practices. In the majority of cases of anorexia, extreme and inflexible restriction of dietary intake leads at some point to the development of binge eating, weight regain, bulimia nervosa, or a mixed form of eating disorder not otherwise specified. Binge eating may begin as individuals recover from a period of rigid eating habits. Under a strict diet that mimics the effects of starvation, the body may be primed for a new pattern of behavior: consuming a large amount of food in a relatively short period of time. Some studies show that BED aggregates in families and could be genetic; however, very few published studies on its genetics exist. Other research suggests that binge eating disorder can also be caused by environmental factors and the impact of traumatic events. One study showed that women with binge eating disorder experienced more adverse life events in the year prior to the onset of the disorder, and that binge eating disorder was positively associated with how frequently negative events occur. Additionally, the research found that individuals with binge eating disorder were more likely to have experienced physical abuse, perceived risk of physical abuse, stress, and body criticism. Other risk factors may include childhood obesity, critical comments about weight, low self-esteem, depression, and physical or sexual abuse in childhood. 
A systematic review concluded that bulimia nervosa and binge eating disorder are more impacted by family separations, a loss in their lives, and negative parent-child interactions than those with anorexia nervosa. A few studies have suggested that there could be a genetic component to binge eating disorder, though other studies have shown more ambiguous results. Studies have shown that binge eating tends to run in families, and a twin study by Bulik, Sullivan, and Kendler has shown a "moderate heritability for binge eating" of 41 percent. More research must be done before any firm conclusions can be drawn regarding the heritability of binge eating disorder. Studies have also shown that eating disorders such as anorexia and bulimia reduce coping abilities, which makes it more likely for those suffering to turn to binge eating as a coping strategy. A correlation between dietary restraint and the occurrence of binge eating has been shown in some research. While binge eaters are often believed to be lacking in self-control, the root of such behavior might instead be linked to rigid dieting practices. The relationship between strict dieting and binge eating is characterized by a vicious circle. Binge eating is more likely to occur after dieting, and vice versa. Several forms of dieting include delay in eating (e.g., not eating during the day), restriction of overall calorie intake (e.g., setting a calorie limit of 1,000 calories per day), and avoidance of certain types of food (e.g., "forbidden" foods, such as sugar, carbohydrates, etc.). Strict and extreme dieting differs from ordinary dieting. Some evidence suggests the effectiveness of moderate calorie restriction in decreasing binge eating episodes among overweight individuals with binge eating disorder, at least in the short term. In the U.S., it is estimated that 3.5% of young women and 30% to 40% of people who seek weight loss treatments can be clinically diagnosed with binge eating disorder. 
Diagnosis International Classification of Diseases BED was first included in the Diagnostic and Statistical Manual of Mental Disorders (DSM) in 1994 simply as a feature of eating disorder. In 2013 it gained formal recognition as a psychiatric condition in the DSM-5. The 2017 update to the American version of the ICD-10 includes BED under F50.81. ICD-11 may contain a dedicated entry (6B62), defining BED as frequent, recurrent episodes of binge eating (once a week or more over a period of several months) which are not regularly followed by inappropriate compensatory behaviors aimed at preventing weight gain. Diagnostic and Statistical Manual Previously considered a topic for further research exploration, binge eating disorder was included in the Diagnostic and Statistical Manual of Mental Disorders in 2013. Until 2013, binge eating disorder was categorized as an Eating Disorder Not Otherwise Specified, an umbrella category for eating disorders that don't fall under the categories for anorexia nervosa or bulimia nervosa. Because it was not a recognized psychiatric disorder in the DSM until 2013, it has been difficult to obtain insurance reimbursement for treatments. The disorder now has its own category under DSM-5, which outlines the signs and symptoms that must be present to classify a person's behavior as binge eating disorder. Studies have confirmed the high predictive value of these criteria for diagnosing BED. According to the World Health Organization's ICD-11 classification of BED, the severity of the disorder can be classified as mild (1-3 episodes/week), moderate (4-7 episodes/week), severe (8-13 episodes/week) and extreme (14 or more episodes/week). One study claims that the method for diagnosing BED is for a clinician to conduct a structured interview using the DSM-5 criteria or taking the Eating Disorder Examination. The Structured Clinical Interview takes no more than 75 minutes to complete and has a systematic approach which follows the DSM-5 criteria. 
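The severity bands above are simple frequency thresholds. As an illustrative sketch only (the function name and the sub-threshold label are my own; this is not a diagnostic tool), they can be encoded as:

```python
# Illustrative only: maps weekly binge-eating episode counts to the
# severity bands quoted in the text (mild 1-3, moderate 4-7,
# severe 8-13, extreme 14+; the text's ">14" is read here as 14 or more).
# Function name and the sub-threshold label are hypothetical.
def bed_severity(episodes_per_week: int) -> str:
    if episodes_per_week < 1:
        return "below diagnostic frequency threshold"
    if episodes_per_week <= 3:
        return "mild"
    if episodes_per_week <= 7:
        return "moderate"
    if episodes_per_week <= 13:
        return "severe"
    return "extreme"

print(bed_severity(5))  # moderate
```

Note that actual diagnosis also requires the frequency to persist over several months and a full clinical assessment; the sketch captures only the banding arithmetic.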
The Eating Disorder Examination is a semi-structured interview which identifies the frequency of binges and associated eating disorder features. Treatment Counselling and certain medication, such as lisdexamfetamine and selective serotonin reuptake inhibitors (SSRIs), may help. Some recommend a multidisciplinary approach in the treatment of the disorder. Counselling Cognitive behavioral therapy (CBT) has been demonstrated to be a more effective form of treatment for BED than behavioral weight loss programs. Fifty percent of BED individuals achieve complete remission from binge eating, and 68-90% will reduce the number of binge eating episodes they have. CBT has also been shown to be an effective method to address self-image issues and psychiatric comorbidities (e.g., depression) associated with the disorder. The goal of CBT is to interrupt binge-eating behavior, learn to create a normal eating schedule, change the perception around weight and shape, and develop positive attitudes about one's body. Although this treatment is successful in eliminating binge eating episodes, it does not lead to losing any weight. Recent reviews have concluded that psychological interventions such as psychotherapy and behavioral interventions are more effective than pharmacological interventions for the treatment of binge eating disorder. A meta-analysis concluded that psychotherapy based on CBT not only significantly improved binge-eating symptomatology but also reduced a client's BMI significantly at posttreatment and at 6 and 12 months after treatment. There is also the 12-step Overeaters Anonymous or Food Addicts in Recovery Anonymous. Behavioral weight loss treatment has been proven to be effective as a means to achieve weight loss amongst patients. Medication Lisdexamfetamine is a USFDA-approved drug that is used for the treatment of moderate to severe binge eating disorder in adults. As of 2021, it is the first and only medication formally approved for the treatment of BED. 
It is thought that lisdexamfetamine treats BED through a combination of effects on appetite and satiety, reward, and cognitive processes, including attention, impulsivity, and behavioral inhibition. Three other classes of medications are also used in the treatment of binge eating disorder: antidepressants, anticonvulsants, and anti-obesity medications. Antidepressant medications of the selective serotonin reuptake inhibitor (SSRI) class have been found to effectively reduce episodes of binge eating and reduce weight. Similarly, anticonvulsant medications such as topiramate and zonisamide may be able to effectively suppress appetite. The long-term effectiveness of medication for binge eating disorder is currently unknown. For BED patients with manic episodes, risperidone is recommended. If BED patients have bipolar depression, lamotrigine is appropriate to use. Trials of antidepressants, anticonvulsants, and anti-obesity medications suggest that these medications are superior to placebo in reducing binge eating. Medications are not considered the treatment of choice because psychotherapeutic approaches, such as CBT, are more effective than medications for binge eating disorder. A meta-analysis concluded that using medications did not reduce binge-eating episodes and BMI posttreatment at 6–12 months. This indicates a possibility of relapse after withdrawal from the medications. Medications also do not increase the effectiveness of psychotherapy, though some patients may benefit from anticonvulsant and anti-obesity medications, such as phentermine/topiramate, for weight loss. Blocking opioid receptors leads to less food intake. Additionally, bupropion and naltrexone used together may cause weight loss. Combining these alongside psychotherapies like CBT may lead to better outcomes for BED. 
Surgery Bariatric surgery has also been proposed as another approach to treat BED, and a recent meta-analysis showed that approximately two-thirds of individuals who seek this type of surgery for weight loss purposes have BED. Bariatric surgery recipients who had BED prior to receiving the surgery tend to have poorer weight-loss outcomes and are more likely to continue to exhibit eating behaviors characteristic of BED. Lifestyle Interventions Other treatments for BED include lifestyle interventions like weight training, peer support groups, and investigation of hormonal abnormalities. Prognosis Individuals with BED often have a lower overall quality of life and commonly experience social difficulties. Early behavior change is an accurate predictor of remission of symptoms later. Individuals who have BED commonly have other comorbidities such as major depressive disorder, personality disorder, bipolar disorder, substance abuse, body dysmorphic disorder, kleptomania, irritable bowel syndrome, fibromyalgia, or an anxiety disorder. Individuals may also exhibit varying degrees of panic attacks and a history of attempted suicide. While people of a normal weight may overeat occasionally, an ongoing habit of consuming large amounts of food in a short period of time may ultimately lead to weight gain and obesity. The main physical health consequences of this type of eating disorder are brought on by the weight gain resulting from calorie-laden bingeing episodes. Mental and emotional consequences of binge eating disorder include social weight stigma and emotional loss of control. Up to 70% of individuals with BED may also be obese, and therefore obesity-associated morbidities such as high blood pressure, coronary artery disease, type 2 diabetes mellitus, gastrointestinal issues (e.g., gallbladder disease), high cholesterol levels, musculoskeletal problems and obstructive sleep apnea may also be present. 
Epidemiology General The prevalence of BED in the general population is approximately 1-3%. BED usually begins between the ages of 12.4 and 24.7 years, but prevalence rates increase until the age of 40. Binge eating disorder is the most common eating disorder in adults. The limited amount of research that has been done on BED shows that rates of binge eating disorder are fairly comparable among men and women. The lifetime prevalence of binge eating disorder has been observed in studies to be 2.0 percent for men and 3.5 percent for women, higher than that of the commonly recognized eating disorders anorexia nervosa and bulimia nervosa. However, another systematic literature review found the prevalence average to be about 2.3% in women and about 0.3% in men. Lifetime prevalence rates for BED in women can range anywhere from 1.5 to 6 times higher than in men. One literature review found that point prevalence rates for BED vary from 0.1 percent to 24.1 percent depending on the sample. This same review also found that the 12-month prevalence rates vary between 0.1 percent to 8.8 percent. Recent studies found that eating disorders, including anorexia nervosa, bulimia nervosa and binge-eating disorder, are common among sexual and gender minority populations, including gay, lesbian, bisexual and transgender people. This could be due to the minority stress and discrimination this population experiences. Due to limited and inconsistent information and research on ethnic and racial differences, prevalence rates are hard to determine for BED. Rates of binge eating disorder have been found to be similar among black women, white women, and white men, while some studies have shown that binge eating disorder is more common among black women than among white women. However, the majority of the research done on BED is focused on White women. 
One literature review found no difference in BED prevalence among Hispanic, African American, and White women, while other information found that BED prevalence was highest among Hispanics, followed by Black individuals, and finally White people. Worldwide Prevalences Eating disorders have usually been considered something specific to Western countries. However, the prevalence of eating disorders is increasing in other non-Western countries. Though the research on binge eating disorders tends to be concentrated in North America, the disorder occurs across cultures. In the US, BED is present in 0.8% of male adults and 1.6% of female adults in a given year. The prevalence of BED is lower in Nordic countries compared to the rest of Europe in a study that included Finland, Sweden, Norway, and Iceland. The point prevalence ranged from 0.4 to 1.5 percent and the lifetime prevalence ranged from 0.7 to 5.8 percent for BED in women. In a study that included Argentina, Brazil, Chile, Colombia, Mexico, and Venezuela, the point prevalence for BED was 3.53 percent. Therefore, this particular study found that the prevalence for BED is higher in these Latin American countries compared to Western countries. The prevalence of BED in Europe ranges from <1 to 4 percent. Co-morbidities BED is co-morbid with diabetes, hypertension, previous stroke, and heart disease in some individuals. In people who have obsessive-compulsive disorder or bipolar I or II disorders, BED lifetime prevalence was found to be higher. Additionally, 30 to 40 percent of individuals seeking treatment for weight loss can be diagnosed with binge eating disorder. Underreporting in men Eating disorders are oftentimes underreported in men. Underreporting could be a result of measurement bias due to how eating disorders are defined. The current definition for eating disorders focuses on thinness. 
However, eating disorders in men tend to center on muscularity and would therefore warrant a different measurement definition. Further research should focus on including more men in samples, since previous research has focused primarily on women. Frequency BED is the most common eating disorder, with 47% of people with eating disorders having BED, 3% having anorexia nervosa, and 12% having bulimia nervosa. In the United States, it has been estimated that 2.8 million people are affected by BED. Over 57% of people with BED are female, and it often begins in the late teens or early 20s. History The disorder was first described in 1959 by psychiatrist and researcher Albert Stunkard as "night eating syndrome" (NES). The term "binge eating" was coined to describe the same bingeing-type eating behavior but without the exclusive nocturnal component. There is generally less research on binge eating disorder in comparison to anorexia nervosa and bulimia nervosa. See also Prader–Willi syndrome References Bibliography External links Binge Eating Disorder on Medscape Binge Eating Disorder on National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK)
You serve as a medical tutor. Your objective is to demystify medical terms, providing thorough explanations that cater to various levels of medical knowledge.
Could you please explain the term 'Klumpke paralysis' in simple language?
Klumpke's paralysis is a variety of partial palsy of the lower roots of the brachial plexus. The brachial plexus is a network of spinal nerves that originates in the back of the neck, extends through the axilla (armpit), and gives rise to nerves to the upper limb. The paralytic condition is named after Augusta Déjerine-Klumpke. Signs and symptoms Symptoms include intrinsic minus hand deformity, paralysis of intrinsic hand muscles, and numbness in the C8/T1 dermatome distribution. Involvement of T1 may result in Horner's syndrome, with ptosis and miosis. There may be weakness or an inability to use specific muscles of the shoulder or arm. It can be contrasted with Erb-Duchenne palsy, which affects C5 and C6. Cause Klumpke's paralysis is a form of paralysis involving the muscles of the forearm and hand, resulting from a brachial plexus injury in which the eighth cervical (C8) and first thoracic (T1) nerves are injured either before or after they have joined to form the lower trunk. The subsequent paralysis affects, principally, the intrinsic muscles of the hand (notably the interossei, thenar and hypothenar muscles) and the flexors of the wrist and fingers (notably flexor carpi ulnaris and the ulnar half of the flexor digitorum profundus). The classic presentation of Klumpke's palsy is the "claw hand", where the forearm is supinated, the wrist extended and the fingers flexed. If Horner syndrome is present, there is miosis (constriction of the pupils) in the affected eye. The injury can result from difficulties in childbirth. The most common aetiological mechanism is a traumatic vaginal delivery. The risk is greater when the mother is small or when the infant is of large weight. Risk of injury to the lower brachial plexus results from traction on an abducted arm, as with an infant being pulled from the birth canal by an extended arm above the head or with someone catching themselves by a branch as they fall from a tree. 
Lower brachial plexus injuries should be distinguished from upper brachial plexus injuries, which can also result from birth trauma but give a different syndrome of weakness known as Erb's palsy. Other trauma, such as motorcycle accidents, that produces similar spinal cord injuries to C8 and T1 also shows the same symptoms of Klumpke's paralysis. Diagnosis Electromyography and nerve conduction velocity testing can help to diagnose the location and severity of the lesion. Otherwise, the diagnosis is made clinically after a thorough neurologic exam. Treatment Treatment effectiveness varies depending on the initial severity of the injury. Physiotherapy is used to increase muscle strength and improve muscle function. Electrical modalities such as electric nerve stimulation can also be used. Occupational therapy provides exercises and coping mechanisms to improve the patient's ability to perform activities of daily living. Goals of therapy are to improve tactile sensation, proprioception, and range of motion. Acute treatment of a severe injury will involve repositioning and splinting or casting of the extremity. Epidemiology Klumpke palsy is listed as a rare disease by the Office of Rare Diseases (ORD) of the National Institutes of Health (NIH). This means that Klumpke palsy, or a subtype of Klumpke palsy, affects fewer than 200,000 people in the US population. See also Dystocia Erb's palsy References == External links ==
You are a medical knowledge base. Your task is to elucidate medical terminology, offering insights into their meanings, origins, and applications.
I'm seeking clarification on the medical term 'Bacillary dysentery.' Could you explain it?
Bacillary dysentery is a type of dysentery, and is a severe form of shigellosis. Bacillary dysentery is associated with species of bacteria from the family Enterobacteriaceae. The term is usually restricted to Shigella infections. Shigellosis is caused by one of several types of Shigella bacteria. Three species are associated with bacillary dysentery: Shigella sonnei, Shigella flexneri and Shigella dysenteriae. A study in China indicated that Shigella flexneri 2a was the most common serotype. Salmonellosis caused by Salmonella enterica (serovar Typhimurium) has also been described as a cause of bacillary dysentery, though this definition is less common. It is sometimes listed as an explicit differential diagnosis of bacillary dysentery, as opposed to a cause. Bacillary dysentery should not be confused with diarrhea caused by other bacterial infections. One characteristic of bacillary dysentery is blood in stool, which is the result of invasion of the mucosa by the pathogen. History The bacterium causing shigellosis is named after Kiyoshi Shiga, the Japanese researcher who discovered it in 1897. Pathogenesis Transmission is fecal-oral and is remarkable for the small number of organisms that may cause disease (10 ingested organisms cause illness in 10% of volunteers, and 500 organisms cause disease in 50% of volunteers). Shigella bacteria invade the intestinal mucosal cells but do not usually go beyond the lamina propria. Dysentery is caused when the bacteria escape the epithelial cell phagolysosome, multiply within the cytoplasm, and destroy host cells. Shiga toxin causes hemorrhagic colitis and hemolytic-uremic syndrome by damaging endothelial cells in the microvasculature of the colon and the glomeruli, respectively. In addition, chronic arthritis secondary to S. 
flexneri infection, called reactive arthritis, may be caused by a bacterial antigen; the occurrence of this syndrome is strongly linked to HLA-B27 genotype, but the immunologic basis of this reaction is not understood. Diagnosis Specimen: Fresh stool is collected. Culture: Specimen is inoculated on selective media like MacConkey's agar, DCA, or XLD agar. Selenite F broth (0.4%) is used as an enrichment medium, which permits the rapid growth of enteric pathogens while inhibiting the growth of normal flora like E. coli for 6–8 hours. Subculture is done on the solid media from Selenite F broth. All the solid media are incubated at 37 °C for 24 hours. Cultural characteristics: Colorless non-lactose-fermenting (NLF) colonies appear on MacConkey's agar, which are further confirmed by Gram staining, hanging drop preparation and biochemical reactions. Treatment Dysentery is initially managed by maintaining fluid intake using oral rehydration therapy. If this treatment cannot be adequately maintained due to vomiting or the profuseness of diarrhea, hospital admission may be required for intravenous fluid replacement. Ideally, no antimicrobial therapy should be administered until microbiological microscopy and culture studies have established the specific infection involved. When laboratory services are not available, it may be necessary to administer a combination of drugs, including an amoebicidal drug to kill the parasite and an antibiotic to treat any associated bacterial infection. Anyone with bloody diarrhea needs immediate medical help. Treatment often starts with an oral rehydrating solution—water mixed with salt and carbohydrates—to prevent dehydration. (Emergency relief services often distribute inexpensive packets of sugars and mineral salts that can be mixed with clean water and used to restore lifesaving fluids in dehydrated children gravely ill from dysentery.) If Shigella is suspected and it is not too severe, the doctor may recommend letting it run its course—usually less than a week. 
The patient will be advised to replace fluids lost through diarrhea. If the infection is severe, the doctor may prescribe antibiotics, such as ciprofloxacin or TMP-SMX (Bactrim). Unfortunately, many strains of Shigella are becoming resistant to common antibiotics, and effective medications are often in short supply in developing countries. If necessary, a doctor may have to reserve antibiotics for those at highest risk for death, including young children, people over 50, and anyone suffering from dehydration or malnutrition. No vaccine is available. There are several Shigella vaccine candidates in various stages of development that could reduce the incidence of dysentery in endemic countries, as well as in travelers with travelers' diarrhea. References == External links ==
You are a guide to medical language. Please provide in-depth and understandable explanations of medical terms, highlighting their significance in healthcare.
I'm not familiar with the medical term 'Mepivacaine.' Could you provide some insights?
Mepivacaine is a local anesthetic of the amide type. Mepivacaine has a reasonably rapid onset (more rapid than that of procaine) and a medium duration of action (longer than that of procaine) and is marketed under various trade names including Carbocaine and Polocaine. Mepivacaine became available in the United States in the 1960s. Mepivacaine is used in infiltration and local anesthesia. It is supplied as the hydrochloride salt of the racemate, which consists of R(-)-mepivacaine and S(+)-mepivacaine in equal proportions. These two enantiomers have markedly different pharmacokinetic properties. Mepivacaine was originally synthesized in Sweden at the laboratory of Bofors Nobelkrut in 1956. References External links Mepivacaine at RxList
You serve as an encyclopedia of medical terms. Deliver clear, detailed explanations, making medical language understandable to a diverse audience.
Could you provide a brief overview of 'Tracheobronchitis' in a medical context?
Tracheobronchitis is inflammation of the trachea and bronchi. It is characterised by a cough, fever, and purulent (containing pus) sputum and is therefore suggestive of pneumonia. It is classified as a respiratory tract infection. Tracheobronchitis is often a hospital-acquired infection, particularly in an intensive care setting, associated with the use of mechanical ventilators and the need for inserting a tracheal tube. In these cases it is known as ventilator-associated tracheobronchitis. The infection begins in the trachea, where it colonises, and spreads to the bronchi. The characteristic increased sputum produced can give problems in the removal of the tracheal tube (extubation). Tracheobronchial infections are responsible for up to 80% of exacerbations in chronic obstructive pulmonary disease. Causes Ventilator-associated tracheobronchitis is a hospital-acquired infection usually contracted in an intensive care unit when a mechanical ventilator is used. The insertion of a tracheal tube can cause an infection in the trachea, which then colonises and spreads to the bronchi. If there is further spread and development into the lungs this will give rise to ventilator-associated pneumonia. Antibiotics are recommended to prevent this development, but only as a short-term measure, as antibiotic resistance is already high in some of the pathogens involved. This does not always progress to pneumonia. Fungal tracheobronchitis can be invasive into the lung tissue or pulmonary artery and cause a severe lung infection. The extra secreted mucus from tracheobronchitis plugs the airways, allowing the fungal pathogens to lodge and multiply. Local damage to the tracheal wall that can be caused by mechanical ventilation is a risk factor for this. Respiratory failure may develop from this infection. Herpetic tracheobronchitis is caused by herpes simplex virus and causes small ulcers covered in exudate to form on the mucous membranes. 
The exudate contains necrotic cells from the mucosal epithelium. In the course of cystic fibrosis the lungs are affected. Thickened mucus secretions block the airways, making infection possible. The recurrence of tracheobronchitis presents a major problem. In chronic obstructive pulmonary disease tracheobronchial infections are responsible for up to 80% of exacerbations. In dogs Tracheobronchitis often affects dogs, particularly those confined in kennels where their persistent barking worsens the inflammation. This canine infectious tracheobronchitis is more usually known as kennel cough. == References ==
You are a facilitator of medical knowledge. Provide thorough and accessible explanations of medical terms, catering to both specialists and non-specialists.
Please help me grasp the concept behind the medical term 'Animal.'
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, can reproduce sexually, and go through an ontogenetic stage in which their body consists of a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million animal species in total. Animals range in length from 8.5 micrometres (0.00033 in) to 33.6 metres (110 ft). They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology. Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. 
In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa. Humans make use of many animal species, such as for food (including meat, milk, and eggs), for materials (such as leather and wool), as pets, and as working animals including for transport. Dogs have been used in hunting, as have birds of prey, while many terrestrial and aquatic animals were hunted for sport. Nonhuman animals have appeared in art from the earliest times and are featured in mythology and religion. Etymology The word "animal" comes from the Latin animalis, meaning having breath, having soul or living being. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term "metazoa" is from Ancient Greek μετα (meta, used to mean "later") and ζῷᾰ (zōia, plural of ζῷον zōion "animal"). Characteristics Animals have several characteristics that set them apart from other living things. Animals are eukaryotic and multicellular. Unlike plants and algae, which produce their own nutrients, animals are heterotrophic, feeding on organic material and digesting it internally. With very few exceptions, animals respire aerobically. All animals are motile (able to spontaneously move their bodies) during at least part of their life cycle, but some animals, such as sponges, corals, mussels, and barnacles, later become sessile. The blastula is a stage in embryonic development that is unique to animals, allowing cells to be differentiated into specialised tissues and organs. Structure All animals are composed of cells, surrounded by a characteristic extracellular matrix composed of collagen and elastic glycoproteins.
During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised, making the formation of complex structures possible. This may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Animal cells uniquely possess the cell junctions called tight junctions, gap junctions, and desmosomes. With few exceptions—in particular, the sponges and placozoans—animal bodies are differentiated into tissues. These include muscles, which enable locomotion, and nerve tissues, which transmit signals and coordinate the body. Typically, there is also an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Reproduction and development Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs. Repeated instances of mating with a close relative during sexual reproduction generally lead to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding.
Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids. Ecology Animals are categorised into ecological groups depending on how they obtain or consume organic material, including carnivores, herbivores, omnivores, detritivores, and parasites. Interactions between animals form complex food webs. In carnivorous or omnivorous species, predation is a consumer–resource interaction where a predator feeds on another organism (called its prey). Selective pressures imposed on one another lead to an evolutionary arms race between predator and prey, resulting in various anti-predator adaptations. Almost all multicellular predators are animals. Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the host's living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles primarily eating sponges. Most animals rely on the biomass and energy produced by plants through photosynthesis. Herbivores eat plant material directly, while carnivores and other animals on higher trophic levels typically acquire it indirectly by eating other animals. Animals oxidize carbohydrates, lipids, proteins, and other biomolecules, which allows the animal to grow and to sustain biological processes such as locomotion. Animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter of archaea and bacteria produced in these locations through chemosynthesis (by oxidizing inorganic compounds, such as hydrogen sulfide). Animals originally evolved in the sea.
Lineages of arthropods colonised land around the same time as land plants, probably between 510 and 471 million years ago during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move on to land in the late Devonian, about 375 million years ago. Animals occupy virtually all of Earth's habitats and microhabitats, including salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of animals, plants, fungi and rocks. Animals are, however, not particularly heat tolerant; very few of them can survive at constant temperatures above 50 °C (122 °F). Only very few species of animals (mostly nematodes) inhabit the most extreme cold deserts of continental Antarctica. Diversity Size The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to 33.6 metres (110 ft) long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to 10.67 metres (35.0 ft) long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes. Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 µm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 µm when fully grown. Numbers and habitats The following table lists estimated numbers of described extant species for all the animal groups, along with their principal habitats (terrestrial, fresh water, and marine), and free-living or parasitic ways of life. Species estimates shown here are based on numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly.
For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species—including those not yet described—was calculated to be about 7.77 million in 2011. Evolutionary origin Animals are found as long ago as the Ediacaran biota, towards the end of the Precambrian, and possibly somewhat earlier. It had long been doubted whether these life-forms included animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia establishes their nature. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialized for aerobic metabolism they became fully dependent on oxygen in their environments. Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may however be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do. Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago.
Early fossils that might represent animals appear for example in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from 1 gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms. However, similar tracks are produced today by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 gya rocks in North America, in 1.5 gya rocks in Australia and North America, and in 1.7 gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures. Phylogeny Animals are monophyletic, meaning they are derived from a common ancestor. Animals are sister to the Choanoflagellata, with which they form the Choanozoa. The most basal animals, the Porifera, Ctenophora, Cnidaria, and Placozoa, have body plans that lack bilateral symmetry. Their relationships are still disputed; the sister group to all other animals could be the Porifera or the Ctenophora, both of which lack Hox genes, important in body plan development. These genes are found in the Placozoa and the higher animals, the Bilateria. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian.
25 of these are novel core gene groups, found only in animals; of those, 8 are for essential components of the Wnt and TGF-beta signalling pathways which may have enabled animals to become multicellular by providing a pattern for the body's system of axes (in three dimensions), and another 7 are for transcription factors including homeodomain proteins involved in the control of development. The phylogenetic tree indicates approximately how many millions of years ago (mya) the lineages split. Non-Bilateria Several animal phyla lack bilateral symmetry. Among these, the sponges (Porifera) probably diverged first, representing the oldest animal phylum. Sponges lack the complex organization found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues. They typically feed by drawing in water through pores. The Ctenophora (comb jellies) and Cnidaria (which includes jellyfish, sea anemones, and corals) are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. They are sometimes placed together in the group Coelenterata because of common traits, not because of close relationships. Animals in both phyla have distinct tissues, but these are not organised into organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans are similar, but they do not have a permanent digestive chamber. Bilateria The remaining animals, the great majority—comprising some 29 phyla and over a million species—form a clade, the Bilateria, which have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and there is an internal body cavity, a coelom or pseudocoelom.
These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant species have evolved which have lost one or more of each of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures. Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. The basalmost bilaterians are the Xenacoelomorpha. Protostomes and deuterostomes Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm.
In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome phyla are the Echinodermata and the Chordata. Echinoderms are exclusively marine and include starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The deuterostomes also include the Hemichordata (acorn worms). Ecdysozoa The Ecdysozoa are protostomes, named after their shared trait of ecdysis, growth by moulting. They include the largest animal phylum, the Arthropoda, which contains insects, spiders, crabs, and their kin. All of these have a body divided into repeating segments, typically with paired appendages. Two smaller phyla, the Onychophora and Tardigrada, are close relatives of the arthropods and share these traits. The ecdysozoans also include the Nematoda or roundworms, perhaps the second largest animal phylum. Roundworms are typically microscopic, and occur in nearly every environment where there is water; some are important parasites. Smaller phyla related to them are the Nematomorpha or horsehair worms, and the Kinorhyncha, Priapulida, and Loricifera. These groups have a reduced coelom, called a pseudocoelom. Spiralia The Spiralia are a large group of protostomes that develop by spiral cleavage in the early embryo. The Spiralia's phylogeny has been disputed, but it contains a large clade, the superphylum Lophotrochozoa, and smaller groups of phyla such as the Rouphozoa which includes the gastrotrichs and the flatworms. All of these are grouped as the Platytrochozoa, which has a sister group, the Gnathifera, which includes the rotifers. The Lophotrochozoa includes the molluscs, annelids, brachiopods, nemerteans, bryozoa and entoprocts.
The molluscs, the second-largest animal phylum by number of described species, include snails, clams, and squids, while the annelids are the segmented worms, such as earthworms, lugworms, and leeches. These two groups have long been considered close relatives because they share trochophore larvae. History of classification In the classical era, Aristotle divided animals, based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, 2 legs, rational soul) down through the live-bearing tetrapods (with blood, 4 legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul) down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch, and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about. In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos (a chaotic mess) and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish).
By 1809, in his Philosophie Zoologique, Lamarck had created 9 phyla apart from vertebrates (where he still had 4 phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians. In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ("branches" with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes (radiata) (echinoderms, cnidaria and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia. In human culture Practical uses The human population exploits a large number of other animal species for food, both of domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food. A smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates including cephalopods, crustaceans, and bivalve or gastropod molluscs are hunted or farmed for food. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world.
Animal fibres such as wool are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals including cattle and horses have been used for work and transport from the first days of agriculture. Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since their discovery in the 18th century. Some medicines such as the cancer drug Yondelis are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts. A wide variety of animals are kept as pets, from invertebrates such as tarantulas and octopuses to insects including praying mantises, reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots. However, the most kept pet species are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans, and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are hunted for sport. Symbolic uses Animals have been the subjects of art from the earliest times, both historical, as in Ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros, and George Stubbs's c. 1762 horse portrait Whistlejacket.
Insects, birds and mammals play roles in literature and film, such as in giant bug movies. Animals including insects and mammals feature in mythology and religion. In both Japan and Europe, a butterfly was seen as the personification of a person's soul, while the scarab beetle was sacred in ancient Egypt. Among the mammals, cattle, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship. The signs of the Western and Chinese zodiacs are based on animals. See also Animal attacks Animal coloration Ethology Fauna List of animal names Lists of organisms by population Notes References External links Tree of Life Project Archived 12 June 2011 at the Wayback Machine Animal Diversity Web – University of Michigan's database of animals ARKive – multimedia database of endangered/protected species
You are a conduit for medical education. Your task is to offer detailed explanations of medical terms, ensuring they are understandable and precise.
The term 'Fontanelle' keeps coming up in medical discussions. What does it stand for?
A fontanelle (or fontanel) (colloquially, soft spot) is an anatomical feature of the infant human skull comprising soft membranous gaps (sutures) between the cranial bones that make up the calvaria of a fetus or an infant. Fontanelles allow for stretching and deformation of the neurocranium both during birth and later as the brain expands faster than the surrounding bone can grow. Premature complete ossification of the sutures is called craniosynostosis. After infancy, the anterior fontanelle is known as the bregma. Structure An infant's skull consists of five main bones: two frontal bones, two parietal bones, and one occipital bone. These are joined by fibrous sutures, which allow movement that facilitates childbirth and brain growth. The posterior fontanelle is triangle-shaped. It lies at the junction between the sagittal suture and lambdoid suture. At birth, the skull features a small posterior fontanelle with an open area covered by a tough membrane, where the two parietal bones adjoin the occipital bone (at the lambda). The posterior fontanelle ossifies within 6–8 weeks after birth. This is called intramembranous ossification. The mesenchymal connective tissue turns into bone tissue. The anterior fontanelle is a diamond-shaped membrane-filled space located between the two frontal and two parietal bones of the developing fetal skull. It persists until approximately 18 months after birth. It is at the junction of the coronal suture and sagittal suture. The fetal anterior fontanelle may be palpated until 18 months. In cleidocranial dysostosis, however, it is often late in closing at 8–24 months or may never close. Examination of an infant includes palpating the anterior fontanelle.
Two smaller fontanelles are located on each side of the head, more anteriorly the sphenoidal or anterolateral fontanelle (between the sphenoid, parietal, temporal, and frontal bones) and more posteriorly the mastoid or posterolateral fontanelle (between the temporal, occipital, and parietal bones). During birth, fontanelles enable the bony plates of the skull to flex, allowing the child's head to pass through the birth canal. The ossification of the bones of the skull causes the anterior fontanelle to close over by 9 to 18 months. The sphenoidal and posterior fontanelles close during the first few months of life. The closures eventually form the sutures of the neurocranium. Other than the anterior and posterior fontanelles, the mastoid fontanelle and the sphenoidal fontanelle are also significant. Closure In humans, the sequence of fontanelle closure is as follows: The posterior fontanelle generally closes 2 to 3 months after birth; The sphenoidal fontanelle is the next to close around 6 months after birth; The mastoid fontanelle closes next from 6 to 18 months after birth; and The anterior fontanelle is generally the last to close between 12 and 18 months. Clinical significance The fontanelle may pulsate, and although the precise cause of this is not known, it is normal and seems to echo the heartbeat, perhaps via the arterial pulse within the brain vasculature, or in the meninges. This pulsating action is how the soft spot got its name – fontanelle is borrowed from the old French word fontenele, which is a diminutive of fontaine, meaning "spring". It is assumed that the term spring is used because of the analogy of the dent in a rock or earth where a spring arises. Parents may worry that their infant may be more prone to injury at the fontanelles. In fact, although they may colloquially be called "soft-spots", the membrane covering the fontanelles is extremely tough and difficult to penetrate. Fontanelles allow the infant brain to be imaged using ultrasonography.
Once they are closed, most of the brain is inaccessible to ultrasound imaging, because the bony skull presents an acoustic barrier. Disorders Bulging A very tense or bulging anterior fontanelle indicates raised intracranial pressure. Increased cranial pressure in infants may cause the fontanelles to bulge or the head to begin to enlarge abnormally. It can occur due to: Craniosynostosis – premature fusion of the cranial sutures Encephalitis – swelling (inflammation) of the brain, most often due to infections Hydrocephalus – a buildup of fluid inside the skull Meningitis – infection of the membranes covering the brain Shaken baby syndrome Sunken A sunken (also called "depressed") fontanelle indicates dehydration or malnutrition. Enlarged The fontanelles may be enlarged, may be slow to close, or may never close, most commonly due to causes like: Down syndrome Hydrocephalus Intrauterine growth restriction (IUGR) Premature birth. Rarer causes include: Achondroplasia Apert syndrome Cleidocranial dysostosis Congenital rubella Neonatal hypothyroidism Osteogenesis imperfecta Rickets Third Sometimes there is a third bigger fontanelle other than posterior and anterior ones in a newborn. In one study, the frequency of third fontanelles in an unselected population of newborn infants was 6.3%. It is very common in Down syndrome and some congenital infections. If present, the physician should rule out serious conditions associated with the third fontanelle. Other animals Primates In apes the fontanelles fuse soon after birth. In chimpanzees the anterior fontanelle is fully closed by 3 months of age. Dogs One of the more serious problems that can affect canines is known as an "open fontanelle", which occurs when the skull bones at the top of the head fail to close. The problem is often found in conjunction with hydrocephalus, which is a condition in which too much fluid is found within and around the brain, placing pressure on the brain and surrounding tissues.
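The closure timetable described in the Closure section amounts to a small lookup table. As an illustrative sketch only (the dictionary, function name, and status labels below are invented for the example and are not clinical terminology or guidance), it can be encoded like this:

```python
# Typical fontanelle closure windows, in months after birth, taken from the
# figures quoted in the text: posterior 2-3, sphenoidal ~6, mastoid 6-18,
# anterior 12-18. Illustrative only, not a clinical decision tool.
CLOSURE_WINDOWS_MONTHS = {
    "posterior": (2, 3),
    "sphenoidal": (6, 6),
    "mastoid": (6, 18),
    "anterior": (12, 18),
}

def closure_status(fontanelle: str, age_months: float) -> str:
    """Describe a still-open fontanelle relative to its typical closure window."""
    early, late = CLOSURE_WINDOWS_MONTHS[fontanelle]
    if age_months < early:
        return "open (expected)"
    if age_months <= late:
        return "open (within typical closure window)"
    return "open (later than typical)"

print(closure_status("anterior", 8))   # -> open (expected)
print(closure_status("posterior", 5))  # -> open (later than typical)
```

Encoding the windows as (earliest, latest) pairs keeps the comparison logic trivial and makes the table easy to check against the prose.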
Often the head will appear dome-shaped, and the open fontanelle is noticeable as a "soft spot" on the top of the dog's head. The fluid-filled spaces within the brain, known as ventricles, also become swollen. The increased pressure damages or prevents the development of brain tissue. Not all open fontanelles are connected with hydrocephalus. In many young dogs the skull bones are not fused at birth, but instead will close slowly over a three- to six-month period. Occasionally these bones fail to close, but the dog is still healthy. In these cases, however, the dog's owners need to be very careful, since any injury or bumps to the animal's head could cause significant brain damage, as well as conditions like epilepsy. An open fontanelle, known as a "molera", is a recognized feature of the Chihuahua breed. The American Kennel Club breed standard states that the skull of the Chihuahua should be domed, with or without the molera being present. However, the Fédération Cynologique Internationale (FCI) standard for the Chihuahua lists an open fontanelle as a disqualification. Additional images == References ==
You act as a bridge between medical jargon and everyday language. Explain medical terms in a manner that is comprehensive yet approachable for non-experts.
I need a basic explanation for the medical term 'Xerostomia.'
Xerostomia, also known as dry mouth, is dryness in the mouth, which may be associated with a change in the composition of saliva, or reduced salivary flow, or have no identifiable cause. This symptom is very common and is often seen as a side effect of many types of medication. It is more common in older people (mostly because this group tends to take several medications) and in persons who breathe through their mouths. Dehydration, radiotherapy involving the salivary glands, chemotherapy and several diseases can cause reduced salivation (hyposalivation), or a change in saliva consistency and hence a complaint of xerostomia. Sometimes there is no identifiable cause, and there may sometimes be a psychogenic reason for the complaint. Definition Xerostomia is the subjective sensation of dry mouth, which is often (but not always) associated with hypofunction of the salivary glands. The term is derived from the Greek words ξηρός (xeros) meaning "dry" and στόμα (stoma) meaning "mouth". A drug or substance that increases the rate of salivary flow is termed a sialogogue. Hyposalivation is a clinical diagnosis that is made based on the history and examination, but reduced salivary flow rates have been given objective definitions. Salivary gland hypofunction has been defined as any objectively demonstrable reduction in whole and/or individual gland flow rates. An unstimulated whole saliva flow rate in a normal person is 0.3–0.4 ml per minute, and below 0.1 ml per minute is significantly abnormal. A stimulated saliva flow rate less than 0.5 ml per gland in 5 minutes or less than 1 ml per gland in 10 minutes is decreased. The term subjective xerostomia is sometimes used to describe the symptom in the absence of any clinical evidence of dryness. Xerostomia may also result from a change in composition of saliva (from serous to mucous). Salivary gland dysfunction is an umbrella term for the presence of xerostomia, salivary gland hyposalivation, and hypersalivation.
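The flow-rate figures in the Definition section are simple arithmetic thresholds, which can be sketched as a small classifier. This is an illustrative sketch encoding only the numbers quoted above; the function names and category labels are invented for the example and are not standard clinical terminology.

```python
# Thresholds from the text: normal unstimulated whole-saliva flow is
# 0.3-0.4 ml/min, and below 0.1 ml/min is significantly abnormal.
NORMAL_UNSTIMULATED_LOW = 0.3    # ml/min, lower bound of the normal range
ABNORMAL_UNSTIMULATED = 0.1      # ml/min, below this is significantly abnormal

def classify_unstimulated_flow(ml_per_min: float) -> str:
    """Place an unstimulated whole-saliva flow rate into a rough category."""
    if ml_per_min < ABNORMAL_UNSTIMULATED:
        return "significantly abnormal"
    if ml_per_min < NORMAL_UNSTIMULATED_LOW:
        return "reduced"
    return "normal"

def stimulated_flow_decreased(ml_per_gland: float, minutes: float) -> bool:
    """Per-gland stimulated flow: <0.5 ml in 5 min or <1 ml in 10 min is
    decreased. Both criteria reduce to the same rate, 0.1 ml/min per gland."""
    return ml_per_gland / minutes < 0.1

print(classify_unstimulated_flow(0.35))  # -> normal
print(stimulated_flow_decreased(0.4, 5))  # -> True (0.08 ml/min per gland)
```

Note that the two stated stimulated-flow criteria (0.5 ml in 5 minutes, 1 ml in 10 minutes) both work out to 0.1 ml per minute per gland, which is why a single rate comparison suffices.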
Signs and symptoms Hyposalivation may give the following signs and symptoms: Dental caries (xerostomia-related caries) – Without the buffering effects of saliva, tooth decay becomes a common feature and may progress much more aggressively than it would otherwise ("rampant caries"). It may affect tooth surfaces that are normally spared, e.g., cervical caries and root surface caries. This is often seen in patients who have had radiotherapy involving the major salivary glands, termed radiation-induced caries. Therefore, it is important that any products used in managing dry mouth symptoms are sugar-free, as the presence of sugars in the mouth supports the growth of oral bacteria, resulting in acid production and development of dental caries. Acid erosion – Saliva acts as a buffer and helps to prevent demineralization of teeth. Oral candidiasis – A loss of the antimicrobial actions of saliva may also lead to opportunistic infection with Candida species. Ascending (suppurative) sialadenitis – an infection of the major salivary glands (usually the parotid gland) that may be recurrent. It is associated with hyposalivation, as bacteria are able to enter the ductal system against the diminished flow of saliva. There may be swollen salivary glands even without acute infection, possibly caused by autoimmune involvement. Dysgeusia – altered taste sensation (e.g., a metallic taste) and dysosmia, altered sense of smell. Intraoral halitosis – possibly due to increased activity of halitogenic biofilm on the posterior dorsal tongue (although dysgeusia may cause a complaint of nongenuine halitosis in the absence of hyposalivation). Burning mouth syndrome – a burning or tingling sensation in the mouth. Saliva that appears thick or ropey. Mucosa that appears dry. A lack of saliva pooling in the floor of the mouth during examination. Dysphagia – difficulty swallowing and chewing, especially when eating dry foods. Food may stick to the tissues during eating.
The tongue may stick to the palate, causing a clicking noise during speech, or the lips may stick together. Gloves or a dental mirror may stick to the tissues. Fissured tongue with atrophy of the filiform papillae and a lobulated, erythematous appearance of the tongue. Saliva cannot be "milked" (expressed) from the parotid duct. Difficulty wearing dentures, e.g., when swallowing or speaking. There may be generalized mucosal soreness and ulceration of the areas covered by the denture. Mouth soreness and oral mucositis. Lipstick or food may stick to the teeth. A need to sip drinks frequently while talking or eating. Dry, sore, and cracked lips and angles of mouth. Thirst. However, sometimes the clinical findings do not correlate with the symptoms experienced. For example, a person with signs of hyposalivation may not complain of xerostomia. Conversely, a person who reports experiencing xerostomia may not show signs of reduced salivary secretions (subjective xerostomia). In the latter scenario, there are often other oral symptoms suggestive of oral dysesthesia ("burning mouth syndrome"). Some symptoms outside the mouth may occur together with xerostomia. These include: Xerophthalmia (dry eyes). Inability to cry. Blurred vision. Photophobia (light intolerance). Dryness of other mucosae, e.g., nasal, laryngeal, and/or genital. Burning sensation. Itching or grittiness. Dysphonia (voice changes). There may also be other systemic signs and symptoms if there is an underlying cause such as Sjögren's syndrome, for example, joint pain due to associated rheumatoid arthritis. Cause The differential of hyposalivation significantly overlaps with that of xerostomia. A reduction in saliva production to about 50% of the normal unstimulated level will usually result in the sensation of dry mouth. Altered saliva composition may also be responsible for xerostomia. Physiologic Salivary flow rate is decreased during sleep, which may lead to a transient sensation of dry mouth upon waking.
This disappears with eating or drinking or with oral hygiene. When associated with halitosis, this is sometimes termed "morning breath". Dry mouth is also a common sensation during periods of anxiety, probably owing to enhanced sympathetic drive. Dehydration is known to cause hyposalivation, the result of the body trying to conserve fluid. Physiologic age-related changes in salivary gland tissues may lead to a modest reduction in salivary output and partially explain the increased prevalence of xerostomia in older people. However, polypharmacy is thought to be the major cause in this group, with no significant decreases in salivary flow rate being likely to occur through aging alone. Drug induced Aside from physiologic causes of xerostomia, iatrogenic effects of medications are the most common cause. A medication which is known to cause xerostomia may be termed xerogenic. Over 400 medications are associated with xerostomia. Although drug-induced xerostomia is commonly reversible, the conditions for which these medications are prescribed are frequently chronic. The likelihood of xerostomia increases in relation to the total number of medications taken, whether the individual medications are xerogenic or not. The sensation of dryness usually starts shortly after starting the offending medication or after increasing the dose. Anticholinergic, sympathomimetic, or diuretic drugs are usually responsible. Sjögren's syndrome Xerostomia may be caused by autoimmune conditions which damage saliva-producing cells. Sjögren's syndrome is one such disease, and it is associated with symptoms including fatigue, myalgia and arthralgia. The disease is characterised by inflammatory changes in the moisture-producing glands throughout the body, leading to reduced secretions from glands that produce saliva, tears and other secretions throughout the body. Primary Sjögren's syndrome is the combination of dry eyes and xerostomia.
Secondary Sjögren's syndrome is identical to the primary form but occurs in combination with other connective tissue disorders such as systemic lupus erythematosus or rheumatoid arthritis. Celiac disease Xerostomia may be the only symptom of celiac disease, especially in adults, who often have no obvious digestive symptoms. Radiation therapy Radiation therapy for cancers of the head and neck (including brachytherapy for thyroid cancers) where the salivary glands are close to or within the field irradiated is another major cause of xerostomia. A radiation dose of 52 Gy is sufficient to cause severe salivary dysfunction. Radiotherapy for oral cancers usually involves up to 70 Gy of radiation, often given along with chemotherapy, which may also have a damaging effect on saliva production. This side effect is a result of radiation damage to the parasympathetic nerves. Formation of salivary gland ducts depends on the secretion of a neuropeptide from the parasympathetic nerves, while development of the end buds of the salivary gland depends on acetylcholine from the parasympathetic nerves. Sicca syndrome "Sicca" simply means dryness. Sicca syndrome is not a specific condition, and there are varying definitions, but the term can describe oral and eye dryness that is not caused by autoimmune diseases (e.g., Sjögren's syndrome). Other causes Oral dryness may also be caused by mouth breathing, usually caused by partial obstruction of the upper respiratory tract. Dehydration from causes such as hemorrhage, vomiting, diarrhea, and fever can also be responsible. Alcohol may be involved in the cause of salivary gland disease, liver disease, or dehydration. Smoking is another possible cause. Other recreational drugs such as methamphetamine, cannabis, hallucinogens, or heroin may be implicated.
Hormonal disorders, such as poorly controlled diabetes, chronic graft-versus-host disease or low fluid intake in people undergoing hemodialysis for renal impairment may also result in xerostomia, due to dehydration. Nerve damage can be a cause of oral dryness: an injury to the face, or surgery in the head and neck area, can damage the nerves that are associated with salivary flow. Xerostomia may be a consequence of infection with hepatitis C virus (HCV), and a rare cause of salivary gland dysfunction may be sarcoidosis. Infection with human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) can cause a related salivary gland disease known as diffuse infiltrative lymphocytosis syndrome (DILS). Similar to taste dysfunction, xerostomia is one of the most prevalent and persistent oral symptoms associated with COVID-19. Despite a close association with COVID-19, xerostomia, dry mouth and hyposalivation tend to be overlooked in COVID-19 patients and survivors, unlike ageusia, dysgeusia and hypogeusia. Diagnostic approach A diagnosis of hyposalivation is based predominantly on the clinical signs and symptoms. The Challacombe scale may be used to classify the extent of dryness. The rate of salivary flow in an individual's mouth can also be measured. There is little correlation between symptoms and objective tests of salivary flow, such as sialometry. This test is simple and noninvasive, and involves measurement of all the saliva a patient can produce during a certain time, achieved by dribbling into a container. Sialometry can yield measures of stimulated salivary flow or unstimulated salivary flow. Stimulated salivary flow rate is calculated using a stimulant such as 10% citric acid dropped onto the tongue, and collection of all the saliva that flows from one of the parotid papillae over five or ten minutes.
Unstimulated whole saliva flow rate more closely correlates with symptoms of xerostomia than stimulated salivary flow rate. Sialography involves introduction of radio-opaque dye such as iodine into the duct of a salivary gland. It may show blockage of a duct due to a calculus. Salivary scintiscanning using technetium is rarely used. Other medical imaging that may be involved in the investigation include chest x-ray (to exclude sarcoidosis), ultrasonography and magnetic resonance imaging (to exclude Sjögrens syndrome or neoplasia). A minor salivary gland biopsy, usually taken from the lip, may be carried out if there is a suspicion of organic disease of the salivary glands. Blood tests and urinalysis may be involved to exclude a number of possible causes. To investigate xerophthalmia, the Schirmer test of lacrimal flow may be indicated. Slit-lamp examination may also be carried out. Treatment The successful treatment of xerostomia is difficult to achieve and often unsatisfactory. This involves finding any correctable cause and removing it if possible, but in many cases it is not possible to correct the xerostomia itself, and treatment is symptomatic, and also focuses on preventing tooth decay through improving oral hygiene. Where the symptom is caused by hyposalivation secondary to underlying chronic disease, xerostomia can be considered permanent or even progressive. The management of salivary gland dysfunction may involve the use of saliva substitutes and/or saliva stimulants: Saliva substitutes – These are viscous products which are applied to the oral mucosa, which can be found in the form of sprays, gels, oils, mouthwashes, mouth rinses, pastilles or viscous liquids. 
This includes water, artificial salivas (mucin-based, carboxymethylcellulose-based), and other substances (milk, vegetable oil): Mucin spray: Four trials have been completed on the effects of mucin spray on xerostomia; overall, there is no strong evidence showing that mucin spray is more effective than a placebo in reducing the symptoms of dry mouth. Mucin lozenge: Only one trial (Gravenmade 1993) has been completed regarding the effectiveness of mucin lozenges. Although it was assessed as being at high risk of bias, it showed that mucin lozenges were ineffective when compared to a placebo. Mucoadhesive disk: These disks are stuck to the palate and contain lubricating agents, flavouring agents and some antimicrobial agents. One trial (Kerr 2010) assessed their effectiveness against a placebo disk. Notably, patients from both groups (placebo and the real disk) reported an increase in subjective oral moistness. No adverse effects were reported. More research is needed in this area before conclusions are drawn. Biotene Oral Balance gel and toothpaste: One trial has been completed (Epstein 1999) regarding the effectiveness of Biotene Oral Balance gel and toothpaste. The results showed that Biotene products were "more effective than control and reduced dry mouth on waking". Saliva stimulants – organic acids (ascorbic acid, malic acid), chewing gum, parasympathomimetic drugs (choline esters, e.g. pilocarpine hydrochloride, cholinesterase inhibitors), and other substances (sugar-free mints, nicotinamide). Medications which stimulate saliva production have traditionally been administered as oral tablets, which the patient swallows, although some saliva stimulants can also be found in the form of toothpastes. Lozenges, which are retained in the mouth and then swallowed, are becoming increasingly popular.
Lozenges are soft and gentle on the mouth, and there is a belief that prolonged contact with the oral mucosa mechanically stimulates saliva production. Pilocarpine: A study by Taweechaisupapong in 2006 showed no statistically significant improvement in oral dryness and saliva production compared to placebo when administering pilocarpine lozenges. Physostigmine gel: A study by Knosravini in 2009 showed a reduction in oral dryness and a five-fold increase in saliva following physostigmine treatment. Chewing gum increases saliva production, but there is no strong evidence that it improves dry mouth symptoms. The Cochrane oral health group concluded there is insufficient evidence to determine whether pilocarpine or physostigmine are effective treatments for xerostomia; more research is needed. Dentirol chewing gum (xylitol): A study by Risheim in 1993 showed that when subjects had two sticks of gum up to five times daily, the gum gave subjective dry mouth symptom relief in approximately one-third of participants but no change in SWS (stimulated whole saliva). Profylin lozenge (xylitol/sorbitol): A study by Risheim in 1993 showed that when subjects had one lozenge four to eight times daily, Profylin lozenges gave subjective dry mouth symptom relief in approximately one-third of participants but no change in SWS. Saliva substitutes can improve xerostomia, but tend not to improve the other problems associated with salivary gland dysfunction. Parasympathomimetic drugs (saliva stimulants) such as pilocarpine may improve xerostomia symptoms and other problems associated with salivary gland dysfunction, but the evidence for treatment of radiation-induced xerostomia is limited. Both stimulants and substitutes relieve symptoms to some extent. Salivary stimulants are probably only useful in people with some remaining detectable salivary function.
A systematic review comprising 36 randomised controlled trials for the treatment of dry mouth found that there was no strong evidence to suggest that a specific topical therapy is effective. This review also states that topical therapies can be expected to provide only short-term effects, which are reversible. The review reported limited evidence that oxygenated glycerol triester spray was more effective than electrolyte sprays. Sugar-free chewing gum increases saliva production, but there is no strong evidence that it improves symptoms, and there is no clear evidence to suggest whether chewing gum is more or less effective than other treatments. There is a suggestion that intraoral devices and integrated mouthcare systems may be effective in reducing symptoms, but there was a lack of strong evidence. A systematic review of the management of radiotherapy-induced xerostomia with parasympathomimetic drugs found that there was limited evidence to support the use of pilocarpine in the treatment of radiation-induced salivary gland dysfunction. It was suggested that, barring any contraindications, a trial of the drug be offered in the above group (at a dose of five mg three times per day to minimize side effects). Improvements can take up to twelve weeks. However, pilocarpine is not always successful in improving xerostomia symptoms. The review also concluded that there was little evidence to support the use of other parasympathomimetics in this group. Another systematic review showed that there is some low-quality evidence to suggest that amifostine prevents the feeling of dry mouth or reduces the risk of moderate to severe xerostomia in people receiving radiotherapy to the head and neck (with or without chemotherapy) in the short- (end of radiotherapy) to medium-term (three months postradiotherapy).
However, it is less clear whether this effect is sustained at 12 months postradiotherapy. A 2013 review looking at non-pharmacological interventions reported a lack of evidence to support the effects of electrostimulation devices, or acupuncture, on symptoms of dry mouth. Epidemiology Xerostomia is a very common symptom. A conservative estimate of prevalence is about 20% in the general population, with increased prevalence in females (up to 30%) and the elderly (up to 50%). Estimates of the prevalence of persistent dry mouth vary between 10 and 50%. History Xerostomia has been used as a test to detect lies, which relied on emotional inhibition of salivary secretions to indicate possible incrimination. See also Xerosis (dry skin) References External links University of Illinois at Chicago NIH MedlinePlus Encyclopedia Drymouth Drug Database
You are a facilitator of medical knowledge. Provide thorough and accessible explanations of medical terms, catering to both specialists and non-specialists.
Can you demystify the medical term 'Headache' for me?
Headache is the symptom of pain in the face, head, or neck. It can occur as a migraine, tension-type headache, or cluster headache. There is an increased risk of depression in those with severe headaches. Headaches can occur as a result of many conditions. There are a number of different classification systems for headaches. The most well-recognized is that of the International Headache Society, which classifies headaches into more than 150 types of primary and secondary headaches. Causes of headaches may include dehydration; fatigue; sleep deprivation; stress; the effects of medications (overuse) and recreational drugs, including withdrawal; viral infections; loud noises; head injury; rapid ingestion of a very cold food or beverage; and dental or sinus issues (such as sinusitis). Treatment of a headache depends on the underlying cause, but commonly involves pain medication (especially in the case of migraine or cluster headache). A headache is one of the most commonly experienced of all physical discomforts. About half of adults have a headache in a given year. Tension headaches are the most common, affecting about 1.6 billion people (21.8% of the population), followed by migraine headaches, which affect about 848 million (11.7%). Causes There are more than 200 types of headaches. Some are harmless and some are life-threatening. The description of the headache and the findings on neurological examination determine whether additional tests are needed and what treatment is best. Headaches are broadly classified as "primary" or "secondary". Primary headaches are benign, recurrent headaches not caused by underlying disease or structural problems. For example, migraine is a type of primary headache. While primary headaches may cause significant daily pain and disability, they are not dangerous from a physiological point of view. Secondary headaches are caused by an underlying disease, like an infection, head injury, vascular disorders, brain bleed, stomach irritation, or tumors.
Secondary headaches can be dangerous. Certain "red flags" or warning signs indicate a secondary headache may be dangerous. Primary Ninety percent of all headaches are primary headaches. Primary headaches usually first start when people are between 20 and 40 years old. The most common types of primary headaches are migraines and tension-type headaches. They have different characteristics. Migraines typically present with pulsing head pain, nausea, photophobia (sensitivity to light) and phonophobia (sensitivity to sound). Tension-type headaches usually present with non-pulsing "bandlike" pressure on both sides of the head, not accompanied by other symptoms. Such headaches may be further classified into episodic and chronic tension-type headaches. Other very rare types of primary headaches include: cluster headaches: short episodes (15–180 minutes) of severe pain, usually around one eye, with autonomic symptoms (tearing, red eye, nasal congestion) which occur at the same time every day. Cluster headaches can be treated with triptans and prevented with prednisone, ergotamine or lithium. trigeminal neuralgia or occipital neuralgia: shooting face pain. hemicrania continua: continuous unilateral pain with episodes of severe pain. Hemicrania continua can be relieved by the medication indomethacin. primary stabbing headache: recurrent episodes of stabbing "ice pick pain" or "jabs and jolts" lasting from 1 second to several minutes without autonomic symptoms (tearing, red eye, nasal congestion). These headaches can be treated with indomethacin. primary cough headache: starts suddenly and lasts for several minutes after coughing, sneezing or straining (anything that may increase pressure in the head). Serious causes (see secondary headaches red flag section) must be ruled out before a diagnosis of "benign" primary cough headache can be made. primary exertional headache: throbbing, pulsatile pain which starts during or after exercising, lasting for 5 minutes to 24 hours.
The mechanism behind these headaches is unclear, possibly due to straining causing veins in the head to dilate, causing pain. These headaches can be prevented by not exercising too strenuously and can be treated with medications such as indomethacin. primary sex headache: dull, bilateral headache that starts during sexual activity and becomes much worse during orgasm. These headaches are thought to be due to lower pressure in the head during sex. It is important to realize that headaches that begin during orgasm may be due to a subarachnoid hemorrhage, so serious causes must be ruled out first. These headaches are treated by advising the person to stop sex if they develop a headache. Medications such as propranolol and diltiazem can also be helpful. hypnic headache: a moderate-severe headache that starts a few hours after falling asleep and lasts 15–30 minutes. The headache may recur several times during the night. Hypnic headaches usually occur in older women. They may be treated with lithium. Secondary Headaches may be caused by problems elsewhere in the head or neck. Some of these are not harmful, such as cervicogenic headache (pain arising from the neck muscles). The excessive use of painkillers can paradoxically cause worsening headaches (medication-overuse headache). More serious causes of secondary headaches include the following: meningitis: inflammation of the meninges which presents with fever and meningismus, or stiff neck. bleeding inside the brain (intracranial hemorrhage). subarachnoid hemorrhage (acute, severe headache, stiff neck without fever). ruptured aneurysm, arteriovenous malformation, intraparenchymal hemorrhage (headache only). brain tumor: dull headache, worse with exertion and change in position, accompanied by nausea and vomiting. Often, the person will have nausea and vomiting for weeks before the headache starts.
temporal arteritis: inflammatory disease of arteries common in the elderly (average age 70) with fever, headache, weight loss, jaw claudication, tender vessels by the temples, and polymyalgia rheumatica. acute closed-angle glaucoma (increased pressure in the eyeball): a headache that starts with eye pain and blurry vision, associated with nausea and vomiting. On physical exam, the person will have red eyes and a fixed, mid-dilated pupil. post-ictal headaches: headaches that happen after a convulsion or other type of seizure, as part of the period after the seizure (the post-ictal state). Gastrointestinal disorders may cause headaches, including Helicobacter pylori infection, celiac disease, non-celiac gluten sensitivity, irritable bowel syndrome, inflammatory bowel disease, gastroparesis, and hepatobiliary disorders. The treatment of the gastrointestinal disorder may lead to a remission or improvement of headaches. Migraine headaches are also associated with cyclic vomiting syndrome (CVS). CVS is characterized by episodes of severe vomiting, which often occur alongside symptoms similar to those of migraine headaches (photophobia, abdominal pain, etc.). Pathophysiology The brain itself is not sensitive to pain, because it lacks pain receptors. However, several areas of the head and neck do have pain receptors and can thus sense pain. These include the extracranial arteries, middle meningeal artery, large veins, venous sinuses, cranial and spinal nerves, head and neck muscles, the meninges, falx cerebri, parts of the brainstem, eyes, ears, teeth, and lining of the mouth. Pial arteries, rather than pial veins, are responsible for pain production. Headaches often result from traction to or irritation of the meninges and blood vessels. The pain receptors may be stimulated by head trauma or tumors and cause headaches. Blood vessel spasms, dilated blood vessels, inflammation or infection of meninges and muscular tension can also stimulate pain receptors.
Once stimulated, a nociceptor sends a message up the length of the nerve fiber to the nerve cells in the brain, signalling that a part of the body hurts. Primary headaches are more difficult to understand than secondary headaches. The exact mechanisms which cause migraines, tension headaches and cluster headaches are not known. There have been different hypotheses over time that attempt to explain what happens in the brain to cause these headaches. Migraines are currently thought to be caused by dysfunction of the nerves in the brain. Previously, migraines were thought to be caused by a primary problem with the blood vessels in the brain. This vascular theory, which was developed in the 20th century by Wolff, suggested that the aura in migraines is caused by constriction of intracranial vessels (vessels inside the brain), and the headache itself is caused by rebound dilation of extracranial vessels (vessels just outside the brain). Dilation of these extracranial blood vessels activates the pain receptors in the surrounding nerves, causing a headache. The vascular theory is no longer accepted. Studies have shown migraine head pain is not accompanied by extracranial vasodilation, but rather only has some mild intracranial vasodilation. Currently, most specialists think migraines are due to a primary problem with the nerves in the brain. Auras are thought to be caused by a wave of increased activity of neurons in the cerebral cortex (a part of the brain) known as cortical spreading depression, followed by a period of depressed activity. Some people think headaches are caused by the activation of sensory nerves which release peptides or serotonin, causing inflammation in arteries, dura and meninges and also causing some vasodilation.
Triptans, medications that treat migraines, block serotonin receptors and constrict blood vessels. People who are more susceptible to experiencing migraines without headache are those who have a family history of migraines, women, and women who are experiencing hormonal changes or are taking birth control pills or are prescribed hormone replacement therapy. Tension headaches are thought to be caused by activation of peripheral nerves in the head and neck muscles. Cluster headaches involve overactivation of the trigeminal nerve and hypothalamus in the brain, but the exact cause is unknown. Diagnosis Most headaches can be diagnosed by the clinical history alone. If the symptoms described by the person sound dangerous, further testing with neuroimaging or lumbar puncture may be necessary. Electroencephalography (EEG) is not useful for headache diagnosis. The first step to diagnosing a headache is to determine if the headache is old or new. A "new headache" can be a headache that has started recently, or a chronic headache that has changed character. For example, if a person has chronic weekly headaches with pressure on both sides of the head, and then develops a sudden, severe throbbing headache on one side of the head, they have a new headache. Red flags It can be challenging to differentiate between low-risk, benign headaches and high-risk, dangerous headaches since symptoms are often similar. Headaches that are possibly dangerous require further lab tests and imaging to diagnose. The American College of Emergency Physicians has published criteria for low-risk headaches.
They are as follows: age younger than 30 years; features typical of primary headache; history of similar headache; no abnormal findings on neurologic exam; no concerning change in normal headache pattern; no high-risk comorbid conditions (for example, HIV); and no new concerning history or physical examination findings. A number of characteristics make it more likely that the headache is due to potentially dangerous secondary causes which may be life-threatening or cause long-term damage. These "red flag" symptoms mean that a headache warrants further investigation with neuroimaging and lab tests. In general, people complaining of their "first" or "worst" headache warrant imaging and further workup. People with progressively worsening headache also warrant imaging, as they may have a mass or a bleed that is gradually growing, pressing on surrounding structures and causing worsening pain. People with neurological findings on exam, such as weakness, also need further workup. The American Headache Society recommends using "SSNOOP", a mnemonic to remember the red flags for identifying a secondary headache: Systemic symptoms (fever or weight loss); Systemic disease (HIV infection, malignancy); Neurologic symptoms or signs; Onset sudden (thunderclap headache); Onset after age 40 years; Previous headache history (first, worst, or different headache). Other red-flag symptoms may also be present. Old headaches Old headaches are usually primary headaches and are not dangerous. They are most often caused by migraines or tension headaches. Migraines are often unilateral, pulsing headaches accompanied by nausea or vomiting. There may be an aura (visual symptoms, numbness or tingling) 30–60 minutes before the headache, warning the person of a headache. Migraines may also occur without auras. Tension-type headaches usually have bilateral "bandlike" pressure on both sides of the head, usually without nausea or vomiting. However, some symptoms from both headache groups may overlap.
It is important to distinguish between the two because the treatments are different. The mnemonic POUND helps distinguish between migraines and tension-type headaches. POUND stands for: Pulsatile quality, duration of 4–72 hOurs, Unilateral location, Nausea or vomiting, and Disabling intensity. One review article found that if 4–5 of the POUND characteristics are present, a migraine is 24 times as likely a diagnosis as a tension-type headache (likelihood ratio 24). If 3 characteristics of POUND are present, migraine is 3 times more likely a diagnosis than tension-type headache (likelihood ratio 3). If only 2 POUND characteristics are present, tension-type headaches are 60% more likely (likelihood ratio 0.41). Another study found that the following factors each independently increase the chance of migraine over tension-type headache: nausea, photophobia, phonophobia, exacerbation by physical activity, unilateral location, throbbing quality, chocolate as a headache trigger, and cheese as a headache trigger. Cluster headaches are relatively rare (1 in 1000 people) and are more common in men than women. They present with sudden-onset explosive pain around one eye and are accompanied by autonomic symptoms (tearing, runny nose and red eye). Temporomandibular jaw pain (chronic pain in the jaw joint) and cervicogenic headache (headache caused by pain in the muscles of the neck) are also possible diagnoses. For chronic, unexplained headaches, keeping a headache diary can be useful for tracking symptoms and identifying triggers, such as association with the menstrual cycle, exercise and food. While mobile electronic diaries for smartphones are becoming increasingly common, a recent review found most are developed with a lack of evidence base and scientific expertise. New headaches New headaches are more likely to be dangerous secondary headaches. They can, however, simply be the first presentation of a chronic headache syndrome, like migraine or tension-type headaches. One recommended diagnostic approach is as follows.
If any urgent red flags are present, such as visual loss, new seizures, new weakness, or new confusion, further workup with imaging and possibly a lumbar puncture should be done (see the red flags section for more details). If the headache is of sudden onset (thunderclap headache), a computed tomography (CT) scan to look for a brain bleed (subarachnoid hemorrhage) should be done. If the CT scan does not show a bleed, a lumbar puncture should be done to look for blood in the CSF, as the CT scan can be falsely negative and subarachnoid hemorrhages can be fatal. If there are signs of infection such as fever, rash, or stiff neck, a lumbar puncture to look for meningitis should be considered. If there is jaw claudication and scalp tenderness in an older person, a temporal artery biopsy to look for temporal arteritis should be performed and immediate treatment should be started. Neuroimaging Old headaches The US Headache Consortium has guidelines for neuroimaging of non-acute headaches. Most old, chronic headaches do not require neuroimaging. If a person has the characteristic symptoms of a migraine, neuroimaging is not needed, as it is very unlikely that the person has an intracranial abnormality. If the person has neurological findings, such as weakness, on exam, neuroimaging may be considered. New headaches All people who present with red flags indicating a dangerous secondary headache should receive neuroimaging. The best form of neuroimaging for these headaches is controversial. A non-contrast computerized tomography (CT) scan is usually the first step in head imaging, as it is readily available in emergency departments and hospitals and is cheaper than MRI. Non-contrast CT is best for identifying an acute head bleed. Magnetic resonance imaging (MRI) is best for brain tumors and problems in the posterior fossa, or back of the brain. 
MRI is more sensitive for identifying intracranial problems; however, it can pick up brain abnormalities that are not relevant to the person's headaches. The American College of Radiology publishes recommended imaging tests for specific clinical situations. Lumbar puncture A lumbar puncture is a procedure in which cerebrospinal fluid is removed from the spine with a needle. A lumbar puncture is necessary to look for infection or blood in the spinal fluid. A lumbar puncture can also evaluate the pressure in the spinal column, which can be useful for people with idiopathic intracranial hypertension (usually young, obese women who have increased intracranial pressure) or other causes of increased intracranial pressure. In most cases, a CT scan should be done first. Classification Headaches are most thoroughly classified by the International Headache Society's International Classification of Headache Disorders (ICHD), which published its second edition in 2004. The third edition of the International Headache Classification was published in 2013 in a beta version ahead of the final version. This classification is accepted by the WHO. Other classification systems exist. One of the first published attempts was in 1951. The US National Institutes of Health developed a classification system in 1962. ICHD-2 The International Classification of Headache Disorders (ICHD) is an in-depth hierarchical classification of headaches published by the International Headache Society. It contains explicit (operational) diagnostic criteria for headache disorders. The first version of the classification, ICHD-1, was published in 1988. The current revision, ICHD-2, was published in 2004. The classification uses numeric codes. The top, one-digit diagnostic level includes 14 headache groups. 
The first four of these groups are classified as primary headaches and groups 5–12 as secondary headaches; the last two groups cover cranial neuralgias, central and primary facial pain, and other headaches. The ICHD-2 classification defines migraines, tension-type headaches, cluster headache and other trigeminal autonomic headaches as the main types of primary headaches. Also, according to the same classification, stabbing headaches and headaches due to cough, exertion and sexual activity (sexual headache) are classified as primary headaches. The daily-persistent headaches along with the hypnic headache and thunderclap headaches are considered primary headaches as well. Secondary headaches are classified based on their cause and not on their symptoms. According to the ICHD-2 classification, the main types of secondary headaches include those that are due to head or neck trauma, such as whiplash injury, intracranial hematoma, post-craniotomy or other head or neck injury. Headaches caused by cranial or cervical vascular disorders, such as ischemic stroke and transient ischemic attack, non-traumatic intracranial hemorrhage, vascular malformations or arteritis, are also defined as secondary headaches. This type of headache may also be caused by cerebral venous thrombosis or different intracranial vascular disorders. Other secondary headaches are those due to intracranial disorders that are not vascular, such as low or high cerebrospinal fluid pressure, non-infectious inflammatory disease, intracranial neoplasm, epileptic seizure or other types of disorders or diseases that are intracranial but not associated with the vasculature of the central nervous system. ICHD-2 classifies headaches that are caused by the ingestion of a certain substance, or by its withdrawal, as secondary headaches as well. This type of headache may result from the overuse of some medications or from exposure to some substances. 
HIV/AIDS, intracranial infections and systemic infections may also cause secondary headaches. The ICHD-2 system of classification includes the headaches associated with homeostasis disorders in the category of secondary headaches. This means that headaches caused by dialysis, high blood pressure, hypothyroidism, cephalalgia and even fasting are considered secondary headaches. Secondary headaches, according to the same classification system, can also be due to injury of any of the facial structures, including the teeth, jaws, or temporomandibular joint. Headaches caused by psychiatric disorders such as somatization or psychotic disorders are also classified as secondary headaches. The ICHD-2 classification puts cranial neuralgias and other types of neuralgia in a different category. According to this system, there are 19 types of neuralgias and headaches due to different central causes of facial pain. Moreover, the ICHD-2 includes a category that contains all the headaches that cannot be classified. Although the ICHD-2 is the most complete headache classification available and includes frequency in the diagnostic criteria of some types of headaches (primarily primary headaches), it does not specifically code frequency or severity, which are left to the discretion of the examiner. NIH The NIH classification consists of brief definitions of a limited number of headaches. The NIH system of classification is more succinct and describes only five categories of headaches. In this case, primary headaches are those that do not show an organic or structural cause. According to this classification, primary headaches can only be vascular, myogenic, cervicogenic, traction, or inflammatory. Management Primary headache syndromes have many different possible treatments. In those with chronic headaches, the long-term use of opioids appears to result in greater harm than benefit. 
Migraines Migraine can be somewhat improved by lifestyle changes, but most people require medicines to control their symptoms. Medications either prevent migraines or reduce symptoms once a migraine starts. Preventive medications are generally recommended when people have more than four attacks of migraine per month, when headaches last longer than 12 hours, or when the headaches are very disabling. Possible therapies include beta blockers, antidepressants, anticonvulsants and NSAIDs. The type of preventive medicine is usually chosen based on the other symptoms the person has. For example, if the person also has depression, an antidepressant is a good choice. Abortive therapies for migraines may be oral, if the migraine is mild to moderate, or may require stronger medicine given intravenously or intramuscularly. Mild to moderate headaches should first be treated with acetaminophen (paracetamol) or NSAIDs, like ibuprofen. If accompanied by nausea or vomiting, an antiemetic such as metoclopramide (Reglan) can be given orally or rectally. Moderate to severe attacks should be treated first with an oral triptan, a medication that mimics serotonin (an agonist) and causes mild vasoconstriction. If accompanied by nausea and vomiting, parenteral (through a needle in the skin) triptans and antiemetics can be given. Sphenopalatine ganglion block (SPG block, also known as nasal ganglion block or pterygopalatine ganglion block) can abort and prevent migraines, tension headaches and cluster headaches. It was originally described by the American ENT surgeon Greenfield Sluder in 1908. Both blocks and neurostimulation have been studied as treatments for headaches. Several complementary and alternative strategies can help with migraines. The American Academy of Neurology guidelines for migraine treatment in 2000 stated that relaxation training, electromyographic feedback and cognitive behavioral therapy may be considered for migraine treatment, along with medications. 
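The stepwise abortive approach described above can be expressed as a small decision function. This is a minimal sketch of the logic only, assuming simplified severity categories; it is an illustration, not clinical guidance, and the category names are invented for the example:

```python
def suggest_abortive_therapy(severity, nausea_or_vomiting):
    """Sketch of the stepwise abortive-therapy logic for migraine.

    severity: "mild_to_moderate" or "moderate_to_severe" (assumed
    simplified categories; real assessment is clinical).
    Returns a list of option descriptions, as narrated above.
    """
    if severity == "mild_to_moderate":
        # first-line: simple oral analgesics
        options = ["acetaminophen (paracetamol) or an NSAID such as ibuprofen"]
        if nausea_or_vomiting:
            # add an antiemetic, given orally or rectally
            options.append("antiemetic such as metoclopramide, orally or rectally")
    elif severity == "moderate_to_severe":
        if nausea_or_vomiting:
            # oral route unreliable with vomiting: use parenteral agents
            options = ["parenteral triptan", "parenteral antiemetic"]
        else:
            options = ["oral triptan (a serotonin agonist)"]
    else:
        raise ValueError("unknown severity category")
    return options
```

The function mirrors the two branch points in the prose: severity selects the drug class, and nausea or vomiting switches the route of administration.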
Tension-type headaches Tension-type headaches can usually be managed with NSAIDs (ibuprofen, naproxen, aspirin) or acetaminophen. Triptans are not helpful in tension-type headaches unless the person also has migraines. For chronic tension-type headaches, amitriptyline is the only medication proven to help. Amitriptyline is a medication which treats depression and also independently treats pain. It works by blocking the reuptake of serotonin and norepinephrine, and also reduces muscle tenderness by a separate mechanism. Studies evaluating acupuncture for tension-type headaches have been mixed; overall, they show that acupuncture is probably not helpful for tension-type headaches. Cluster headaches Abortive therapy for cluster headaches includes subcutaneous sumatriptan (injected under the skin) and triptan nasal sprays. High-flow oxygen therapy also helps with relief. For people with extended periods of cluster headaches, preventive therapy can be necessary. Verapamil is recommended as first-line treatment. Lithium can also be useful. For people with shorter bouts, a short course of prednisone (10 days) can be helpful. Ergotamine is useful if given 1–2 hours before an attack. Secondary headaches Treatment of secondary headaches involves treating the underlying cause. For example, a person with meningitis will require antibiotics. A person with a brain tumor may require surgery, chemotherapy or brain radiation. Neuromodulation Peripheral neuromodulation has tentative benefits in primary headaches, including cluster headaches and chronic migraine. Its mechanism of action is still under investigation. Epidemiology Literature reviews find that approximately 64–77% of adults have had a headache at some point in their lives. During each year, on average, 46–53% of people have headaches. However, the prevalence of headache varies widely depending on how the survey was conducted, with studies finding lifetime prevalence estimates as low as 8% and as high as 96%. 
Most of these headaches are not dangerous. Only approximately 1–5% of people who seek emergency treatment for headaches have a serious underlying cause. More than 90% of headaches are primary headaches. Most of these primary headaches are tension headaches. Most people with tension headaches have "episodic" tension headaches that come and go. Only 3.3% of adults have chronic tension headaches, with headaches for more than 15 days in a month. Approximately 12–18% of people in the world have migraines. More women than men experience migraines. In Europe and North America, 5–9% of men experience migraines, while 12–25% of women experience migraines. Cluster headaches are relatively uncommon. They affect only 1–3 per thousand people in the world. Cluster headaches affect approximately three times as many men as women. History The first recorded classification system was published by Aretaeus of Cappadocia, a medical scholar of Greco-Roman antiquity. He made a distinction between three different types of headache: i) cephalalgia, by which he indicated a short-lasting, mild headache; ii) cephalea, referring to a chronic type of headache; and iii) heterocrania, a paroxysmal headache on one side of the head. Another classification system that resembles the modern ones was published by Thomas Willis in De Cephalalgia in 1672. In 1787, Christian Baur generally divided headaches into idiopathic (primary headaches) and symptomatic (secondary ones), and defined 84 categories. Children In general, children experience the same types of headaches as adults do, but their symptoms may be slightly different. The diagnostic approach to headaches in children is similar to that of adults. However, young children may not be able to verbalize pain well. If a young child is fussy, they may have a headache. Approximately 1% of emergency department visits for children are for headache. Most of these headaches are not dangerous. 
The most common type of headache seen in pediatric emergency rooms is headache caused by a cold (28.5%). Other headaches diagnosed in the emergency department include post-traumatic headache (20%), headache related to a problem with a ventriculoperitoneal shunt (a device placed in the brain to remove excess CSF and reduce pressure in the brain) (11.5%) and migraine (8.5%). The most common serious headaches found in children include brain bleeds (subdural hematoma, epidural hematoma), brain abscesses, meningitis and ventriculoperitoneal shunt malfunction. Only 4–6.9% of children with a headache have a serious cause. Just as in adults, most headaches are benign, but when head pain is accompanied by other symptoms such as speech problems, muscle weakness, and loss of vision, a more serious underlying cause may exist: hydrocephalus, meningitis, encephalitis, abscess, hemorrhage, tumor, blood clots, or head trauma. In these cases, the headache evaluation may include a CT scan or MRI in order to look for possible structural disorders of the central nervous system. If a child with a recurrent headache has a normal physical exam, neuroimaging is not recommended. Guidelines state that children with abnormal neurologic exams, confusion, seizures, recent onset of the worst headache of life, a change in headache type or anything else suggesting neurologic problems should receive neuroimaging. When children complain of headaches, many parents are concerned about a brain tumor. Generally, headaches caused by brain masses are incapacitating and accompanied by vomiting. One study found that the characteristics associated with brain tumor in children are: headache for greater than 6 months, headache related to sleep, vomiting, confusion, no visual symptoms, no family history of migraine and an abnormal neurologic exam. Some measures can help prevent headaches in children. 
Drinking plenty of water throughout the day, avoiding caffeine, getting enough and regular sleep, eating balanced meals at the proper times, and reducing stress and excess activities may prevent headaches. Treatments for children are similar to those for adults; however, certain medications, such as narcotics, should not be given to children. Children who have headaches will not necessarily have headaches as adults. In one study of 100 children with headache, eight years later 44% of those with tension headache and 28% of those with migraines were headache free. In another study of people with chronic daily headache, 75% did not have chronic daily headaches two years later, and 88% did not have chronic daily headaches eight years later. See also Eye strain References External links Headache at Curlie
You are a conduit for medical education. Your task is to offer detailed explanations of medical terms, ensuring they are understandable and precise.
Can you demystify the medical term 'Epididymitis' for me?
Epididymitis is a medical condition characterized by inflammation of the epididymis, a curved structure at the back of the testicle. Onset of pain is typically over a day or two. The pain may improve with raising the testicle. Other symptoms may include swelling of the testicle, burning with urination, or frequent urination. Inflammation of the testicle is commonly also present. In those who are young and sexually active, gonorrhea and chlamydia are frequently the underlying cause. In older males and men who practice insertive anal sex, enteric bacteria are a common cause. Diagnosis is typically based on symptoms. Conditions that may result in similar symptoms include testicular torsion, inguinal hernia, and testicular cancer. Ultrasound can be useful if the diagnosis is unclear. Treatment may include pain medications, NSAIDs, and elevation. Recommended antibiotics in those who are young and sexually active are ceftriaxone and doxycycline. Among those who are older, ofloxacin may be used. Complications include infertility and chronic pain. People aged 15 to 35 are most commonly affected, with about 600,000 people within this age group affected per year in the United States. Signs and symptoms Those aged 15 to 35 are most commonly affected. The acute form usually develops over the course of several days, with pain and swelling frequently in only one testis, which will hang low in the scrotum. There will often be a recent history of dysuria or urethral discharge. Fever is also a common symptom. In the chronic version, the patient may have painful point tenderness but may or may not have an irregular epididymis upon palpation, though palpation may reveal an indurated epididymis. A scrotal ultrasound may reveal problems with the epididymis, but such an ultrasound may also show nothing unusual. 
The majority of patients who present with chronic epididymitis have had symptoms for over five years.: p.311 Complications If untreated, acute epididymitis's major complications are abscess formation and testicular infarction. Chronic epididymitis can lead to permanent damage or even destruction of the epididymis and testicle (resulting in infertility and/or hypogonadism), and the infection may spread to any other organ or system of the body. Chronic pain is also an associated complication of untreated chronic epididymitis. Causes Though urinary tract infections in men are rare, bacterial infection is the most common cause of acute epididymitis. The bacteria in the urethra back-track through the urinary and reproductive structures to the epididymis. In rare circumstances, the infection reaches the epididymis via the bloodstream. In sexually active men, Chlamydia trachomatis is responsible for two-thirds of acute cases, followed by Neisseria gonorrhoeae and E. coli (or other bacteria that cause urinary tract infection). Particularly among men over age 35 in whom the cause is E. coli, epididymitis is commonly due to urinary tract obstruction. Less common microbes include Ureaplasma, Mycobacterium, and cytomegalovirus, or Cryptococcus in patients with HIV infection. E. coli is more common in boys before puberty, the elderly, and men who have sex with men. In the majority of cases in which bacteria are the cause, only one side of the scrotum is the locus of pain. Non-infectious causes are also possible. Reflux of sterile urine (urine without bacteria) through the ejaculatory ducts may cause inflammation with obstruction. In children, it may be a response following an infection with enterovirus, adenovirus or Mycoplasma pneumoniae. 
Rare non-infectious causes of chronic epididymitis include sarcoidosis (more prevalent in black men) and Behçet's disease.: p.311 Any form of epididymitis can be caused by genito-urinary surgery, including prostatectomy and urinary catheterization. Congestive epididymitis is a long-term complication of vasectomy. Chemical epididymitis may also result from drugs such as amiodarone. Diagnosis Diagnosis is typically based on symptoms. Conditions that may result in similar symptoms include testicular torsion, inguinal hernia, and testicular cancer. Ultrasound can be useful if the diagnosis is unclear. Epididymitis usually has a gradual onset. Typical findings are redness, warmth and swelling of the scrotum, with tenderness behind the testicle, away from the middle (this is the normal position of the epididymis relative to the testicle). The cremasteric reflex (elevation of the testicle in response to stroking the upper inner thigh) remains normal. This is a useful sign to distinguish it from testicular torsion. If there is pain relieved by elevation of the testicle, this is called Prehn's sign, which is, however, non-specific and not useful for diagnosis. Before the advent of sophisticated medical imaging techniques, surgical exploration was the standard of care. Today, Doppler ultrasound is a common test: it can demonstrate areas of blood flow and can distinguish clearly between epididymitis and torsion. However, as torsion and other sources of testicular pain can often be determined by palpation alone, some studies have suggested that the only real benefit of an ultrasound is to assure the person that they do not have testicular cancer.: p.237  Nuclear testicular blood flow testing is rarely used. Additional tests may be necessary to identify underlying causes. In younger children, a urinary tract anomaly is frequently found. In sexually active men, tests for sexually transmitted diseases may be done. 
These may include microscopy and culture of a first-void urine sample, Gram stain and culture of fluid or a swab from the urethra, nucleic acid amplification tests (to amplify and detect microbial DNA or other nucleic acids), or tests for syphilis and HIV. Classification Epididymitis can be classified as acute, subacute, or chronic, depending on the duration of symptoms. Chronic epididymitis Chronic epididymitis is epididymitis that is present for more than 3 months. Chronic epididymitis is characterized by inflammation even when there is no infection present. Tests are needed to distinguish chronic epididymitis from a range of other disorders that can cause constant scrotal pain, including testicular cancer (though this is often painless), enlarged scrotal veins (varicocele), calcifications, and a possible cyst within the epididymis. Some research has found that as many as 80% of visits to a urologist for scrotal pain are for chronic epididymitis.: p.311  As a further complication, the nerves in the scrotal area are closely connected to those of the abdomen, sometimes causing abdominal pain similar to a hernia (see referred pain). Chronic epididymitis is most commonly associated with lower back pain, and the onset of pain often co-occurs with activity that stresses the lower back (i.e., heavy lifting, long periods of car driving, poor posture while sitting, or any other activity that interferes with the normal curve of the lumbar lordosis region).: p.237 Treatment In both the acute and chronic forms, antibiotics are used if an infection is suspected. The treatment of choice is often azithromycin and cefixime, to cover both gonorrhoeae and chlamydia. Fluoroquinolones are no longer recommended due to widespread resistance of gonorrhoeae to this class. Doxycycline may be used as an alternative to azithromycin. 
In chronic epididymitis, a four- to six-week course of antibiotics may be prescribed to ensure the complete eradication of any possible bacterial cause, especially the various chlamydiae. For cases caused by enteric organisms (such as E. coli), ofloxacin or levofloxacin are recommended. In children, fluoroquinolones and doxycycline are best avoided. Since bacteria that cause urinary tract infections are often the cause of epididymitis in children, co-trimoxazole or suitable beta-lactams (for example, cephalexin) can be used. Household remedies such as elevation of the scrotum and cold compresses applied regularly to the scrotum may relieve the pain in acute cases. Painkillers or anti-inflammatory drugs are often used for treatment of both chronic and acute forms. Hospitalisation is indicated for severe cases, and check-ups can ensure the infection has cleared up. Surgical removal of the epididymis is rarely necessary, causes sterility, and gives relief from pain in only approximately 50% of cases. However, in acute suppurating epididymitis (acute epididymitis with a discharge of pus), an epididymotomy may be recommended; in refractory cases, a full epididymectomy may be required. In cases with unrelenting testicular pain, removal of the entire testicle (orchiectomy) may also be warranted. It is generally believed that most cases of chronic epididymitis will eventually "burn out" of the patient's system if left untreated, though this might take years or even decades. However, some prostate-related medications have proven effective in treating chronic epididymitis, including doxazosin. Epidemiology Epididymitis accounts for 1 in 144 visits for medical care (0.69 percent) in men 18 to 50 years old, or 600,000 cases in males between 18 and 35, in the United States. It occurs primarily in those 16 to 30 years of age and 51 to 70 years. As of 2008 there appears to be an increase in incidence in the United States that parallels an increase in reported cases of chlamydia and gonorrhea. 
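The age- and cause-based antibiotic choices described in the treatment sections above can be sketched as a small selection function. This is a hedged illustration of the decision logic only, with simplified inputs assumed for the example; it is not prescribing guidance:

```python
def empiric_antibiotics(age_years, suspected_enteric):
    """Sketch of the antibiotic-selection logic narrated above.

    age_years: patient age (a simple cutoff of 18 is assumed here
    purely for illustration of the "in children" branch).
    suspected_enteric: True if an enteric organism (e.g. E. coli)
    is the suspected cause.
    Returns a one-element list describing the suggested regimen.
    """
    if age_years < 18:
        # fluoroquinolones and doxycycline are best avoided in children
        return ["co-trimoxazole or a suitable beta-lactam (e.g. cephalexin)"]
    if suspected_enteric:
        # enteric organisms such as E. coli
        return ["ofloxacin or levofloxacin"]
    # otherwise cover both gonorrhoeae and chlamydia
    return ["azithromycin (or doxycycline) plus cefixime"]
```

The three branches correspond directly to the prose: children avoid fluoroquinolones and doxycycline, enteric causes get a fluoroquinolone, and sexually transmitted causes get dual coverage.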
References Further reading Galejs LE (February 1999). "Diagnosis and treatment of the acute scrotum". Am Fam Physician. 59 (4): 817–24. PMID 10068706. Nickel JC (2003). "Chronic epididymitis: a practical approach to understanding and managing a difficult urologic enigma". Rev Urol. 5 (4): 209–15. PMC 1553215. PMID 16985840. External links Epididymitis at Curlie
You function as a medical informant. Please provide in-depth yet accessible descriptions of medical terms, suitable for a broad audience.
Can you demystify the medical term 'Embolic and thrombotic events after COVID-19 vaccination' for me?
Post-vaccination embolic and thrombotic events, termed vaccine-induced immune thrombotic thrombocytopenia (VITT), vaccine-induced prothrombotic immune thrombocytopenia (VIPIT), thrombosis with thrombocytopenia syndrome (TTS), or vaccine-associated thrombotic thrombocytopenia (VATT), are rare types of blood clotting syndromes that were initially observed in a number of people who had previously received the Oxford–AstraZeneca COVID‑19 vaccine (AZD1222) during the COVID‑19 pandemic. The syndrome was subsequently also described with the Janssen COVID‑19 vaccine (Johnson & Johnson), leading to suspension of its use until its safety had been reassessed. On 5 May 2022, the FDA posted a bulletin limiting the use of the Janssen vaccine to very specific cases due to further reassessment of the risks of TTS, although the FDA also stated in the same bulletin that the benefits of the vaccine outweigh the risks. In April 2021, AstraZeneca and the European Medicines Agency (EMA) updated their information for healthcare professionals about AZD1222, saying it is "considered plausible" that there is a causal relationship between the vaccination and the occurrence of thrombosis in combination with thrombocytopenia and that, "although such adverse reactions are very rare, they exceeded what would be expected in the general population". Signs and symptoms The thrombosis events associated with the COVID‑19 vaccine may occur 4–28 days after its administration and mainly affect women under 55. Several relatively unusual types of thrombosis were specifically reported to be occurring in those with the reaction: cerebral venous sinus thrombosis and thrombosis of the splanchnic veins. Cerebral venous sinus thrombosis may cause severe headache, stroke-like symptoms (weakness of a limb and/or facial muscles), seizures and coma. 
Splanchnic vein thrombosis may cause abdominal pain, accumulation of fluid in the abdominal cavity, and gastrointestinal bleeding. Other forms of thrombosis, such as the more common pulmonary embolism, may also occur. Arterial thrombosis has also been reported. The low platelet count may manifest as petechiae (tiny blood spots under the skin) beyond the site of the injection. Disseminated intravascular coagulation (DIC), the diffuse formation of blood clots throughout the blood vessels of the body, has been reported as part of the syndrome. DIC may cause a range of symptoms, including abnormal bleeding, breathlessness, chest pain, neurological symptoms, low blood pressure, or swelling. COVID‑19 vaccines have some adverse effects that are listed as common in the two or three days following vaccination, which are usually mild and temporary. Causes The rare simultaneous occurrence of thrombocytopenia (low blood platelets) with blood clots after vaccination raised the original concern about this condition. In many cases where acute thrombosis and thrombocytopenia have been found together after COVID‑19 vaccination, an antibody against platelet factor 4 has been identified. This phenomenon is mostly encountered in some people who have been administered heparin, but none of the reported cases had received heparin. More rarely, this phenomenon had previously been described as an autoimmune phenomenon in people who had not been exposed to heparin. One striking feature of thrombocytopenia in the presence of anti-PF4 antibodies is the propensity of some to develop thrombosis, a phenomenon called heparin-induced thrombocytopenia when heparin is involved. Thrombocytopenia is generally a common symptom after or during many viral infections, and it "has been consistently reported" after intravenous administration of adenoviral gene transfer vectors, although its mechanisms are not yet clear. 
There is no confirmed causal link between the syndrome and any COVID‑19 vaccination; however, the EMA is conducting investigations into AZD1222 and the Janssen COVID‑19 vaccine (Johnson & Johnson) for possible causal links. On 7 April 2021, the EMA noted one "plausible explanation" for the combination of blood clots and low blood platelets is "an immune response, leading to a condition similar to one seen sometimes in people treated with heparin", that is, heparin-induced thrombocytopenia (HIT). Diagnosis In the United Kingdom, professional societies led by the Royal College of Emergency Medicine have issued a guideline for suspected cases. Someone presenting with concerning symptoms between five and 28 days after administration of the vaccine is assessed for a possible thrombotic complication, with a full blood count (which includes a platelet count) as the initial investigation. If the platelet count is decreased, determination of the D-dimer and fibrinogen levels may be performed, with hematology expert advice recommended if these are elevated above specific cut-offs. Management Guidelines from professional societies recommend treatment with alternative anticoagulants instead of heparin, as there is a possibility that it may aggravate the phenomenon. Alternative options include the direct-acting oral anticoagulants (DOACs), argatroban, fondaparinux or danaparoid, depending on the circumstances. Platelet transfusion is discouraged, as this too may aggravate thrombosis. UK guidelines by the British Society for Haematology recommend the administration of intravenous immunoglobulin (IVIG) to reduce levels of the pathogenic antibody. Low fibrinogen levels may require correction with fibrinogen concentrate or cryoprecipitate. Epidemiology The Paul Ehrlich Institute has recorded 31 cerebral venous sinus thromboses (CVST) and nine deaths out of 2.7 million people vaccinated with AZD1222 in Germany. 
On 2 April 2021, the UK's Medicines and Healthcare products Regulatory Agency reported 22 cases of CVST and a further eight cases of clotting problems, both associated with a low level of blood platelets, following a "rigorous review" of its Yellow Card reporting. The institute also reported finding no events of this type occurring after vaccination with the Pfizer–BioNTech COVID‑19 vaccine. The EMA had earlier said that a link between certain very rare blood clots and the AstraZeneca vaccine is "not proven, but is possible". Observations in Germany of these rare events seemed to relate mostly to women aged under 55. However, because Germany had previously restricted AZD1222 to under-65s, the population vaccinated there with AZD1222 is comparatively younger, and consequently contained a higher proportion of women taking the contraceptive pill. As CVSTs are more likely in women using hormonal contraceptives, this inherent risk factor may be an influence on the reported preponderance of women experiencing these events following vaccination. The UK, in contrast, applied its Pfizer and AZD1222 vaccines generally to older groups first, then by decreasing age. The UK Medicines and Healthcare products Regulatory Agency (MHRA) reporting regarding AZD1222 up to 3 November 2021 records 73 deaths out of 425 cases (17%) in the context of 24.9 million first doses administered. Regulatory status According to the European Medicines Agency (EMA), as of 28 March 2021, the reported number of cases of embolic and thrombotic events after vaccination was lower than the rate of such events in the general population overall. However, the specific syndrome of embolic and thrombotic events in combination with low levels of blood platelets presenting in post-vaccination cases raised the possibility of an association between the vaccine and the relatively rare syndrome. 
The EMA also said that there is no proof that these events are caused by the vaccines, but that the possibility could not yet be ruled out completely. Accordingly, the EMA advised that people who received the vaccine and experienced symptoms suggestive of thrombosis, including shortness of breath, blurred vision and severe or persistent headache, should seek medical attention. In a press briefing on 7 April 2021, Emer Cooke, the executive director of the EMA, began by stating "Our safety committee, the Pharmacovigilance Risk Assessment Committee (PRAC) of the European Medicines Agency, has confirmed that the benefits of the AstraZeneca vaccine in preventing COVID‑19 overall outweigh the risks of side effects. COVID‑19 is a very serious disease with high hospitalization and death rates and every day COVID is still causing thousands of deaths across the EU. This vaccine has proven to be highly effective, it prevents severe disease and hospitalization, and it is saving lives. Vaccination is extremely important in helping us in the fight against COVID‑19 and we need to use the vaccines we have to protect us from the devastating effects". She went on to say "The PRAC after a very in-depth analysis has concluded that the reported cases of unusual blood clotting following vaccination with the AstraZeneca vaccine should be listed as possible side effects of the vaccine". At the same briefing, Dr. Sabine Straus of PRAC confirmed "our conclusion is that these clotting disorders are very rare side effects of the vaccine". The UK Medicines and Healthcare products Regulatory Agency (MHRA) held a news conference on 7 April 2021; while noting there is no proof that AZD1222 vaccination caused the rare blood clots, it indicated the possibility of a link is getting stronger. The UK is to offer under-30s alternative vaccines. 
The reasoning was that, in the 20–29 age range, the benefit of vaccination to the individual is smaller, as their likelihood of harm from COVID‑19 is lower and closer to the potential risk of harm from the vaccine (at a medium exposure risk, with COVID‑19 infection cases running at a rate of 60 per 100,000). For higher age groups, the benefit-to-risk ratio increases. Also on 7 April 2021, an interim statement from the WHO said its advisory body, GACVS, found any "causal relationship" between the rare blood clot cases and AZD1222 to be "plausible but is not confirmed". On 20 April 2021, the safety committee of the EMA (PRAC) found a "possible link to very rare cases of unusual blood clots with low blood platelets" for the Johnson & Johnson Janssen vaccine, and required that these rare events, similar to those noted for AZD1222, should be listed as a very rare side effect. The EMA states the overall risk-benefit for the Janssen vaccine remains positive. On 16 December 2021, the US Centers for Disease Control and Prevention (CDC) recommended that the Moderna and Pfizer–BioNTech vaccines should be preferred over the Janssen vaccine, following growing concerns about rare blood clots. Janssen should still be offered to people who specifically request it. History Organizations Global vaccine safety comes under the remit of the World Health Organization (WHO), and in particular its Global Advisory Committee on Vaccine Safety (GACVS). Other drug regulatory agencies significantly involved include: European Medicines Agency (EMA), the regional regulatory authority for the EU. Medicines and Healthcare products Regulatory Agency (MHRA), the medical authority for the United Kingdom. Paul Ehrlich Institute (PEI), a German federal agency supervised by the Federal Ministry of Health with expertise in vaccines and biomedicines. It is a WHO collaborating centre. 
Syndrome identification A number of COVID‑19 vaccines began to be approved and become available at scale in December 2020, with vaccinations ramping up from the beginning of 2021, among them the Oxford–AstraZeneca COVID‑19 vaccine, based on an adenovirus vector and internally termed AZD1222. On 11 March 2021, the EMA issued a statement noting Denmark had suspended AZD1222 vaccinations after a vaccinated patient died with blood clots. While noting there had been reports of other vaccinated people having blood clots and that its safety committee was already reviewing such cases, it stated that the number of thromboembolic events in vaccinated people was no higher than in the general population. On 19 March 2021, the World Health Organization (WHO) Global Advisory Committee on Vaccine Safety issued a statement on safety signals concerning thromboembolic events and thrombocytopenia related to AZD1222; following review of the available data, it concluded that AZD1222 has "a positive benefit-risk profile, with tremendous potential to prevent infections and reduce deaths across the world". In its safety update of 29 March 2021, the EMA indicated it had initiated investigations into the very rare cases of specific embolic and thrombotic events in combination with thrombocytopenia (low levels of blood platelets) and related bleeding, including disseminated intravascular coagulation and cerebral venous sinus thrombosis (CVST), noting any link with AZD1222 was not proven but could not be excluded. 
The EMA also initiated an assessment for all COVID‑19 vaccines used in the EU for immune thrombocytopenia (ITP), described as low blood platelet levels that could lead to bruising and bleeding, as a possible side effect, whilst also stating that up to this point no link with any COVID‑19 vaccine had been established. On 7 April 2021, the EMA determined that unusual blood clots with low blood platelets should be listed as very rare side effects of AZD1222, with the WHO and the UK MHRA issuing generally similar statements on the same day. None of the agencies found a confirmed causal link between the vaccine and these incidents at the time, but they listed them out of an abundance of caution. Highlights of the minutes of the EMA's Pharmacovigilance Risk Assessment Committee (PRAC) meeting concluding 9 April 2021 indicated it was also investigating four cases of unusual blood clots with low blood platelets, including one death, amongst people who had taken the Janssen COVID‑19 vaccine. The Janssen vaccine is approved but not yet deployed in the EU, though vaccinations are in progress in the US. PRAC has determined that it is not clear if there is a causal association. Should regulatory action prove necessary, PRAC has indicated the likely outcome would be an update to the product information characteristics. Vaccination campaign responses Early reports of the events of concern seemed to indicate that the presentation rate for the specific blood clots of concern might be higher for younger women, but the UK Medicines and Healthcare products Regulatory Agency (MHRA) found examples across all genders and ages, with its data skewing towards these specific blood clots being more prevalent in younger AZD1222-vaccinated persons. The WHO has continued to stress that the administration of vaccines is based on risk-versus-benefit analysis. 
Some variables that may be factored into such analysis include the risk to an individual of catching COVID‑19, which relates to the infection rate in that area, and the benefit to that individual if vaccinated and exposed to COVID‑19, which varies with age, versus whatever the risks of vaccination are to that individual. Regional and national responses The advisory panel for the government of Ontario, Canada has recommended against the use of heparin for management of thrombosis after vaccination until more is known. In response to the concerns over the adverse effects relating to rare blood clotting types, Germany has suspended use of AZD1222 in those under 60 years of age; this contrasts with an earlier period when it had suspended use of AZD1222 in over-65s due to limited data at that time on the vaccine's efficacy in that age group. Following a few days of suspended use of AZD1222, the Ministry of Health, Welfare and Sport of the Netherlands decided to continue administering the vaccine only to persons above the age of 60. On 8 April 2021, the Australian Technical Advisory Group on Immunisation (ATAGI) advised the Australian Government that the Pfizer COVID‑19 vaccine is recommended over AZD1222 for adults aged under 50 years. The advice is "based on the increasing risk of severe outcomes from COVID‑19 in older adults (and hence a higher benefit from vaccination) and a potentially increased risk of thrombosis with thrombocytopenia following AstraZeneca vaccination in those under 50 years." AZD1222 is still recommended by ATAGI for people over 50, and those under 50 who have already had their first dose with no ill effects. In the state of Victoria, there were reports of some people aged under 50 being turned away from vaccination centres despite having confirmed appointments. It is understood a special consent process will be developed by the Australian Government for people under 50 who choose to receive the AZD1222 vaccine. 
Studies A study convened by a group of British hematologists on 19 March 2021, just two days after the acknowledgement of the condition, published its findings in The New England Journal of Medicine, establishing case definition criteria. The study included 294 participants who presented with symptoms of thrombocytopenia and thrombosis after receipt of the first dose of the Oxford–AstraZeneca COVID‑19 vaccine, showing an independent association between baseline platelet count and the presence of intracranial hemorrhage. The study established that 85% of the participants affected by the condition were younger than 60 years, and that those participants with a history of thrombosis or prothrombotic disorders did not appear to be at increased risk. The study showed an overall mortality rate of 22% and set out plans for additional research to determine the genetic factors that may increase risk of the condition and identify potential therapeutic agents. References Notes Footnotes Further reading Aleem A, Nadeem AJ (July 2021). "Coronavirus (COVID-19) Vaccine-Induced Immune Thrombotic Thrombocytopenia (VITT)". StatPearls. PMID 34033367. Arepally GM, Ortel TL (July 2021). "Vaccine-induced immune thrombotic thrombocytopenia: what we know and do not know". Blood. 138 (4): 293–8. doi:10.1182/blood.2021012152. PMC 8172307. PMID 34323940. Iba T, Levy JH, Warkentin TE (January 2022). "Recognizing Vaccine-Induced Immune Thrombotic Thrombocytopenia". Crit Care Med. 50 (1): e80–e86. doi:10.1097/CCM.0000000000005211. PMC 8670081. PMID 34259661. Schultz NH, Sørvoll IH, Michelsen AE, Munthe LA, Lund-Johansen F, Ahlen MT, et al. (June 2021). "Thrombosis and Thrombocytopenia after ChAdOx1 nCoV-19 Vaccination". N Engl J Med. 384 (22): 2124–30. doi:10.1056/NEJMoa2104882. PMC 8112568. PMID 33835768. Scully M, Singh D, Lown R, Poles A, Solomon T, Levi M, et al. (June 2021). "Pathologic Antibodies to Platelet Factor 4 after ChAdOx1 nCoV-19 Vaccination". N Engl J Med. 384 (23): 2202–11. 
doi:10.1056/NEJMoa2105385. PMC 8112532. PMID 33861525. External links EMA press conference 7th April on YouTube
You function as a medical informant. Please provide in-depth yet accessible descriptions of medical terms, suitable for a broad audience.
I've encountered the term 'Yao syndrome' while reading about medical topics. What does it refer to exactly?
Yao syndrome (YAOS) (formerly called NOD2-associated autoinflammatory disease) is an autoinflammatory syndrome involving episodes of fever and abnormal inflammation affecting many parts of the body, particularly the skin, joints, and gastrointestinal system. Signs and symptoms Arthralgia (Arthritis) Fever Diarrhea Abdominal Pain Keratoconjunctivitis Sicca (Dry eyes) Pleurisy Idiopathic Pericarditis Xerostomia (Dry mouth) Erythematous Plaques Dermatitis Diagnostic criteria Yao syndrome is diagnosed if 2 major criteria, at least one minor criterion, the molecular criterion, and the exclusion criteria are fulfilled. Treatment A study was conducted to determine the effectiveness of the Novartis pharmaceutical drug canakinumab. In this study, canakinumab was effective in patients with YAOS, and thus a clinical trial of canakinumab may be warranted as a therapeutic option for this disease. Inheritance Yao syndrome inheritance is classified as multifactorial. References External links MedGen-Yao Syndrome Genetics Home Reference-Yao Syndrome
You are an expert in medical terminology. Explain medical terms in a way that is both precise and easy to understand for non-medical audiences.
I'd like to learn more about the medical term 'Lurbinectedin.' Can you provide some details?
Lurbinectedin, sold under the brand name Zepzelca, is a medication used for the treatment of small cell lung cancer. The most common side effects include leukopenia, lymphopenia, fatigue, anemia, neutropenia, increased creatinine, increased alanine aminotransferase, increased glucose, thrombocytopenia, nausea, decreased appetite, musculoskeletal pain, decreased albumin, constipation, dyspnea, decreased sodium, increased aspartate aminotransferase, vomiting, cough, decreased magnesium and diarrhea. Lurbinectedin is a synthetic tetrahydropyrrolo[4,3,2-de]quinolin-8(1H)-one alkaloid analogue with potential antineoplastic activity. Lurbinectedin covalently binds to residues lying in the minor groove of DNA, which may result in delayed progression through S phase, cell cycle arrest in the G2/M phase and cell death. Lurbinectedin was approved for medical use in the United States in June 2020. Medical uses Lurbinectedin is indicated for the treatment of adults with metastatic small cell lung cancer (SCLC) with disease progression on or after platinum-based chemotherapy. Structure Lurbinectedin is structurally similar to trabectedin, although the tetrahydroisoquinoline present in trabectedin is replaced with a tetrahydro β-carboline, which enables lurbinectedin to exhibit increased antitumor activity compared with trabectedin. Synthesis The synthesis of lurbinectedin starts from small, common starting materials and requires twenty-six individual steps to produce the drug, with an overall yield of 1.6%. Mechanism of action According to PharmaMar, lurbinectedin inhibits the active transcription of protein-coding genes. This has two consequences: it promotes tumor cell death and normalizes the tumor microenvironment. Active transcription is the process by which information contained in the DNA sequence is transferred to an RNA molecule. This activity depends on an enzyme called RNA polymerase II. 
Lurbinectedin inhibits transcription through a very precise mechanism. Firstly, lurbinectedin binds to specific DNA sequences. It is at these precise spots that RNA polymerase II, as it slides along the DNA to produce RNA, is blocked and degraded by lurbinectedin. Lurbinectedin also has an important role in the tumor microenvironment. Tumor cells act upon macrophages to prevent them from behaving as activators of the immune system. Macrophages can contribute to tumor growth and progression by promoting tumor cell proliferation and invasion, fostering tumor angiogenesis and suppressing antitumor immune cells. Attracted to oxygen-starved (hypoxic) and necrotic tumor cells, they promote chronic inflammation. Thus, not only do macrophages inhibit the immune system, preventing the destruction of tumor cells, but they also create tumor tissue that allows tumor growth. However, macrophages associated with tumors are cells that are addicted to the transcription process. Lurbinectedin acts specifically on tumor-associated macrophages in two ways: firstly, by inhibiting their transcription, which leads to cell death, and secondly, by inhibiting the production of tumor growth factors. In this way, lurbinectedin normalizes the tumor microenvironment. History Lurbinectedin was approved for medical use in the United States in June 2020. Efficacy was demonstrated in the PM1183-B-005-14 trial (Study B-005; NCT02454972), a multicenter open-label, multi-cohort study enrolling 105 participants with metastatic SCLC who had disease progression on or after platinum-based chemotherapy. Participants received lurbinectedin 3.2 mg/m2 by intravenous infusion every 21 days until disease progression or unacceptable toxicity. The trial was conducted at 26 sites in the United States, Great Britain, Belgium, France, Italy, Spain and the Czech Republic. The U.S. 
Food and Drug Administration (FDA) granted the application for lurbinectedin priority review and orphan drug designations and granted the approval of Zepzelca to Pharma Mar S.A. Research Clinical trials Lurbinectedin can be used as monotherapy in the treatment of SCLC. Lurbinectedin monotherapy demonstrated the following clinical results in relapsed extensive-stage SCLC: for sensitive disease (chemotherapy-free interval of ≥ 90 days), the overall response rate (ORR) was 46.6%, with a 79.3% disease control rate and a median overall survival (OS) of 15.2 months. For resistant disease (chemotherapy-free interval of < 90 days), the overall response rate (ORR) was 21.3%, with a 46.8% disease control rate and a median overall survival (OS) of 5.1 months. Lurbinectedin is also being investigated in combination with doxorubicin as second-line therapy in a randomized Phase III trial. While overall survival in this trial is not yet known, response rates at second line were 91.7% in sensitive disease with a median progression-free survival of 5.8 months, and 33.3% in resistant disease with a median progression-free survival of 3.5 months. Lurbinectedin is available in the U.S. under an Expanded Access Program (EAP). References External links "Lurbinectedin". Drug Information Portal. U.S. National Library of Medicine. "Lurbinectedin". NCI Dictionary of Cancer Terms. National Cancer Institute. Clinical trial number NCT02454972 for "Clinical Trial of Lurbinectedin (PM01183) in Selected Advanced Solid Tumors" at ClinicalTrials.gov
You are a medical advisor. Your role is to break down medical terminology into clear, detailed explanations for a general audience.
What is the significance of the term 'Plastic bronchitis' in the medical field?
Plastic bronchitis (PB) is a disorder in which branching casts of the airways are expectorated. PB is not a single disease with a defined mechanism that explains the cast formation in all conditions. Examples of diseases associated with expectoration of casts, and which sometimes are labeled PB include tuberculosis, atypical mycobacterial disease, allergic bronchopulmonary aspergillosis, and asthma. When casts are very large with many branches, an abnormal communication or leakage of lymphatic fluid into the airway is often the cause. This entity is termed lymphatic plastic bronchitis (LPB). LPB is a lymphatic flow disorder characterized by the recurrent formation of branching, rubbery bronchial casts composed primarily of proteinaceous and sometimes chylous material and lymphocytes. Lymphatic fluids deposited into the airspaces become gelatinous as they cool, forming large string cheese-like casts of the airways, which can obstruct airflow. Attempts to expectorate casts can be quite frightening, leading to fears of asphyxiation. Signs and symptoms The clinical presentation of plastic bronchitis beyond expectoration of casts includes a productive cough, dyspnea, fever and wheezing. Focal wheezing is a characteristic, if not specific, physical examination finding. If the casts completely obstruct the airway, breath sounds will be decreased and dullness will be present with percussion. With partial obstruction, a “fan sound” or “flag flapping” sound can be heard during auscultation. Bronchial casts can sometimes fill the airways of almost an entire lung, and present as an acute, life-threatening emergency. Pathology The majority of PB cases are associated with an underlying disease. 
Several systemic illnesses have been associated with plastic bronchitis: Cardiac: constrictive pericarditis, congenital heart disease Pulmonary: asthma, allergic bronchopulmonary aspergillosis, aspergillosis, bronchiectasis, cystic fibrosis, tuberculosis, pneumonia, and bronchocentric granulomatosis Disorders of lymphatic drainage: lymphangiectasia, lymphangiomatosis Miscellaneous: acute chest syndrome/sickle cell disease, amyloidosis, rheumatoid arthritis, membranous colitis, inhaled irritants, neoplastic (lymphoma). The most common form of plastic bronchitis follows cardiac surgery for congenital heart disease, especially the Fontan procedure. Systemic blood flow is diverted to pulmonary flow, elevating pressures in the pulmonary venous system and promoting leaks of proteinaceous and lipid-rich fluids from the lymphatics into the bronchial tree. Diagnosis The diagnosis of plastic bronchitis is confirmed by recovery of casts that have been coughed up or visualized during a bronchoscopy. There is no specific cytologic, pathologic or laboratory test that is diagnostic for casts due to lymphatic PB. Imaging Simple chest X-rays may reveal collapse due to airway obstruction. The contralateral lung may be hyperinflated. Casts can be visualized within the major airways using computerized axial tomography scans. Heavy T2-weighted MRI and, as appropriate, intranodal lymphangiogram and/or dynamic contrast-enhanced MR lymphangiography may be useful for identifying pathological lymphatic tissue or lymphatic flow. Management Acute therapy for PB is often focused on removal or facilitated expectoration of the casts. This is followed by short- and long-term efforts to identify and remediate the underlying condition resulting in the excessive airway leakage or inflammation that is causing the casts to form. PB can present as a life-threatening emergency when the casts obstruct the major airways, resulting in acute respiratory distress. 
Intervention by a skilled physician experienced with foreign body removal from the lungs is essential. Evaluation by means of bronchoscopy can be difficult and time consuming and is best performed under general anesthesia. Casts can be removed mechanically by bronchoscopy or physical therapy. High-frequency chest wall oscillation can also be used to vibrate the chest wall at a high frequency to try to loosen and thin the casts. Inhaled therapy using bronchodilators, corticosteroids or mucolytics can be used to try to disrupt the cast formation. Guaifenesin syrup or tablets can assist in loosening existing casts for expectoration. Recently, heavy T2-weighted MRI has revealed that occult lymphatic anomalies that represent developmental remnants or subclinical GLA are present in adults who present with expectoration of large multiantennary, branching casts. Intranodal lymphangiogram and dynamic contrast-enhanced MR lymphangiography have been used to more precisely image the leaks, and in the small number of patients who have been treated to date, embolization of the thoracic duct has been highly successful in controlling cast formation. Cannulation of the thoracic duct followed by embolization should be considered in those patients who are shown to have leakage of lymphatic fluid into the airway. Medications Therapeutic interventions with medium-chain triglyceride-enriched low-fat diets, intratracheal heparin, inhaled tissue plasminogen activator, and steroids have also been reported and have met with variable success. Expectorants such as guaifenesin thin secretions and lubricate the airways, allowing loosening and possible self-expulsion of casts. Inhaled mucolytics: Potassium iodide and acetylcysteine inhaled therapy are often used to help the patient cough up the casts by breaking down the thick mucus formations. Inhaled and oral steroids: If PB is associated with asthma or an infection, inhaled and oral steroids have been shown to be effective. 
Prognosis Patients with plastic bronchitis that is caused by a comorbid condition generally have a good prognosis once the underlying disease is treated. Epidemiology To date, about 420 cases have been reported in the medical literature. Given its unusual nature, the true prevalence of PB is unknown, and it is likely that many patients are undiagnosed. PB affects patients of all age groups and genders. == References ==
You act as a medical communicator. Your goal is to translate medical terms into comprehensive, yet straightforward language for all readers.
I'd like to learn more about the medical term 'Subareolar abscess.' Can you provide some details?
Also called Zuska's disease (only in nonpuerperal cases), subareolar abscess is a subcutaneous abscess of the breast tissue beneath the areola of the nipple. It is a frequently aseptic inflammation and has been associated with squamous metaplasia of lactiferous ducts. The term is usually understood to include breast abscesses located in the retroareolar region or the periareolar region but not those located in the periphery of the breast. Subareolar abscess can develop either during lactation or extrapuerperally; the abscess often flares up and down, with repeated fistulation. Pathophysiology 90% of cases are smokers, however only a very small fraction of smokers appear to develop this lesion. It has been speculated that either the direct toxic effect or hormonal changes related to smoking could cause squamous metaplasia of lactiferous ducts. It is not well established whether the lesion regresses after smoking cessation. Extrapuerperal cases are often associated with hyperprolactinemia or with thyroid problems. Diabetes mellitus may also be a contributing factor in nonpuerperal breast abscess. Treatment Treatment is problematic unless an underlying endocrine disorder can be successfully diagnosed and treated. A study by Goepel and Pahnke provided indications that the inflammation should be controlled by bromocriptine even in the absence of hyperprolactinemia. Antibiotic treatment is given in case of acute inflammation. However, this alone is rarely effective, and the treatment of a subareolar abscess is primarily surgical. In case of an acute abscess, incision and drainage are performed, followed by antibiotics treatment. However, in contrast to peripheral breast abscess, which often resolves after antibiotics and incision and drainage, subareolar breast abscess has a tendency to recur, often accompanied by the formation of fistulas leading from the inflammation area to the skin surface. 
In many cases, in particular in patients with recurrent subareolar abscess, the excision of the affected lactiferous ducts is indicated, together with the excision of any chronic abscess or fistula. This can be performed using a radial or circumareolar incision. There is no universal agreement on what should be the standard way of treating the condition. In a recent review article, antibiotics treatment, ultrasound evaluation and, if fluid is present, ultrasound-guided fine needle aspiration of the abscess with an 18 gauge needle, under saline lavage until clear, has been suggested as the initial line of treatment for breast abscess in puerperal and non-puerperal cases including central (subareolar) abscess (see breast abscess for details). Elsewhere, it has been stated that treatment of subareolar abscess is unlikely to work if it does not address the ducts as such. Duct resection has traditionally been used to treat the condition; the original Hadfield procedure has been improved many times, but the long-term success rate remains poor even for radical surgery. Petersen even suggests that damage caused by previous surgery is a frequent cause of subareolar abscesses. Goepel and Pahnke and other authors recommend performing surgeries only with concomitant bromocriptine treatment. Squamous metaplasia of lactiferous ducts Squamous metaplasia of lactiferous ducts - abbreviated SMOLD - is a change where the normal double-layer cuboid epithelium of the lactiferous ducts is replaced by squamous keratinizing cell layers. The resulting epithelium is very similar to normal skin, hence some authors speak of epidermalization. SMOLD is rare in premenopausal women (possibly 0.1-3%) but more frequent (possibly up to 25%) in postmenopausal women, where it does not cause any problems at all. SMOLD appears to be a completely benign lesion and may exist without causing any symptoms. In principle it ought to be completely reversible, as the classification as metaplasia would suggest. 
Because of difficulties in observing the actual changes and the rare incidence of the lesion, this does not appear to be documented. The last section of the lactiferous ducts is always lined with squamous keratinizing epithelium, which appears to have important physiological functions. For example, the keratin forms plugs sealing the duct entry and has bacteriostatic properties. In SMOLD the keratinizing lining, which is supposed to form only the ends of the lactiferous ducts, extends deep into the ducts. SMOLD is distinct from the squamous metaplasia that may occur in papillomatous hyperplasia. It is believed to be unrelated to squamous cell carcinoma of the breast, which probably arises from different cell types. The keratin plugs (debris) produced by SMOLD have been proposed as the cause of recurrent subareolar abscesses by causing secretory stasis. The epidermalized lining also has different permeability than the normal lining, hindering resorption of glandular secretions. The resorption is necessary to dispose of stalled secretions inside the duct - and, at least equally important, it affects osmotic balance, which in turn is an important mechanism in the control of lactogenesis (this is relevant both in puerperal and nonpuerperal mastitis). While in lactating women this would appear to be a very plausible pathogenesis, there is some uncertainty about the pathogenesis in non-lactating women, where breast secretions should be a priori minimal. It appears pathologic stimulation of lactogenesis must be present as well to cause subareolar abscess, and treatment success with bromocriptine appears to confirm this, as compared to the poor success rate of the usual antibiotic and surgical treatments documented by Hanavadi et al. Further uncertainty in the relation of SMOLD and the subareolar abscess is that squamous metaplasia is very often caused by inflammatory processes. SMOLD could be the cause of the inflammation - or the result of a previous or longstanding inflammation. 
SMOLD usually affects multiple ducts and frequently (relative to its extremely low absolute prevalence) both breasts, hence it is very likely that systemic changes such as hormonal interactions are involved. At least the following factors have been considered in the aetiology of SMOLD: reactive change to chronic inflammation, systemic hormonal changes, smoking, dysregulation in beta-catenin expression, changes in retinoic acid and vitamin D metabolism or expression. Vitamin A deficiency may cause epidermalization of the ducts and squamous metaplasia and likely also contributes to infection. Vitamin A deficiency has been observed to cause squamous metaplasia in many types of epithelia. However, supplementation with vitamin A would be beneficial only in exceptional cases, because normally the local catabolism of vitamin A will be the regulating factor. Squamous metaplasia of breast epithelia is known to be more prevalent in postmenopausal women (where it does not cause any problems at all). Staurosporine, a nonspecific protein kinase C inhibitor, can induce squamous metaplasia in breast tissue, while other known PKC inhibitors did not show this effect. cAMP stimulation can also induce squamous metaplasia. Research Multiple imaging modalities may be necessary to evaluate abnormalities of the nipple-areolar complex. In two studies performed in Japan, high-resolution MRI with a microscopy coil yielding 0.137-mm in-plane resolution has been used to confirm the presence of abscesses, isolated fistulas and inflammation and to reveal their position in order to guide surgery. References Further reading Kasales CJ, Han B, Smith JS, Chetlen AL, Kaneda HJ, Shereef S (February 2014). "Nonpuerperal mastitis and subareolar abscess of the breast". AJR. American Journal of Roentgenology (review). 202 (2): W133–9. doi:10.2214/AJR.13.10551. PMID 24450694. S2CID 27952386.
You are an expert in medical terminology. Explain medical terms in a way that is both precise and easy to understand for non-medical audiences.
I'm not familiar with the medical term 'Blistering distal dactylitis.' Could you provide some insights?
Blistering distal dactylitis is a cutaneous condition characterized by tense superficial bullae occurring on a tender erythematous base over the volar fat pad of the phalanx of a finger or thumb. The most common organism responsible for this is Beta-hemolytic Streptococci. See also List of cutaneous conditions == References ==
You function as a medical informant. Please provide in-depth yet accessible descriptions of medical terms, suitable for a broad audience.
I'm trying to understand 'Bicalutamide' within a medical context. Could you shed some light on it?
Bicalutamide, sold under the brand name Casodex among others, is an antiandrogen medication that is primarily used to treat prostate cancer. It is typically used together with a gonadotropin-releasing hormone (GnRH) analogue or surgical removal of the testicles to treat advanced prostate cancer. To a lesser extent, it is used for early prostate cancer at a higher dosage as a monotherapy without castration. Bicalutamide is also used to treat excessive hair growth and scalp hair loss in women, as a component of feminizing hormone therapy for transgender women, to treat early puberty in boys, and to prevent overly long-lasting erections in men. It is taken by mouth. Common side effects in men include breast enlargement, breast tenderness, and hot flashes. Other side effects in men include feminization and sexual dysfunction. Some side effects like breast changes and feminization are minimal when combined with castration. While the medication appears to produce few side effects in women, its use in cisgender women is not recommended by the Food and Drug Administration (FDA) at this time. Use during pregnancy may harm the baby. Bicalutamide causes abnormal liver changes necessitating discontinuation in around 1% of people. Rarely, it has been associated with cases of liver damage, lung toxicity, and sensitivity to light. Although the risk of adverse liver changes is small, monitoring of liver function is recommended during treatment. Bicalutamide is a member of the nonsteroidal antiandrogen (NSAA) group of medications. It works by selectively blocking the androgen receptor (AR), the biological target of the androgen sex hormones testosterone and dihydrotestosterone (DHT). It does not lower androgen levels. The medication can have some estrogen-like effects in men when used as a monotherapy due to increased estradiol levels. Bicalutamide is well-absorbed, and its absorption is not affected by food. The elimination half-life of the medication is around one week.
It shows peripheral selectivity in animals, but crosses the blood–brain barrier and affects both the body and brain in humans. Bicalutamide was patented in 1982 and approved for medical use in 1995. It is on the World Health Organization's List of Essential Medicines. Bicalutamide is available as a generic medication. The drug is sold in more than 80 countries, including most developed countries. It is the most widely used antiandrogen in the treatment of prostate cancer, and has been prescribed to millions of men with the disease. Medical uses Bicalutamide is approved for and mainly used in the following indications: metastatic prostate cancer (mPC) in men, in combination with a gonadotropin-releasing hormone (GnRH) analogue or surgical castration, at 50 mg/day; and locally advanced prostate cancer (LAPC) in men, as a monotherapy at 150 mg/day (not approved for this use in the United States). In Japan, bicalutamide is uniquely used at a dosage of 80 mg/day both in combination with castration and as a monotherapy in the treatment of prostate cancer. Bicalutamide is also employed for the following off-label (non-approved) indications: to reduce the effects of the testosterone flare at the initiation of GnRH agonist therapy in men; androgen-dependent skin and hair conditions such as acne, seborrhea, excessive hair growth, and scalp hair loss in women, as well as high testosterone levels due to polycystic ovary syndrome (PCOS) in women, at 25 to 50 mg/day, generally in combination with a birth control pill; feminizing hormone therapy for transgender women, in combination with an estrogen, usually at 50 mg/day; peripheral precocious puberty in boys at 12.5 to 100 mg/day in combination with an aromatase inhibitor like anastrozole, especially for familial male-limited precocious puberty (testotoxicosis); and overly long-lasting erections in men at 50 mg per week to 50 mg every other day. The medication has been suggested for but has uncertain effectiveness in the following indication:
Hypersexuality and paraphilias, particularly in combination with chemical castration. For more information on these uses, see the medical uses of bicalutamide article. Available forms Bicalutamide is available for the treatment of prostate cancer in most developed countries, including over 80 countries worldwide. It is available in 50 mg, 80 mg (in Japan), and 150 mg tablets for oral administration. The drug is registered for use as a 150 mg/day monotherapy for the treatment of LAPC in at least 55 countries, with the U.S. being a notable exception where it is registered only for use at a dosage of 50 mg/day in combination with castration. No other formulations or routes of administration are available or used. All formulations of bicalutamide are specifically indicated for the treatment of prostate cancer alone or in combination with surgical or medical castration. A combined formulation of bicalutamide and the GnRH agonist goserelin, in which goserelin is provided as a subcutaneous implant for injection and bicalutamide is included as 50 mg tablets for oral ingestion, is marketed in Australia and New Zealand under the brand name ZolaCos CP (Zoladex–Cosudex Combination Pack). Contraindications Bicalutamide is pregnancy category X, or "contraindicated in pregnancy", in the U.S., and pregnancy category D, the second most restricted rating, in Australia. As such, it is contraindicated in women during pregnancy, and women who are sexually active and who can or may become pregnant are strongly recommended to take bicalutamide only in combination with adequate contraception.
It is unknown whether bicalutamide is excreted in breast milk, but many drugs are excreted in breast milk, and for this reason, bicalutamide treatment is similarly not recommended while breastfeeding. In individuals with severe, though not mild-to-moderate, hepatic impairment, there is evidence that the elimination of bicalutamide is slowed, and hence, caution may be warranted in these patients as circulating levels of bicalutamide may be increased. In severe hepatic impairment, the elimination half-life of the active (R)-enantiomer of bicalutamide is increased by about 1.75-fold (76% increase; elimination half-lives of 5.9 and 10.4 days for normal and impaired patients, respectively). The elimination half-life of bicalutamide is unchanged in renal impairment. Side effects The side effect profile of bicalutamide is highly dependent on sex; that is, on whether the person is male or female. In men, due to androgen deprivation, a variety of side effects of varying severity may occur during bicalutamide treatment, with breast pain/tenderness and gynecomastia (breast development/enlargement) being the most common. Gynecomastia occurs in up to 80% of men treated with bicalutamide monotherapy, though it is of only mild-to-moderate severity in more than 90% of affected men. In addition to breast changes, physical feminization and demasculinization in general, including reduced body hair growth, decreased muscle mass and strength, feminine changes in fat mass and distribution, reduced penile length, and decreased semen/ejaculate volume, may occur in men. Other side effects that have been observed in men and that are similarly related to androgen deprivation include hot flashes, sexual dysfunction (e.g., loss of libido, erectile dysfunction), depression, fatigue, weakness, and anemia. However, most men have preserved sexual function with bicalutamide monotherapy.
In females, due to the minimal biological importance of androgens in this sex, the side effects of pure antiandrogens or NSAAs are few, and bicalutamide has been found to be very well tolerated. General side effects of bicalutamide that may occur in either sex include diarrhea, constipation, abdominal pain, nausea, dry skin, itching, and rash. The drug is well-tolerated at dosages higher than 50 mg/day, with rare additional side effects. Bicalutamide has been associated with abnormal liver function tests such as elevated liver enzymes. In the Early Prostate Cancer (EPC) clinical programme of bicalutamide for early prostate cancer, the rate of abnormal liver function tests with bicalutamide monotherapy was 3.4% relative to 1.9% for placebo. Hepatic changes that have necessitated discontinuation of bicalutamide, such as marked increases in liver enzymes or hepatitis, have occurred in approximately 0.3 to 1.4% of men in clinical trials. Elevated liver enzymes with bicalutamide usually occur within the first 3 to 6 months of treatment. Monitoring of liver function during treatment is recommended, particularly in the first few months. In men of advanced age with prostate cancer, bicalutamide monotherapy has been associated with an increase in non-prostate-cancer mortality, in part due to an increase in the rate of heart failure. These mortality-related effects are thought to be a consequence of androgen deprivation, rather than due to a specific drug-related toxicity of bicalutamide. There are 10 published case reports of liver toxicity associated with bicalutamide as of 2021. Death occurred in 2 of these cases. In all reported cases, the onset of toxicity was within the first 6 months of treatment. Symptoms that may indicate liver dysfunction include nausea, vomiting, abdominal pain, fatigue, anorexia, "flu-like" symptoms, dark urine, and jaundice.
There are also published case reports of interstitial pneumonitis and eosinophilic lung disease associated with bicalutamide. Interstitial pneumonitis can potentially progress to pulmonary fibrosis and may be fatal. Symptoms that may indicate lung dysfunction include dyspnea (difficulty breathing or shortness of breath), cough, and pharyngitis (inflammation of the pharynx, resulting in sore throat). Both hepatotoxicity and interstitial pneumonitis are said to be very rare events with bicalutamide. A few cases of photosensitivity have been reported with bicalutamide. Hypersensitivity reactions (drug allergy) like angioedema and hives have also uncommonly been reported in association with bicalutamide. Because it is an antiandrogen, bicalutamide has a theoretical risk of birth defects like ambiguous genitalia and brain feminization in male fetuses. Due to its teratogenic capacity, contraception should be used in women taking bicalutamide who are fertile and sexually active. Comparison The side effect profile of bicalutamide in men and women differs from that of other antiandrogens and is considered favorable in comparison. Relative to GnRH analogues and the steroidal antiandrogen (SAA) cyproterone acetate (CPA), bicalutamide monotherapy has a much lower incidence and severity of hot flashes and sexual dysfunction. In addition, unlike GnRH analogues and CPA, bicalutamide monotherapy is not associated with decreased bone mineral density or osteoporosis. Conversely, bicalutamide monotherapy is associated with much higher rates of breast tenderness, gynecomastia, and feminization in men than GnRH analogues and CPA. However, gynecomastia with bicalutamide is rarely severe, and discontinuation rates due to this side effect are fairly low.
These differences in side effects between bicalutamide monotherapy, GnRH analogues, and CPA are attributed to the fact that whereas GnRH analogues and CPA suppress estrogen production, bicalutamide monotherapy does not lower estrogen levels and in fact actually increases them. Bicalutamide does not share the risk of neuropsychiatric side effects like depression and fatigue, or of cardiovascular side effects like coagulation changes, blood clots, fluid retention, ischemic cardiomyopathy, and adverse serum lipid changes, that CPA has been associated with. It has a much lower risk of hepatotoxicity than flutamide and CPA and of interstitial pneumonitis than nilutamide. The drug also does not share the unique risks of diarrhea with flutamide and of nausea, vomiting, visual disturbances, and alcohol intolerance with nilutamide. Unlike enzalutamide, bicalutamide is not associated with seizures or related central side effects like anxiety and insomnia. However, although the risk of adverse liver changes with bicalutamide is low, enzalutamide differs from bicalutamide in having no known risk of elevated liver enzymes or hepatotoxicity. In contrast to the SAA spironolactone, bicalutamide does not have antimineralocorticoid effects, and hence is not associated with hyperkalemia, urinary frequency, dehydration, hypotension, or other related side effects. In women, unlike CPA and spironolactone, bicalutamide does not produce menstrual irregularity or amenorrhea and does not interfere with ovulation or fertility. Overdose A single oral dose of bicalutamide in humans that results in symptoms of overdose or that is considered to be life-threatening has not been established. Dosages of up to 600 mg/day have been well tolerated in clinical trials, and it is notable that there is a saturation of absorption with bicalutamide such that circulating levels of its active (R)-enantiomer do not further increase above a dosage of 300 mg/day.
Overdose is considered unlikely to be life-threatening with bicalutamide or other first-generation NSAAs (i.e., flutamide and nilutamide). A massive overdose of nilutamide (13 grams, or 43 times the normal maximum 300 mg/day clinical dosage) in a 79-year-old man was uneventful, producing no clinical signs, symptoms, or toxicity. There is no specific antidote for bicalutamide or NSAA overdose, and treatment should be based on symptoms, if any are present. Interactions Bicalutamide is almost exclusively metabolized by CYP3A4. As such, its levels in the body may be altered by inhibitors and inducers of CYP3A4. However, in spite of the fact that bicalutamide is metabolized by CYP3A4, there is no evidence of clinically significant drug interactions when bicalutamide at a dosage of 150 mg/day or less is co-administered with drugs that inhibit or induce cytochrome P450 enzyme activity. In vitro studies suggest that bicalutamide may be able to inhibit CYP3A4 and, to a lesser extent, CYP2C9, CYP2C19, and CYP2D6. Conversely, animal studies suggest that bicalutamide may induce cytochrome P450 enzymes. In a clinical study, bicalutamide co-administered with the CYP3A4 substrate midazolam caused only a small and statistically non-significant increase in midazolam levels (+27%), presumably due to CYP3A4 inhibition. However, this was well below the increases with potent CYP3A4 inhibitors like ketoconazole (+1500%), itraconazole (+1000%), and erythromycin (+350%), and is considered to not be clinically important. There is no indication of clinically significant enzyme inhibition or induction with bicalutamide at doses of 150 mg/day or below. Because bicalutamide circulates at relatively high concentrations and is highly protein-bound, it has the potential to displace other highly protein-bound drugs like warfarin, phenytoin, theophylline, and aspirin from plasma binding proteins.
This could, in turn, result in increased free concentrations of such drugs and increased effects and/or side effects, potentially necessitating dosage adjustments. Bicalutamide has specifically been found to displace coumarin anticoagulants like warfarin from their plasma binding proteins (namely albumin) in vitro, potentially resulting in an increased anticoagulant effect, and for this reason, close monitoring of prothrombin time and dosage adjustment as necessary are recommended when bicalutamide is used in combination with these drugs. However, in spite of this, no conclusive evidence of an interaction between bicalutamide and other drugs was found in clinical trials of nearly 3,000 patients. Pharmacology Pharmacodynamics Antiandrogenic activity Bicalutamide acts as a highly selective competitive silent antagonist of the AR (IC50 = 159–243 nM), the major biological target of the androgen sex hormones testosterone and DHT, and hence is an antiandrogen. The activity of bicalutamide lies in the (R)-isomer. Due to its selectivity for the AR, bicalutamide does not interact importantly with other steroid hormone receptors and hence has no clinically relevant off-target hormonal activity (e.g., progestogenic, estrogenic, glucocorticoid, antimineralocorticoid). However, it has been reported that bicalutamide has weak affinity for the progesterone receptor (PR), where it is an antagonist, and hence it could have some antiprogestogenic activity. Bicalutamide does not inhibit 5α-reductase, nor is it known to inhibit other enzymes involved in androgen steroidogenesis (e.g., CYP17A1). Although it does not bind to the estrogen receptors (ERs), bicalutamide can increase estrogen levels secondarily to AR blockade when used as a monotherapy in males, and hence can have some indirect estrogenic effects in males.
Bicalutamide neither suppresses nor inhibits androgen production in the body (i.e., it does not act as an antigonadotropin or androgen steroidogenesis inhibitor or lower androgen levels) and hence exclusively mediates its antiandrogenic effects by antagonizing the AR. In addition to the classical nuclear AR, bicalutamide has been assessed at the membrane androgen receptors (mARs) and found to act as a potent antagonist of ZIP9 (IC50 = 66.3 nM), whereas it does not appear to interact with GPRC6A. The affinity of bicalutamide for the AR is relatively low, as it is approximately 30 to 100 times lower than that of DHT, which as an AR agonist is 2.5- to 10-fold as potent as testosterone in bioassays and is the main endogenous ligand of the receptor in the prostate gland. However, typical clinical dosages of bicalutamide result in circulating levels of the drug that are thousands of times higher than those of testosterone and DHT, allowing it to powerfully prevent them from binding to and activating the receptor. This is especially true in the case of surgical or medical castration, in which testosterone levels in the circulation are approximately 95% reduced and DHT levels in the prostate gland are about 50 to 60% reduced. In women, levels of testosterone are substantially lower (20- to 40-fold) than in men, so much smaller doses of bicalutamide (e.g., 25 mg/day in the hirsutism studies) are necessary. Blockade of the AR by bicalutamide in the pituitary gland and hypothalamus results in prevention of the negative feedback of androgens on the hypothalamic–pituitary–gonadal axis (HPG axis) in males and consequent disinhibition of pituitary luteinizing hormone (LH) secretion. This, in turn, results in an increase in circulating LH levels and activation of the gonadal production of testosterone and, by extension, production of estradiol.
Levels of testosterone have been found to increase 1.5- to 2-fold (59–97% increase) and levels of estradiol about 1.5- to 2.5-fold (65–146% increase) in men treated with 150 mg/day bicalutamide monotherapy. In addition to testosterone and estradiol, there are smaller increases in concentrations of DHT, sex hormone-binding globulin, and prolactin. Estradiol levels with bicalutamide monotherapy are similar to those in the low-normal premenopausal female range, while testosterone levels generally remain in the high end of the normal male range. Testosterone concentrations do not typically exceed the normal male range due to negative feedback on the HPG axis by the increased concentrations of estradiol. Bicalutamide influences the HPG axis and increases hormone levels only in men, not in women. This is due to the much lower levels of androgens in women and the lack of basal suppression of the HPG axis in this sex. As evidenced by its effectiveness in the treatment of prostate cancer and other androgen-dependent conditions, the antiandrogenic actions of bicalutamide considerably exceed any impact of the increased levels of testosterone it results in. However, the elevated levels of estradiol remain unopposed by bicalutamide and are responsible for the gynecomastia and feminizing side effects it causes in men. Although bicalutamide monotherapy increases gonadotropin and sex hormone levels in men, this will not occur if bicalutamide is combined with an antigonadotropin such as a GnRH analogue, estrogen, or progestogen, as these medications maintain negative feedback on the HPG axis. NSAA monotherapy, including with bicalutamide, shows a number of tolerability differences from methods of androgen deprivation therapy that incorporate surgical or medical castration. For example, the rates of hot flashes, depression, fatigue, and sexual dysfunction are all much higher with GnRH analogues than with NSAA monotherapy.
It is thought that this is because GnRH analogues suppress estrogen production in addition to androgen production, resulting in estrogen deficiency. In contrast, NSAA monotherapy does not decrease estrogen levels and in fact increases them, resulting in an excess of estrogens that compensates for androgen deficiency and allows for a preservation of mood, energy, and sexual function. Neurosteroids that are produced from testosterone, like 3α-androstanediol and 3β-androstanediol, which are ERβ agonists and the former a potent GABAA receptor positive allosteric modulator, may also be involved. In the specific case of sexual dysfunction, an additional possibility for the difference is that without concomitant suppression of androgen production, blockade of the AR by bicalutamide in the brain is incomplete and insufficient to markedly influence sexual function. Under normal circumstances, bicalutamide has no capacity to activate the AR. However, in prostate cancer, mutations and overexpression of the AR can accumulate in prostate gland cells, which can convert bicalutamide from an antagonist of the AR into an agonist. This can result in paradoxical stimulation of prostate cancer growth with bicalutamide and is responsible for the phenomenon of the antiandrogen withdrawal syndrome, where antiandrogen discontinuation paradoxically slows the rate of prostate cancer growth. In transgender women, breast development is a desired effect of antiandrogen or estrogen treatment. Breast development and gynecomastia induced by bicalutamide are thought to be mediated by increased activation of the ER secondary to blockade of the AR (resulting in disinhibition of the ER in breast tissue) and increased levels of estradiol. In addition to fat deposition, connective tissue growth, and ductal development, bicalutamide has been found to produce moderate lobuloalveolar development of the breasts.
However, full lobuloalveolar maturation necessary for lactation and breastfeeding will not occur without progestogen treatment. Bicalutamide monotherapy seems to have minimal effect on testicular spermatogenesis, testicular ultrastructure, and certain aspects of male fertility. This seems to be because testosterone levels in the testes (where ≈95% of testosterone in males is produced) are extremely high (up to 200-fold higher than circulating levels) and only a small fraction (less than 10%) of the normal levels of testosterone in the testes is actually necessary to maintain spermatogenesis. As a result, bicalutamide appears to be unable to compete with testosterone in this one part of the body to an extent sufficient to considerably interfere with androgen signaling and function. However, while bicalutamide does not seem to be able to adversely influence testicular spermatogenesis, it may interfere with AR-dependent sperm maturation and transport outside of the testes in the epididymides and vas deferens, where androgen levels are far lower, and hence may still be able to impair male fertility. In addition, the combination of bicalutamide with other medications, such as estrogens, progestogens, and GnRH analogues, can compromise spermatogenesis due to their own adverse effects on male fertility. These medications are able to strongly suppress gonadal androgen production, which can severely impair or abolish testicular spermatogenesis, and estrogens also appear to have direct and potentially long-lasting cytotoxic effects in the testes at sufficiently high concentrations. Other activities Bicalutamide has been found to act as an inhibitor or inducer of certain cytochrome P450 enzymes, including CYP3A4, CYP2C9, CYP2C19, and CYP2D6, in preclinical research, but no evidence of this has been found in humans treated with up to 150 mg/day.
It has also been identified in vitro as a strong inhibitor of CYP27A1 (cholesterol 27-hydroxylase) and as an inhibitor of CYP46A1 (cholesterol 24-hydroxylase), but this has yet to be assessed or confirmed in vivo or in humans, and the clinical significance remains unknown. Bicalutamide has been found to be a P-glycoprotein (ABCB1) inhibitor. Like other first-generation NSAAs and enzalutamide, it has been found to act as a weak non-competitive inhibitor of GABAA receptor-mediated currents in vitro (IC50 = 5.2 μM). However, unlike enzalutamide, bicalutamide has not been found to be associated with seizures or other related adverse central effects, so the clinical relevance of this finding is uncertain. Pharmacokinetics Though its absolute bioavailability in humans is unknown, bicalutamide is known to be extensively and well-absorbed. Its absorption is not affected by food. The absorption of bicalutamide is linear at doses up to 150 mg/day and is saturable at doses above this, with no further increases in steady-state levels of bicalutamide occurring at doses above 300 mg/day. Whereas absorption of (R)-bicalutamide is slow, with levels peaking at 31 to 39 hours after a dose, (S)-bicalutamide is much more rapidly absorbed. Steady-state concentrations of the drug are reached after 4 to 12 weeks of treatment independently of dosage, with a 10- to 20-fold progressive accumulation in levels of (R)-bicalutamide. The long time to steady-state levels is the result of bicalutamide's very long elimination half-life. The tissue distribution of bicalutamide is not well-characterized. The amount of bicalutamide in semen that could potentially be transferred to a female partner during sexual intercourse is low and is not thought to be important. Based on animal studies with rats and dogs, it was thought that bicalutamide could not cross the blood–brain barrier and hence could not enter the brain. As such, it was initially thought to be a peripherally selective antiandrogen.
However, subsequent clinical studies found that this was not the case in humans, indicating species differences; bicalutamide crosses into the human brain and, in accordance, produces effects and side effects consistent with central antiandrogenic action. Bicalutamide is highly plasma protein bound (96.1% for racemic bicalutamide, 99.6% for (R)-bicalutamide) and is bound mainly to albumin, with negligible binding to SHBG and corticosteroid-binding globulin. Bicalutamide is metabolized in the liver. (R)-Bicalutamide is metabolized slowly and almost exclusively via hydroxylation by CYP3A4 into (R)-hydroxybicalutamide. This metabolite is then glucuronidated by UGT1A9. In contrast to (R)-bicalutamide, (S)-bicalutamide is metabolized rapidly and mainly by glucuronidation (without hydroxylation). None of the metabolites of bicalutamide are known to be active, and levels of the metabolites are low in plasma, where unchanged bicalutamide predominates. Due to the stereoselective metabolism of bicalutamide, (R)-bicalutamide has a far longer terminal half-life than (S)-bicalutamide, and its levels are about 10- to 20-fold higher in comparison following a single dose and 100-fold higher at steady-state. (R)-Bicalutamide has a relatively long elimination half-life of 5.8 days with a single dose and 7 to 10 days following repeated administration. Bicalutamide is eliminated in similar proportions in feces (43%) and urine (34%), while its metabolites are eliminated roughly equally in urine and bile. The drug is excreted to a substantial extent in unmetabolized form, and both bicalutamide and its metabolites are eliminated mainly as glucuronide conjugates. The glucuronide conjugates of bicalutamide and its metabolites are eliminated from the circulation rapidly, unlike unconjugated bicalutamide. The pharmacokinetics of bicalutamide are not affected by consumption of food, a person's age or body weight, renal impairment, or mild-to-moderate hepatic impairment.
However, steady-state levels of bicalutamide are higher in Japanese individuals than in white people. Chemistry Bicalutamide is a racemic mixture consisting of equal proportions of the enantiomers (R)-bicalutamide (dextrorotatory) and (S)-bicalutamide (levorotatory). Its systematic (IUPAC) name is (RS)-N-[4-cyano-3-(trifluoromethyl)phenyl]-3-[(4-fluorophenyl)sulfonyl]-2-hydroxy-2-methylpropanamide. The compound has a chemical formula of C18H14F4N2O4S, a molecular weight of 430.373 g/mol, and is a fine white to off-white powder. The acid dissociation constant (pKa) of bicalutamide is approximately 12. It is a highly lipophilic compound (log P = 2.92). At 37 °C (98.6 °F), or normal human body temperature, bicalutamide is practically insoluble in water (4.6 mg/L), acid (4.6 mg/L at pH 1), and alkali (3.7 mg/L at pH 8). In organic solvents, it is slightly soluble in chloroform and absolute ethanol, sparingly soluble in methanol, and freely soluble in acetone and tetrahydrofuran. Bicalutamide is a synthetic and nonsteroidal compound which was derived from flutamide. It is a bicyclic compound (it has two rings) and can be classified as, and has variously been referred to as, an anilide (N-phenylamide) or aniline, a diarylpropionamide, and a toluidide. Analogues First-generation NSAAs including bicalutamide, flutamide, and nilutamide are all synthetic, nonsteroidal anilide derivatives and structural analogues of each other. Bicalutamide is a diarylpropionamide, while flutamide is a monoarylpropionamide and nilutamide is a hydantoin. Bicalutamide and flutamide, though not nilutamide, can also be classified as toluidides. All three of the compounds share a common 3-trifluoromethylaniline moiety. Bicalutamide is a modification of flutamide in which a 4-fluorophenylsulfonyl moiety has been added and the nitro group on the original phenyl ring has been replaced with a cyano group.
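The molecular weight quoted above for the formula C18H14F4N2O4S can be sanity-checked by simple arithmetic: summing standard atomic masses over the element counts reproduces the 430.373 g/mol figure. A minimal sketch (the atomic-mass values are assumed from standard IUPAC tables, not taken from this article):

```python
# Arithmetic check of bicalutamide's molecular weight from its formula C18H14F4N2O4S.
# Atomic masses (g/mol) are rounded standard IUPAC values (assumed, not from the source).
atomic_mass = {"C": 12.011, "H": 1.008, "F": 18.998, "N": 14.007, "O": 15.999, "S": 32.06}
formula = {"C": 18, "H": 14, "F": 4, "N": 2, "O": 4, "S": 1}

# Sum each element's atomic mass times its count in the formula.
mw = sum(atomic_mass[element] * count for element, count in formula.items())
print(round(mw, 2))  # ≈ 430.37, consistent with the reported 430.373 g/mol
```

The small residual difference from 430.373 comes only from rounding in the atomic-mass values used.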
Topilutamide, also known as fluridil, is another NSAA that is closely related structurally to the first-generation NSAAs, but, in contrast to them, is not used in the treatment of prostate cancer and is instead used exclusively as a topical antiandrogen in the treatment of pattern hair loss. The second-generation NSAAs enzalutamide and apalutamide were derived from and are analogues of the first-generation NSAAs, while another second-generation NSAA, darolutamide, is said to be structurally distinct and chemically unrelated to the other NSAAs. Enzalutamide is a modification of bicalutamide in which the inter-ring linking chain has been altered and cyclized into a 5,5-dimethyl-4-oxo-2-thioxo imidazolidine moiety. In apalutamide, the 5,5-dimethyl groups of the imidazolidine ring of enzalutamide are cyclized to form an accessory cyclobutane ring and one of its phenyl rings is replaced with a pyridine ring. The first nonsteroidal androgens, the arylpropionamides, were discovered via structural modification of bicalutamide. Unlike bicalutamide (which is purely antiandrogenic), these compounds show tissue-selective androgenic effects and were classified as selective androgen receptor modulators (SARMs). Lead SARMs of this series included acetothiolutamide, enobosarm (ostarine; S-22), and andarine (acetamidoxolutamide or androxolutamide; S-4). They are very close to bicalutamide structurally, with the key differences being that the linker sulfone of bicalutamide has been replaced with an ether or thioether group to confer agonism of the AR and the 4-fluoro atom of the pertinent phenyl ring has been substituted with an acetamido or cyano group to eliminate reactivity at that position. A few radiolabeled derivatives of bicalutamide have been developed for potential use as radiotracers in medical imaging. They include [18F]bicalutamide, 4-[76Br]bromobicalutamide, and [76Br]bromo-thiobicalutamide.
The latter two were found to have substantially increased affinity for the AR relative to that of bicalutamide. However, none of these agents have been evaluated in humans. 5N-Bicalutamide, or 5-azabicalutamide, is a minor structural modification of bicalutamide which acts as a reversible covalent antagonist of the AR and has approximately 150-fold higher affinity for the AR and about 20-fold greater functional inhibition of the AR relative to bicalutamide. It is among the most potent AR antagonists to have been developed and is being researched for potential use in the treatment of antiandrogen-resistant prostate cancer. Synthesis A number of chemical syntheses of bicalutamide have been published in the literature. The procedure of the first published synthesis of bicalutamide can be seen below. History Bicalutamide, as well as all of the other currently marketed NSAAs, was derived from structural modification of flutamide, which itself was originally synthesized as a bacteriostatic agent in 1967 at Schering Plough Corporation and was subsequently and serendipitously found to possess antiandrogenic activity. Bicalutamide was discovered by Tucker and colleagues at Imperial Chemical Industries (ICI) in the 1980s and was selected for development from a group of over 2,000 synthesized compounds. It was first patented in 1982 and was first reported in the scientific literature in June 1987. Bicalutamide was first studied in a phase I clinical trial in 1987, and the results of the first phase II clinical trial in prostate cancer were published in 1990. The pharmaceutical division of ICI was split out into an independent company called Zeneca in 1993, and in April and May 1995, Zeneca (now AstraZeneca, after merging with Astra AB in 1999) began pre-approval marketing of bicalutamide for the treatment of prostate cancer in the U.S. It was first launched in the U.K. in May 1995, and was subsequently approved by the U.S.
FDA on 4 October 1995, for the treatment of prostate cancer at a dosage of 50 mg/day in combination with a GnRH analogue. Following its introduction for use in combination with a GnRH analogue, bicalutamide was developed as a monotherapy at a dosage of 150 mg/day for the treatment of prostate cancer, and was approved for this indication in Europe, Canada, and a number of other countries in the late 1990s and early 2000s. This application of bicalutamide was also under review by the FDA in the U.S. in 2002, but ultimately was not approved in this country. In Japan, bicalutamide is licensed at a dosage of 80 mg/day alone or in combination with a GnRH analogue for prostate cancer. The unique 80 mg dosage of bicalutamide used in Japan was selected for development in this country on the basis of observed pharmacokinetic differences with bicalutamide in Japanese men. Subsequent to negative findings of bicalutamide monotherapy for LPC in the EPC clinical programme, approval of bicalutamide for use specifically in the treatment of LPC was withdrawn in a number of countries including the U.K. (in October or November 2003) and several other European countries and Canada (in August 2003). In addition, the U.S. and Canada explicitly recommended against the use of 150 mg/day bicalutamide for this indication. The drug is effective for, remains approved for, and continues to be used in the treatment of LAPC and mPC, on the other hand. The patent protection of bicalutamide expired in the U.S. in March 2009 and the drug has subsequently been available as a generic, at greatly reduced cost. Bicalutamide was the fourth antiandrogen (and the third NSAA) to be introduced for the treatment of prostate cancer, following the SAA CPA in 1973 and the NSAAs flutamide in 1983 (1989 in the U.S.) and nilutamide in 1989 (1996 in the U.S.).
It has been followed by abiraterone acetate in 2011, enzalutamide in 2012, apalutamide in 2018, and darolutamide in 2019, and may also be followed by in-development drugs such as proxalutamide and seviteronel. Society and culture Generic names Bicalutamide is the generic name of the drug in English and French and its INN, USAN, USP, BAN, DCF, AAN, and JAN. It is also referred to as bicalutamidum in Latin, bicalutamida in Spanish and Portuguese, bicalutamid in German, and bikalutamid in Russian and other Slavic languages. The "bica-" prefix corresponds to the fact that bicalutamide is a bicyclic compound, while the "-lutamide" suffix is the standard suffix for NSAAs. Bicalutamide is also known by its former developmental code name ICI-176,334. Brand names Bicalutamide is marketed by AstraZeneca in oral tablet form under the brand names Casodex, Cosudex, Calutide, Calumid, and Kalumid in many countries. It is also marketed under the brand names Bicadex, Bical, Bicalox, Bicamide, Bicatlon, Bicusan, Binabic, Bypro, Calutol, and Ormandyl among others in various countries. The drug is sold under a large number of generic trade names such as Apo-Bicalutamide, Bicalutamide Accord, Bicalutamide Actavis, Bicalutamide Bluefish, Bicalutamide Kabi, Bicalutamide Sandoz, and Bicalutamide Teva as well. A combination formulation of bicalutamide and goserelin is marketed by AstraZeneca in Australia and New Zealand under the brand name ZolaCos-CP. Cost and generics Bicalutamide is off-patent and available as a generic. Unlike bicalutamide, the newer NSAA enzalutamide is still on-patent, and for this reason, is considerably more expensive in comparison. The patent protection of all three of the first-generation NSAAs has expired and flutamide and bicalutamide are both available as low-cost generics.
Nilutamide, on the other hand, has always been a poor third competitor to flutamide and bicalutamide and, in relation to this fact, has not been developed as a generic and is only available as brand name Nilandron, at least in the U.S. Bicalutamide is considerably less costly than GnRH analogues, which, in spite of some having been off-patent for many years, have been reported (in 2013) to typically cost US$10,000–$15,000 per year (or about US$1,000 per month) of treatment. Sales and usage Sales of bicalutamide (as Casodex) worldwide peaked at US$1.3 billion in 2007, and it has been described as a "billion-dollar-a-year" drug prior to losing its patent protection starting in 2007. In 2014, despite the introduction of abiraterone acetate in 2011 and enzalutamide in 2012, bicalutamide was still the most commonly prescribed drug in the treatment of metastatic castration-resistant prostate cancer (mCRPC). Moreover, in spite of being off-patent, bicalutamide was said to still generate a few hundred million dollars in sales per year for AstraZeneca. Total worldwide sales of brand name Casodex were approximately US$13.4 billion as of the end of 2018. Between January 2007 and December 2009 (a period of three years), 1,232,143 prescriptions of bicalutamide were dispensed in the U.S., or about 400,000 prescriptions per year. During that time, bicalutamide accounted for about 87.2% of the NSAA market, while flutamide accounted for 10.5% of it and nilutamide for 2.3% of it. Approximately 96% of bicalutamide prescriptions were written for diagnosis codes that clearly indicated neoplasm. About 1,200, or 0.1%, of bicalutamide prescriptions were dispensed to pediatric patients (age 0–16). Regulation Bicalutamide is a prescription drug. It is not specifically a controlled substance in any country and therefore is not an illegal drug. However, the manufacture, sale, distribution, and possession of prescription drugs are all still subject to legal regulation throughout the world.
Research Bicalutamide has been studied in combination with the 5α-reductase inhibitors finasteride and dutasteride in prostate cancer. It has also been studied in combination with raloxifene, a selective estrogen receptor modulator (SERM), for the treatment of prostate cancer. Bicalutamide has been tested for the treatment of AR-positive ER/PR-negative locally advanced and metastatic breast cancer in women in a phase II study for this indication. Enzalutamide is also being investigated for this type of cancer. Bicalutamide has also been studied in a phase II clinical trial for ovarian cancer in women. Bicalutamide has been studied in the treatment of benign prostatic hyperplasia (BPH) in a 24-week trial of 15 patients at a dosage of 50 mg/day. Prostate volume decreased by 26% in patients taking bicalutamide and urinary irritative symptom scores significantly decreased. Conversely, peak urine flow rates and urine pressure flow examinations were not significantly different between bicalutamide and placebo. The decrease in prostate volume achieved with bicalutamide was comparable to that observed with the 5α-reductase inhibitor finasteride, which is approved for the treatment of BPH. Breast tenderness (93%), gynecomastia (54%), and sexual dysfunction (60%) were all reported as side effects of bicalutamide at the dosage used in the study, although no treatment discontinuations due to adverse effects occurred and sexual functioning was maintained in 75% of patients. A phase III clinical trial of bicalutamide in combination with an ethinylestradiol-containing combined oral contraceptive for the treatment of severe hirsutism in women with PCOS was completed in Italy in 2017 under supervision of the Italian Medicines Agency (AIFA). Antiandrogens have been suggested for treating COVID-19 in men and, as of May 2020, high-dose bicalutamide is in a phase II clinical trial for this purpose.
Veterinary use Bicalutamide may be used to treat hyperandrogenism and associated benign prostatic hyperplasia secondary to hyperadrenocorticism (caused by excessive adrenal androgens) in male ferrets. However, it has not been formally assessed in controlled studies for this purpose. See also Comparison of bicalutamide with other antiandrogens References Further reading External links "Bicalutamide". Drug Information Portal. U.S. National Library of Medicine.
You serve as a medical tutor. Your objective is to demystify medical terms, providing thorough explanations that cater to various levels of medical knowledge.
What does the medical term 'Prucalopride' encompass?
Prucalopride, sold under the brand names Resolor and Motegrity among others, is a drug acting as a selective, high-affinity 5-HT4 receptor agonist which targets the impaired motility associated with chronic constipation, thus normalizing bowel movements. Prucalopride was approved for medical use in the European Union in 2009, in Canada in 2011, in Israel in 2014, and in the United States in December 2018. The drug has also been tested for the treatment of chronic intestinal pseudo-obstruction. Medical uses The primary measure of efficacy in the clinical trials is three or more spontaneous complete bowel movements per week; a secondary measure is an increase of at least one complete spontaneous bowel movement per week. Further measures are improvements in PAC-QOL (a quality of life measure) and PAC-SYM (a range of stool, abdominal, and rectal symptoms associated with chronic constipation). Infrequent bowel movements, bloating, straining, abdominal pain, and defecation urge with inability to evacuate can be severe symptoms, significantly affecting quality of life. In three large clinical trials, 12 weeks of treatment with prucalopride 2 and 4 mg/day resulted in a significantly higher proportion of patients reaching the primary efficacy endpoint of an average of ≥3 spontaneous complete bowel movements than with placebo. There was also significantly improved bowel habit and associated symptoms, patient satisfaction with bowel habit and treatment, and HR-QOL in patients with severe chronic constipation, including those who did not experience adequate relief with prior therapies (>80% of the trial participants).
The improvement in patient satisfaction with bowel habit and treatment was maintained during treatment for up to 24 months; prucalopride therapy was generally well tolerated. Small clinical trials suggested that prucalopride administration results in 5-HT4 receptor agonism-associated memory enhancement in healthy participants, improving their ability to recall and increasing neural activation in the hippocampus and functionally related areas. Contraindications Prucalopride is contraindicated where there is hypersensitivity to the active substance or to any of the excipients, renal impairment requiring dialysis, intestinal perforation or obstruction due to structural or functional disorder of the gut wall, obstructive ileus, severe inflammatory conditions of the intestinal tract such as Crohn's disease and ulcerative colitis, and toxic megacolon/megarectum. Side effects Prucalopride has been given orally to ~2700 patients with chronic constipation in controlled clinical trials. The most frequently reported side effects are headache and gastrointestinal symptoms (abdominal pain, nausea or diarrhea). Such reactions occur predominantly at the start of therapy and usually disappear within a few days with continued treatment. Mechanism of action Prucalopride, a first-in-class dihydro-benzofuran-carboxamide, is a selective, high-affinity serotonin (5-HT4) receptor agonist with enterokinetic activities. Prucalopride alters colonic motility patterns via serotonin 5-HT4 receptor stimulation: it stimulates colonic mass movements, which provide the main propulsive force for defecation. The observed effects are exerted via highly selective action on 5-HT4 receptors: prucalopride has >150-fold higher affinity for 5-HT4 receptors than for other receptors.
Prucalopride differs from other 5-HT4 agonists such as tegaserod and cisapride, which at therapeutic concentrations also interact with other receptors (5-HT1B/D and the cardiac human ether-a-go-go K+ or hERG channel, respectively), and this may account for the adverse cardiovascular events that have resulted in the restricted availability of these drugs. Clinical trials evaluating the effect of prucalopride on QT interval and related adverse events have not demonstrated significant differences compared with placebo. Pharmacokinetics Prucalopride is rapidly absorbed (Cmax attained 2–3 hours after a single 2 mg oral dose) and is extensively distributed. Metabolism is not the major route of elimination. In vitro, human liver metabolism is very slow and only minor amounts of metabolites are found. A large fraction of the active substance is excreted unchanged (about 60% of the administered dose in urine and at least 6% in feces). Renal excretion of unchanged prucalopride involves both passive filtration and active secretion. Plasma clearance averages 317 ml/min, terminal half-life is 24–30 hours, and steady-state is reached within 3–4 days. On once-daily treatment with 2 mg prucalopride, steady-state plasma concentrations fluctuate between trough and peak values of 2.5 and 7 ng/ml, respectively. In vitro data indicate that prucalopride has a low interaction potential, and therapeutic concentrations of prucalopride are not expected to affect the CYP-mediated metabolism of co-medicated medicinal products. Approval In the European Economic Area, prucalopride was originally approved for the symptomatic treatment of chronic constipation in women in whom laxatives fail to provide adequate relief. Subsequently, it has been approved by the European Commission for use in adults – that is, including male patients – for the same indication. References External links "Prucalopride". Drug Information Portal. U.S. National Library of Medicine. "Prucalopride succinate".
Drug Information Portal. U.S. National Library of Medicine.
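The pharmacokinetic figures above (a terminal half-life of 24–30 hours, with steady state reached within 3–4 days of once-daily dosing) are mutually consistent under simple first-order elimination. The sketch below is an illustration under a hypothetical one-compartment assumption, not a model taken from the source:

```python
import math

def fraction_of_steady_state(half_life_h: float, n_doses: int, tau_h: float = 24.0) -> float:
    """Fraction of the steady-state trough concentration reached after
    n_doses once-daily doses, assuming simple first-order elimination."""
    k = math.log(2) / half_life_h  # elimination rate constant (per hour)
    return 1.0 - math.exp(-n_doses * k * tau_h)

# Using the lower end of the reported 24-30 h half-life range:
for day in (1, 2, 3, 4):
    print(day, round(fraction_of_steady_state(24.0, day), 3))
```

With the 24-hour half-life this gives 87.5% of steady state after 3 days and 93.8% after 4 days, matching the quoted 3–4 day figure.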
You function as a medical informant. Please provide in-depth yet accessible descriptions of medical terms, suitable for a broad audience.
Could you please explain the term 'Toxic encephalopathy' in simple language?
Toxic encephalopathy is a neurologic disorder caused by exposure to neurotoxic organic solvents such as toluene, following exposure to heavy metals such as manganese, as a side effect of melarsoprol treatment for African trypanosomiasis, adverse effects of prescription drugs, or exposure to extreme concentrations of any natural toxin such as cyanotoxins found in shellfish or freshwater cyanobacteria crusts. Toxic encephalopathy can occur following acute or chronic exposure to neurotoxicants, which includes all natural toxins. Exposure to toxic substances can lead to a variety of symptoms, characterized by an altered mental status, memory loss, and visual problems. Toxic encephalopathy can be caused by various chemicals, some of which are commonly used in everyday life, or cyanotoxins which are bio-accumulated from harmful algal blooms (HABs) which have settled on the benthic layer of a waterbody. Toxic encephalopathy can permanently damage the brain, and treatment is currently mainly symptomatic. Signs and symptoms "Encephalopathy" is a general term describing brain malfunctions and "toxic" asserts that the malfunction is caused by toxins on the brain. The most prominent characteristic of toxic encephalopathy is an altered mental status. Acute intoxication is a reversible symptom of exposure to many synthetic chemical neurotoxicants. Acute intoxication symptoms include lightheadedness, dizziness, headache and nausea, and regular cumulative exposure to these neurotoxicants over a number of years puts the individual at high risk for developing toxic encephalopathy. Chronic exposure to low levels of neurotoxic chemicals can also cause reversible changes in mood and affect which resolve with cessation of exposure. Acute and chronic toxic encephalopathy, on the other hand, are persistent changes in neurological function that typically occur with exposure to higher concentrations and longer durations, respectively.
The symptoms of acute and chronic toxic encephalopathy do not resolve with cessation of exposure and can include memory loss, dementia, small personality changes/increased irritability, insidious onset of concentration difficulties, headache, lightheadedness, ataxia, involuntary movements (parkinsonism), fatigue, seizures, arm strength problems, and depression. A paper by Feldman and colleagues described neurobehavioral effects in a 57-year-old house painter with regular exposure to large amounts of solvents. Magnetic resonance imaging (MRI) analyses have also demonstrated increased rates of dopamine synthesis in the putamen, reduced anterior and total corpus callosum volume, demyelination in the parietal white matter, basal ganglia, and thalamus, as well as atypical activation of frontal areas of the brain due to neural compensation. The regions of interest on SPECT brain imaging include a majority of all cortical regions, and the globus pallidus in carbon monoxide poisoning. Based on the medical literature on SPECT brain imaging, the signature pattern for toxic encephalopathy is a nonspecific patchy, diffuse pattern on the cortex, sometimes extending to subcortical regions if exposure was severe. A thorough and standard diagnostic process is paramount with toxic encephalopathy, including a careful occupational, medication, and medical history, standardized imaging, and neuropsychological testing. Causes Chemicals that can instigate toxic encephalopathy, such as lead, are sometimes found in everyday products such as prescription drugs, cleaning products, building materials, pesticides, air fresheners, and even perfumes. These harmful chemicals can be inhaled (in the case of air fresheners) or applied (in the case of perfumes). The substances diffuse into the brain rapidly, as they are lipophilic and readily transported across the blood–brain barrier.
This is a result of increased membrane solubility and local blood flow, with central nervous system (CNS) solvent uptake being further increased with high levels of physical activity. When they are not detoxified immediately, the symptoms of toxic encephalopathy begin to emerge. However, in chronic situations, these effects may not become severe enough to be noticed until much later. Increased exposure time and increased concentration of the chemicals will worsen the effects of toxic encephalopathy, due to the associated structural CNS damage and direct functional impairment consequences. Subacute toxic encephalopathies are challenging to identify due to their often insidious tempo of evolution, nonspecific manifestations, relative infrequency as individual entities, and frequent lack of specific diagnostic testing. Yet they are crucial to recognize: in aggregate, subacute toxic encephalopathies are a common problem that can lead to severe, irreversible harm if not diagnosed and treated efficiently. This article reviews the clinically relevant aspects of some of the more important subacute toxic encephalopathy syndromes caused by inorganic toxins, carbon monoxide (CO), antibiotics, antineoplastic agents, and psychiatric medications. All therapeutic interventions are double-edged swords with benefits and adverse effects, and pharmacotherapy is not an exception. Shortly after the introduction of conventional antipsychotic drugs into clinical practice, relatively rare but serious complications with hyperthermia, muscle rigidity, autonomic instability, and disturbed mental status were recognized to develop in some patients treated with antipsychotics. This type of encephalopathy induced by the use of antipsychotics was referred to as neuroleptic malignant syndrome (NMS), and almost all physicians prescribing antipsychotics are nowadays aware of this adverse phenomenon.
Another well-known type of encephalopathy associated with psychotropic drug therapy is serotonin toxicity (ST) or serotonin syndrome (SS), which is characterized by autonomic and neuromuscular symptoms and altered mental status. In contrast with the idiosyncratic nature of NMS, ST is a spectrum pathophysiological state assumed to derive from excess serotonergic neural transmission caused by serotonin-related psychotropic agents. In the past two decades, pharmacotherapy with psychotropic drugs for patients with mental illness has changed dramatically, and classical prototypal antipsychotics and antidepressants have been replaced with atypical antipsychotics and selective serotonin reuptake inhibitors (SSRIs), respectively. These newly developed psychotropic drugs are generally safer and more tolerable than older drugs. However, atypical antipsychotics are not free of the risk of development of NMS, and the explosive prevalence of SSRIs prescribed not only for depression but also for a number of psychiatric diagnoses such as anxiety, eating, impulse-control, and personality disorders may increase the incidence of ST. Therefore, these two pathological states still remain as major adverse effects of psychotropic drugs involving altered functioning of the central nervous system (CNS), to which all clinicians prescribing psychoactive drugs should pay attention. The popularity of SSRIs also increased the case reports of patients with discontinuation syndrome, which sometimes includes CNS symptoms like anxiety and irritability. In this chapter, the author provides a comprehensive overview of the above-mentioned adverse effects affecting CNS function associated with psychotropic pharmacotherapy.
In addition, several other pathological conditions potentially causing encephalopathic symptoms in psychiatric patients treated with psychotropic drugs, e.g., hyponatremia, valproate-induced hyperammonemia, transient splenial lesion of the corpus callosum, and so on, are also described. Fume events on aircraft have been linked to cases of toxic encephalopathy, for example in the case of JetBlue Captain Andrew Myers, who as a result of exposure needed a cane to walk, experienced tremors, and struggled to speak. In the following year, the Federal Aviation Administration revoked Myers' medical certificate; the workers' compensation board agreed that Myers experienced brain damage as a result of the event. Diagnosis Rapid diagnosis is important to attempt to prevent further damage to the brain and further neurologic deficits. It is a diagnosis of exclusion, so a full work-up for other possible etiologies (hepatic, uremic, infectious, oncologic) should be performed. In addition to the neurological examination, diagnostic testing could include MRI, PET or SPECT brain imaging, EEG, QEEG, and, most importantly, neuropsychological testing. Screening for heavy metals, as well as other toxins, should be done immediately as those are some of the most common causes and the patient can then remove themselves from the dangerous environment. In addition, a complete blood count (CBC) and a comprehensive metabolic panel (CMP) should be done. Treatment Treatment is mainly for the symptoms that toxic encephalopathy causes, and varies depending on how severe the case is. Diet changes and nutritional supplements may help some patients. To reduce or halt seizures, anticonvulsants may be prescribed.
Dialysis or organ replacement surgery may be needed in some severe cases. Lifelong abstinence from alcohol, a well-established neurotoxin and cause of the condition, is encouraged to prevent further loss of neurocognitive function. Management of affected individuals consists of immediate removal from exposure to the toxic substance(s), treatment of the common clinical manifestation of depression if present, and counselling for the provision of life strategies to help cope with the potentially debilitating condition. Prognosis Toxic encephalopathy is often irreversible. If the source of the problem is treated by removing the toxic chemical from the system, further damage can be prevented, but prolonged exposure to toxic chemicals can quickly destroy the brain. Long-term studies have demonstrated residual cognitive impairment (primarily attention and information-processing impairment resulting in dysfunction in working memory) up to 10 years following cessation of exposure. Severe cases of toxic encephalopathy can be life-threatening. Research Research is being done by organizations such as NINDS (National Institute of Neurological Disorders and Stroke) on what substances can cause encephalopathy, why they do this, and eventually how to protect, treat, and cure the brain from this condition. See also Brain damage Chronic solvent-induced encephalopathy Encephalopathy Neurology Notes References Lerner, David P.; Tadevosyan, Aleksey; Burns, Joseph D. (November 2020). "Toxin-Induced Subacute Encephalopathy". Neurologic Clinics. 38 (4): 799–824. doi:10.1016/j.ncl.2020.07.006. PMID 33040862. S2CID 222301922. Bradley, Walter (2004). Neurology in Clinical Practice (4 ed.). Taylor & Francis. ISBN 9997625897. Baker, Edward L. (May 2008). "Chronic toxic encephalopathy caused by occupational solvent exposure". Annals of Neurology. 63 (5): 545–547. doi:10.1002/ana.21394. PMID 18409195. S2CID 41546628. Feldman, R G; Ratner, M H; Ptak, T (May 1999).
"Chronic toxic encephalopathy in a painter exposed to mixed solvents". Environmental Health Perspectives. 107 (5): 417–422. doi:10.1289/ehp.99107417. PMC 1566426. PMID 10210698. Odagaki, Yuji. "Encephalopathy Associated with Psychotropic Drug Therapy". Department of Psychiatry, Faculty of Medicine, Saitama Medical University and Department of Psychiatry, Moro Hospital, Japan. External links NINDS Encephalopathy Information Page Encephalopathy Causes, Symptoms, Diagnosis and Treatment by MedicineNet
You are a medical knowledge base. Your task is to elucidate medical terminology, offering insights into their meanings, origins, and applications.
I need a basic explanation for the medical term 'Purple glove syndrome.'
Purple glove syndrome (PGS) is a poorly understood skin disease in which the extremities become swollen, discoloured and painful. PGS is potentially serious, and may require amputation. PGS is most common among elderly patients and those receiving multiple large intravenous doses of the epilepsy drug phenytoin. Compartment syndrome is a complication of PGS. Cause Purple glove syndrome is caused by the intravenous anticonvulsant phenytoin. This medication has many well-established neurological side effects; however, purple glove syndrome is a rare but very serious adverse effect that may lead to limb amputation. This may occur due to the administration of phenytoin with or without extravasation. The defining characteristic is a purplish to black discoloration of the extremity followed by peripheral edema and pain distal to the site of infusion. Onset is generally seen within the first few hours of administration. The true pathology of purple glove syndrome is not fully elucidated; however, it is believed to be due to the crystallization of phenytoin within the blood, which extravasates into the surrounding interstitium. Another mechanism may be the disruption of endothelial transcellular junctions followed by leaking of phenytoin into the surrounding soft tissues. Diagnosis Treatment Doctors recommend discontinuing the use of phenytoin. The application of heat can help to relieve pain. Oral phenytoin can also result in development of purple glove syndrome. References External links FDA: Potential Signals of Serious Risks/New Safety Information Identified by the Adverse Event Reporting System (AERS), January–March 2008
You are a medical educator. Your role is to provide detailed and clear explanations for medical terms, suitable for both professionals and laypersons.
The term 'Micronychia' keeps coming up in medical discussions. What does it stand for?
The term Micronychia may refer to Micronychia (plant), a genus of plants in the family Anacardiaceae Micronychia (fly), a genus of flies in the family Tachinidae abnormally small finger- and toenails
You act as a medical communicator. Your goal is to translate medical terms into comprehensive, yet straightforward language for all readers.
I've come across the term 'Desquamative gingivitis' in a medical context, but I'm not sure what it means. Can you clarify?
Desquamative gingivitis is an erythematous (red), desquamatous (shedding) and ulcerated appearance of the gums. It is a descriptive term and can be caused by several different disorders. Signs and symptoms Desquamative gingivitis involves lesions of the free and attached gingiva. Unlike plaque-induced inflammation of the gums (normal marginal gingivitis), desquamative gingivitis extends beyond the marginal gingiva, involving the full width of the gingiva and sometimes the alveolar mucosa. The term "full width gingivitis" usually refers to the oral lesions of orofacial granulomatosis, however. The color is another dissimilarity between typical marginal gingivitis and desquamative gingivitis; in the latter it is dusky red. Plasma cell gingivitis is another form of gingivitis which affects both the attached and free gingiva. Cause It is caused by various autoimmune diseases as well as allergies, including erosive lichen planus, mucous membrane pemphigoid, pemphigus vulgaris, erythema exsudativum multiforme, and lupus erythematosus. Diagnosis Differential diagnosis Desquamative gingivitis is a descriptive clinical term, not a diagnosis. Dermatologic conditions cause about 75% of cases of desquamative gingivitis, and over 95% of the dermatologic cases are accounted for by either oral lichen planus or cicatricial pemphigoid. The exact cause of desquamative gingivitis cannot be determined in about a third of cases.
Oral lichen planus Cicatricial pemphigoid or less commonly bullous pemphigoid Pemphigus vulgaris Linear immunoglobulin A disease Dermatitis herpetiformis Lupus erythematosus Chronic ulcerative stomatitis Chronic bacterial, fungal, and viral infections Reactions to medications, mouthwashes, and chewing gum Rare causes include: Crohn’s disease Sarcoidosis Leukemia Factitious (self-inflicted) lesions Squamous cell carcinoma (can be mistaken for desquamative gingivitis) Treatment Improving oral hygiene Minimising irritation of the lesions Specific therapies for the underlying disease (where available) Local or systemic immunosuppressive or dapsone therapy (notably corticosteroids) History This condition was first recognized and reported in 1894, but the term desquamative gingivitis was not coined until 1932. References == External links ==
You serve as a medical tutor. Your objective is to demystify medical terms, providing thorough explanations that cater to various levels of medical knowledge.
Could you please explain the term 'Epidural hematoma' in simple language?
Epidural hematoma is when bleeding occurs between the tough outer membrane covering the brain (dura mater) and the skull. Often there is loss of consciousness following a head injury, a brief regaining of consciousness, and then loss of consciousness again. Other symptoms may include headache, confusion, vomiting, and an inability to move parts of the body. Complications may include seizures. The cause is typically a head injury that results in a break of the temporal bone and bleeding from the middle meningeal artery. Occasionally it can occur as a result of a bleeding disorder or blood vessel malformation. Diagnosis is typically by a CT scan or MRI. When this condition occurs in the spine it is known as a spinal epidural hematoma. Treatment is generally by urgent surgery in the form of a craniotomy or burr hole. Without treatment, death typically results. The condition occurs in one to four percent of head injuries. Typically it occurs in young adults. Males are more often affected than females. Signs and symptoms Many people with epidural hematomas experience a lucid period immediately following the injury, with a delay before symptoms become evident. Because of this initial period of lucidity, it has been called "talk and die" syndrome. As blood accumulates, it starts to compress intracranial structures, which may impinge on the third cranial nerve, causing a fixed and dilated pupil on the side of the injury. The eye will be positioned down and out, due to the unopposed action of the fourth and sixth cranial nerves. Other symptoms include severe headache; weakness of the extremities on the opposite side from the lesion, due to compression of the crossed pyramidal pathways; and vision loss, also on the opposite side, due to compression of the posterior cerebral artery. In rare cases, small hematomas may be asymptomatic. If not treated promptly, epidural hematomas can cause tonsillar herniation, resulting in respiratory arrest. 
The trigeminal nerve may be involved late in the process as the pons is compressed, but this is not an important presentation, because the person may already be dead by the time it occurs. In the case of epidural hematoma in the posterior cranial fossa, tonsillar herniation causes Cushing's triad: hypertension, bradycardia, and irregular breathing. Causes The most common cause of intracranial epidural hematoma is head injury, although spontaneous hemorrhages have been known to occur. Epidural hematomas occur in about 10% of traumatic brain injuries, mostly due to car accidents, assaults, or falls. They are often caused by acceleration-deceleration trauma and transverse forces. Epidural hematoma commonly results from a blow to the side (temporal bone) of the head. The pterion region, which overlies the middle meningeal artery, is relatively weak and prone to injury. Only 20 to 30% of epidural hematomas occur outside the region of the temporal bone. The brain may be injured by prominences on the inside of the skull as it scrapes past them. Epidural hematoma is usually found on the same side of the brain that was impacted by the blow, but on very rare occasions it can be due to a contrecoup injury. A "heat hematoma" is an epidural hematoma caused by a severe thermal burn, which makes the dura mater contract and peel away from the skull, in turn causing exudation of blood from the venous sinuses. The hematoma can be seen on autopsy as brick red, or as radiolucent on CT scan, because of heat-induced coagulation of the hematoma. Pathophysiology The break of the temporal bone causes bleeding from the middle meningeal artery; epidural bleeding is therefore often rapid, because arterial blood is under high pressure. In 10% of cases, however, the bleeding comes from veins and can progress more slowly. 
A venous hematoma may be acute (occurring within a day of the injury and appearing as a swirling mass of blood without a clot), subacute (occurring in 2–4 days and appearing solid), or chronic (occurring in 7–20 days and appearing mixed or lucent). In adults, the temporal region accounts for 75% of cases. In children, however, they occur with similar frequency in the occipital, frontal, and posterior fossa regions. Epidural bleeds from arteries can grow until they reach their peak size 6–8 hours post-injury, spilling 25–75 cubic centimeters of blood into the intracranial space. As the hematoma expands, it strips the dura from the inside of the skull, causing an intense headache. It also increases intracranial pressure, causing the brain to shift, lose blood supply, be crushed against the skull, or herniate. Larger hematomas cause more damage. Epidural bleeds can quickly compress the brainstem, causing unconsciousness, abnormal posturing, and abnormal pupil responses to light. Diagnosis Diagnosis is typically by CT scan or MRI. MRI has greater sensitivity and should be used if there is a high suspicion of epidural hematoma and a negative CT scan. Differential diagnoses include a transient ischemic attack, intracranial mass, or brain abscess. Epidural hematomas usually appear convex in shape because their expansion stops at the skull's sutures, where the dura mater is tightly attached to the skull. Thus, they expand inward toward the brain rather than along the inside of the skull, as occurs in subdural hematomas. Most people also have a skull fracture. Epidural hematomas may occur in combination with subdural hematomas, or either may occur alone. CT scans reveal subdural or epidural hematomas in 20% of unconscious people. In the hallmark presentation of epidural hematoma, people may regain consciousness and appear completely normal during what is called a lucid interval, only to descend suddenly and rapidly into unconsciousness later. 
This lucid interval, which depends on the extent of the injury, is a key to diagnosing an epidural hematoma. Treatment Epidural hematoma is a surgical emergency. Delayed surgery can result in permanent brain damage or death. Without surgery, death usually follows, due to enlargement of the hematoma causing a brain herniation. As with other types of intracranial hematomas, the blood almost always must be removed surgically to reduce the pressure on the brain. The hematoma is evacuated through a burr hole or craniotomy. If transfer to a facility with neurosurgery is unavailable, trephination (drilling a hole into the skull) may be performed in the emergency department. Large hematomas and blood clots may require an open craniotomy. Medications may be given after surgery. They may include antiseizure medications and hyperosmotic agents to reduce brain swelling and intracranial pressure. It is rare for an epidural hematoma not to require surgery. If the volume of the epidural hematoma is less than 30 mL, the clot diameter is less than 15 mm, the Glasgow Coma Score is above 8, and there are no visible neurological symptoms, then it may be possible to treat it conservatively. A CT scan should be performed, and watchful waiting should be done, as the hematoma may suddenly expand. Prognosis The prognosis is better if there was a lucid interval than if the person was comatose from the time of injury. Arterial epidural hematomas usually progress rapidly. However, venous epidural hematomas, caused by a dural sinus tear, are slower. Outcomes are worse if there is more than 50 mL of blood in the hematoma before surgery. Age, pupil abnormalities, and Glasgow Coma Scale score on arrival to the emergency department also influence the prognosis. In contrast to most forms of traumatic brain injury, people with epidural hematoma and a Glasgow Coma Score of 15 (the highest score, indicating the best prognosis) usually have a good outcome if they receive surgery quickly. 
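The conservative-management thresholds listed above (hematoma volume, clot diameter, Glasgow Coma Score, neurological symptoms) can be read as a simple all-or-nothing decision rule. The sketch below is only an illustrative reading of those criteria from the text, not clinical software; the function name is invented.

```python
# Illustrative sketch of the conservative-management criteria described
# above for epidural hematoma. Thresholds are taken from the text; this
# is not medical software and must not be used for care decisions.
def may_treat_conservatively(volume_ml, clot_diameter_mm, gcs, neuro_symptoms):
    """Return True only if every criterion quoted in the text is met."""
    return (
        volume_ml < 30             # hematoma volume under 30 mL
        and clot_diameter_mm < 15  # clot diameter under 15 mm
        and gcs > 8                # Glasgow Coma Score above 8
        and not neuro_symptoms     # no visible neurological symptoms
    )

print(may_treat_conservatively(20, 10, 14, False))  # True: all criteria met
print(may_treat_conservatively(40, 10, 14, False))  # False: hematoma too large
```

Note that the rule is conjunctive: failing any single criterion rules out conservative treatment, which matches the text's requirement that a CT scan and watchful waiting accompany it.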
Epidemiology About 2 percent of head injuries and 15 percent of fatal head injuries involve an epidural hematoma. The condition is more common in teenagers and young adults than in older people, because the dura mater sticks more to the skull as a person ages, reducing the probability of a hematoma forming. Males are affected more than females. References External links MedPix: Epidural hematoma Epidural hematoma | Radiology Case | Radiopaedia.org
You function as a medical informant. Please provide in-depth yet accessible descriptions of medical terms, suitable for a broad audience.
I need a basic explanation for the medical term 'Human height.'
Human height or stature is the distance from the bottom of the feet to the top of the head in a human body, standing erect. It is measured using a stadiometer, in centimetres when using the metric system, or feet and inches when using United States customary units or the imperial system. In the early phase of anthropometric research, the use of height as a measure of nutritional status was often studied in terms of genetic differences. Height is also important because it is closely correlated with other health components, such as life expectancy. Studies show that there is a correlation between small stature and a longer life expectancy. Individuals of small stature are also more likely to have lower blood pressure and are less likely to acquire cancer. The University of Hawaii has found that the "longevity gene" FOXO3 that reduces the effects of aging is more commonly found in individuals of small body size. Short stature decreases the risk of venous insufficiency. When populations share genetic backgrounds and environmental factors, average height is frequently characteristic within the group. Exceptional height variation (around 20% deviation from average) within such a population is sometimes due to gigantism or dwarfism, which are medical conditions caused by specific genes or endocrine abnormalities. The development of human height can serve as an indicator of two key welfare components, namely nutritional quality and health. In regions of poverty or warfare, environmental factors like chronic malnutrition during childhood or adolescence may result in delayed growth and/or marked reductions in adult stature even without the presence of any of these medical conditions. A study of 20th-century British natality trends indicated that while tall men tended to reproduce more than short men, women of below-average height had more children than taller women. Determinants of growth and height The study of height is known as auxology. 
Growth has long been recognized as a measure of the health of individuals, hence part of the reasoning for the use of growth charts. For individuals, as indicators of health problems, growth trends are tracked for significant deviations, and growth is also monitored for significant deficiency from genetic expectations. Genetics is a major factor in determining the height of individuals, though it is far less influential regarding differences among populations. Average height is relevant to the measurement of the health and wellness (standard of living and quality of life) of populations. A significant reason cited for the trend of increasing height in parts of Europe is egalitarian populations in which proper medical care and adequate nutrition are relatively equally distributed. Where nutritional resources are unevenly distributed, individuals with better access to those resources are more likely to grow taller, while groups with poorer access to nutritious food show less growth in height. Average height in a nation is correlated with protein quality. Nations that consume more protein in the form of meat, dairy, eggs, and fish tend to be taller, while those that obtain more protein from cereals tend to be shorter. Therefore, populations with high cattle per capita and high consumption of dairy live longer and are taller. Historically, this can be seen in the cases of the United States, Argentina, New Zealand and Australia in the beginning of the 19th century. Moreover, when the production and consumption of milk and beef are taken into consideration, it can be seen why the Germanic people who lived outside of the "imperium Romanum" were taller than those who lived at the heart of the Empire. Changes in diet (nutrition) and a general rise in quality of health care and standard of living are the cited factors in the Asian populations. 
Malnutrition, including chronic undernutrition and acute malnutrition, is known to have caused stunted growth in various populations. This has been seen in North Korea, parts of Africa, parts of Europe at certain points in history, and other populations. Developing countries such as Guatemala have rates of stunting in children under 5 as high as 82.2% in Totonicapán, and 49.8% nationwide. Height measurements are by nature subject to statistical sampling errors even for a single individual. In a clinical situation, height measurements are seldom taken more often than once per office visit, which may mean sampling taking place a week to several months apart. The smooth 50th percentile male and female growth curves illustrated above are aggregate values from thousands of individuals sampled at ages from birth to age 20. In reality, a single individual's growth curve shows large upward and downward spikes, partly due to actual differences in growth velocity, and partly due to small measurement errors. For example, a typical measurement error of plus or minus 0.5 cm (0.20 in) may completely nullify 0.5 cm of actual growth, resulting in anything from a "negative" 0.5 cm of growth (due to overestimation at the previous visit combined with underestimation at the latter) up to 1.5 cm (0.6 in) of growth (the first visit underestimating and the second visit overestimating) in the same elapsed period between measurements. Note there is a discontinuity in the growth curves at age 2, which reflects the difference between recumbent length (with the child on his or her back), used in measuring infants and toddlers, and standing height, typically measured from age 2 onwards. Height, like other phenotypic traits, is determined by a combination of genetics and environmental factors. 
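The measurement-error arithmetic described above (a ±0.5 cm error at each of two visits turning 0.5 cm of true growth into anything from −0.5 cm to +1.5 cm of apparent growth) can be sketched directly; the function name is illustrative only.

```python
# Sketch of the measurement-error arithmetic described above: apparent
# growth between two visits is the true growth, minus the error made at
# the first visit, plus the error made at the second visit.
def apparent_growth(true_growth_cm, err_first_cm, err_second_cm):
    """Measured growth = true growth - first-visit error + second-visit error."""
    return true_growth_cm - err_first_cm + err_second_cm

# Overestimate at the first visit, underestimate at the second:
print(apparent_growth(0.5, +0.5, -0.5))  # -0.5 ("negative" growth)
# Underestimate at the first visit, overestimate at the second:
print(apparent_growth(0.5, -0.5, +0.5))  # 1.5
```

The two printed extremes reproduce the −0.5 cm and +1.5 cm bounds quoted in the text for 0.5 cm of actual growth.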
A child's height based on parental heights is subject to regression toward the mean, therefore extremely tall or short parents will likely have correspondingly taller or shorter offspring, but their offspring will also likely be closer to average height than the parents themselves. Genetic potential and several hormones, in the absence of illness, are basic determinants of height. Other factors include the genetic response to external factors such as diet, exercise, environment, and life circumstances. Humans grow fastest (other than in the womb) as infants and toddlers, rapidly declining from a maximum at birth to roughly age 2, tapering to a slowly declining rate, and then, during the pubertal growth spurt (with an average girl starting her puberty and pubertal growth spurt at 10 years and an average boy starting his at 12 years), a rapid rise to a second maximum (at around 11–12 years for an average female, and 13–14 years for an average male), followed by a steady decline to zero. The average female growth speed trails off to zero at about 15 or 16 years, whereas the average male curve continues for approximately 3 more years, going to zero at about 18–19. These are also critical periods where stressors such as malnutrition (or even severe child neglect) have the greatest effect. Moreover, the health of a mother throughout her life, especially during her own critical period and pregnancy, has a role. A healthier child and adult develops a body that is better able to provide optimal prenatal conditions. The pregnant mother's health is essential for herself but also for the fetus, as gestation is itself a critical period for an embryo/fetus, though some problems affecting height during this period are resolved by catch-up growth, assuming childhood conditions are good. Thus, there is a cumulative generation effect such that nutrition and health over generations influence the height of descendants to varying degrees. 
The age of the mother also has some influence on her child's height. Studies in modern times have observed a gradual increase in height with maternal age, though these early studies suggest that trend is due to various socio-economic situations that select certain demographics as being more likely to have a first birth early in the mother's life. These same studies show that children born to a young mother are more likely to have below-average educational and behavioural development, again suggesting an ultimate cause of resources and family status rather than a purely biological explanation. It has been observed that first-born males are shorter than later-born males. However, more recently the reverse observation was made. The study authors suggest that the cause may be socio-economic in nature. Nature versus nurture The precise relationship between genetics and environment is complex and uncertain. Variation in human height is 60–80% heritable, according to several twin studies, and height has been considered polygenic since the Mendelian-biometrician debate a hundred years ago. A genome-wide association (GWA) study of more than 180,000 individuals has identified hundreds of genetic variants in at least 180 loci associated with adult human height. The number of individuals has since been expanded to 253,288 individuals and the number of genetic variants identified is 697 in 423 genetic loci. A separate study of body proportion using sitting-height ratio reports that these 697 variants can be partitioned into 3 specific classes: (1) variants that primarily determine leg length, (2) variants that primarily determine spine and head length, and (3) variants that affect overall body size. This gives insights into the biological mechanisms underlying how these 697 genetic variants affect overall height. These loci do not only determine height, but other features or characteristics. 
As an example, 4 of the 7 loci identified for intracranial volume had previously been discovered for human height. The effect of environment on height is illustrated by studies performed by anthropologist Barry Bogin and coworkers of Guatemalan Maya children living in the United States. In the early 1970s, when Bogin first visited Guatemala, he observed that Maya men averaged 157.5 centimetres (5 ft 2 in) in height and the women averaged 142.2 centimetres (4 ft 8 in). Bogin took another series of measurements after the Guatemalan Civil War, during which up to a million Guatemalans fled to the United States. He discovered that Maya refugees, who ranged from six to twelve years old, were significantly taller than their Guatemalan counterparts. By 2000, the American Maya were 10.24 cm (4.03 in) taller than the Guatemalan Maya of the same age, largely due to better nutrition and health care. Bogin also noted that American Maya children had relatively longer legs, averaging 7.02 cm (2.76 in) longer than the Guatemalan Maya (a significantly lower sitting height ratio). The Nilotic peoples of Sudan such as the Shilluk and Dinka have been described as some of the tallest in the world. Dinka Ruweng males investigated by Roberts in 1953–54 were on average 181.3 centimetres (5 ft 11+1⁄2 in) tall, and Shilluk males averaged 182.6 centimetres (6 ft 0 in). The Nilotic people are characterized as having long legs, narrow bodies and short trunks, an adaptation to hot weather. However, male Dinka and Shilluk refugees measured in 1995 in Southwestern Ethiopia were on average only 176.4 cm (5 ft 9+1⁄2 in) and 172.6 cm (5 ft 8 in) tall, respectively. As the study points out, Nilotic people "may attain greater height if privileged with favourable environmental conditions during early childhood and adolescence, allowing full expression of the genetic material." 
Before fleeing, these refugees had been subject to privation as a consequence of the succession of civil wars in their country from 1955 to the present. The tallest living married couple are ex-basketball players Yao Ming and Ye Li (both of China), who measure 228.6 cm (7 ft 6 in) and 190.5 cm (6 ft 3 in) respectively, giving a combined height of 419.1 cm (13 ft 9 in). They married in Shanghai, China, on 6 August 2007. In Tibet, the Khampas are known for their great height. Khampa males are on average 180 cm (5 ft 11 in). Role of an individual's height Height and health Studies show that there is a correlation between small stature and a longer life expectancy. Individuals of small stature are also more likely to have lower blood pressure and are less likely to acquire cancer. The University of Hawaii has found that the "longevity gene" FOXO3 that reduces the effects of aging is more commonly found in individuals of a small body size. Short stature decreases the risk of venous insufficiency. Certain studies have shown that height is a factor in overall health, while some suggest tallness is associated with better cardiovascular health and shortness with longevity. Cancer risk has also been found to grow with height. Moreover, scientists have also observed a protective effect of height on risk for Alzheimer's disease, although this could be a result of the genetic overlap between height and intracranial volume, and there are also genetic variants influencing height that could affect biological mechanisms involved in Alzheimer's disease etiology, such as insulin-like growth factor 1 (IGF-1). Nonetheless, modern westernized interpretations of the relationship between height and health fail to account for the observed height variations worldwide. Cavalli-Sforza and Cavalli-Sforza note that variations in height worldwide can be partly attributed to evolutionary pressures resulting from differing environments. 
These evolutionary pressures result in height-related health implications. While tallness is an adaptive benefit in colder climates such as those found in Europe, shortness helps dissipate body heat in warmer climatic regions. Consequently, the relationships between health and height cannot be easily generalized, since tallness and shortness can both provide health benefits in different environmental settings. Furthermore, being excessively tall can cause various medical problems, including cardiovascular problems, because of the increased load on the heart to supply the body with blood, and problems resulting from the increased time it takes the brain to communicate with the extremities. For example, Robert Wadlow, the tallest man known to verifiable history, developed trouble walking as his height increased throughout his life. In many of the pictures of the latter portion of his life, Wadlow can be seen gripping something for support. Late in his life, although he died at age 22, he had to wear braces on his legs and walk with a cane; and he died after developing an infection in his legs because he was unable to feel the irritation and cutting caused by his leg braces. Sources are in disagreement about the overall relationship between height and longevity. Samaras and Elrick, in the Western Journal of Medicine, demonstrate an inverse correlation between height and longevity in several mammals, including humans. Women whose height is under 150 cm (4 ft 11 in) may have a small pelvis, resulting in such complications during childbirth as shoulder dystocia. A study done in Sweden in 2005 has shown that there is a strong inverse correlation between height and suicide among Swedish men. A large body of human and animal evidence indicates that shorter, smaller bodies age more slowly, and have fewer chronic diseases and greater longevity. For example, a study found eight areas of support for the "smaller lives longer" thesis. 
These areas of evidence include studies involving longevity, life expectancy, centenarians, male vs. female longevity differences, mortality advantages of shorter people, survival findings, smaller body size due to calorie restriction, and within-species body size differences. They all support the conclusion that smaller individuals live longer in healthy environments and with good nutrition. However, the difference in longevity is modest. Several human studies have found a loss of 0.5 years/centimetre of increased height (1.2 yr/inch). But these findings do not mean that all tall people die young. Many live to advanced ages and some become centenarians. In medicine, height is measured to monitor child development; it is a better indicator of growth than weight in the long term. For older people, excessive height loss is a symptom of osteoporosis. Height is also used to compute indicators like body surface area or body mass index. Height and occupational success There is a large body of research in psychology, economics, and human biology that has assessed the relationship between several seemingly innocuous physical features (e.g., body height) and occupational success. The correlation between height and success was explored decades ago. Shorter people are considered to have an advantage in certain sports (e.g., gymnastics, race car driving, etc.), whereas in many other sports taller people have a major advantage. In most occupational fields, body height is not relevant to how well people are able to perform; nonetheless, several studies found that success was positively correlated with body height, although there may be other factors such as gender or socioeconomic status that are correlated with height and may account for the difference in success. A demonstration of the height-success association can be found in the realm of politics. In the United States presidential elections, the taller candidate won 22 out of 25 times in the 20th century. 
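The height-derived indicators mentioned above can be computed from standard formulas: body mass index is weight divided by height squared, and body surface area has several published approximations (the sketch below uses the Mosteller formula as one common choice; the function names are illustrative).

```python
import math

# Sketch of the height-based clinical indicators mentioned above:
# body mass index (kg / m^2) and body surface area via the Mosteller
# formula, sqrt(height_cm * weight_kg / 3600), one common approximation.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def bsa_mosteller(height_cm, weight_kg):
    return math.sqrt(height_cm * weight_kg / 3600)

print(round(bmi(70, 1.75), 1))           # 22.9 kg/m^2
print(round(bsa_mosteller(175, 70), 2))  # 1.84 m^2
```

Note the unit difference: BMI conventionally takes height in metres, while the Mosteller formula is usually stated with height in centimetres.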
Nevertheless, Ignatius Loyola, founder of the Jesuits, was 150 cm (4 ft 11 in), and several prominent world leaders of the 20th century, such as Vladimir Lenin, Benito Mussolini, Nicolae Ceaușescu and Joseph Stalin, were of below-average height. These examples, however, were all before modern forms of multi-media, i.e., television, which may further height discrimination in modern society. Further, growing evidence suggests that height may be a proxy for confidence, which is likewise strongly correlated with occupational success. Sports History of human height In the 150 years since the mid-nineteenth century, the average human height in industrialised countries has increased by up to 10 centimetres (3.9 in). However, these increases appear to have largely levelled off. Before the mid-nineteenth century, there were cycles in height, with periods of increase and decrease; however, apart from the decline associated with the transition to agriculture, examinations of skeletons show no significant differences in height from the neolithic revolution through the early 1800s. In general, there were no significant differences in regional height levels throughout the nineteenth century. The only exceptions to this rather uniform height distribution were people in the Anglo-Saxon settlement regions, who were taller than the average, and people from Southeast Asia, with below-average heights. However, at the end of the nineteenth century and in the middle of the first globalization period, heights between rich and poor countries began to diverge. These differences did not disappear in the deglobalization period of the two World Wars. Baten and Blum (2014) find that in the nineteenth century, important determinants of height were the local availability of cattle, meat and milk as well as the local disease environment. In the late twentieth century, however, technologies and trade became more important, decreasing the impact of local availability of agricultural products. 
In the eighteenth and nineteenth centuries, people of European descent in North America were far taller than those in Europe and were the tallest in the world. The original indigenous population of Plains Native Americans was also among the tallest populations of the world at the time. Some studies also suggest a correlation between height and real wages; moreover, the correlation was stronger in less developed countries. The difference in height between children from different social classes was already observed by age two. In the late nineteenth century, the Netherlands was a land renowned for its short population, but today Dutch people are among the world's tallest, with young men averaging 183.8 cm (6 ft 0.4 in) tall. According to a study by economist John Komlos and Francesco Cinnirella, in the first half of the eighteenth century, the average height of an English male was 165 cm (5 ft 5 in), and the average height of an Irish male was 168 cm (5 ft 6 in). The estimated mean height of English, German, and Scottish soldiers was 163.6 cm (5 ft 4+1⁄2 in) – 165.9 cm (5 ft 5+1⁄2 in) for the period as a whole, while that of the Irish was 167.9 cm (5 ft 6 in). The average height of male slaves and convicts in North America was 171 cm (5 ft 7+1⁄2 in). The average height of Americans and Europeans decreased during periods of rapid industrialization, possibly due to rapid population growth and broad decreases in economic status. This has become known as the early-industrial growth puzzle or, in the U.S. context, the Antebellum Puzzle. 
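The paired metric/imperial figures quoted throughout this section follow the standard conversion of 1 inch = 2.54 cm, with 12 inches per foot. A minimal sketch of that conversion (the function name is illustrative):

```python
# Sketch of the cm -> feet-and-inches conversion used for the paired
# figures in this section (1 inch = 2.54 cm, 12 inches per foot).
def cm_to_ft_in(cm):
    total_inches = cm / 2.54
    feet = int(total_inches // 12)
    inches = total_inches - feet * 12
    return feet, round(inches, 1)

print(cm_to_ft_in(183.8))  # (6, 0.4) -- young Dutch men, as quoted above
print(cm_to_ft_in(165.0))  # (5, 5.0) -- 18th-century English average
```

The sources round the remaining inches differently in places (sometimes to the nearest half inch), which accounts for small apparent mismatches between cm and ft/in figures.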
In England during the early nineteenth century, the difference between the average height of English upper-class youth (students of Sandhurst Military Academy) and English working-class youth (Marine Society boys) reached 22 cm (8+1⁄2 in), the highest that has been observed. Data derived from burials show that before 1850, the mean stature of males and females in Leiden, The Netherlands was respectively 167.7 cm (5 ft 6 in) and 156.7 cm (5 ft 1+1⁄2 in). The average height of 19-year-old Dutch orphans in 1865 was 160 cm (5 ft 3 in). According to a study by J.W. Drukker and Vincent Tassenaar, the average height of a Dutch person decreased from 1830 to 1857, even while Dutch real GNP per capita was growing at an average rate of more than 0.5% per year. The decline was worst in urban areas: in 1847, the urban height penalty was 2.5 cm (0.98 in). Urban mortality was also much higher than in rural regions. In 1829, the average urban and rural Dutchman was 164 cm (5 ft 4+1⁄2 in). By 1856, the average rural Dutchman was 162 cm (5 ft 4 in) and the average urban Dutchman was 158.5 cm (5 ft 2+1⁄2 in). A 2004 report citing a 2003 UNICEF study on the effects of malnutrition in North Korea, due to "successive famines," found young adult males to be significantly shorter. In contrast, South Koreans, "feasting on an increasingly Western-influenced diet," without famine, were growing taller. The height difference is minimal for Koreans over forty years old, who grew up at a time when economic conditions in the North were roughly comparable to those in the South, while height disparities are most acute for Koreans who grew up in the mid-1990s – a demographic in which South Koreans are about 12 cm (4.7 in) taller than their North Korean counterparts – as this was a period during which the North was affected by a harsh famine in which hundreds of thousands, if not millions, died of hunger. 
A study by South Korean anthropologists of North Korean children who had defected to China found that eighteen-year-old males were 13 centimetres (5 in) shorter than South Koreans their age due to malnutrition. The tallest living man is Sultan Kösen of Turkey, at 251 cm (8 ft 3 in). The tallest man in modern history was Robert Pershing Wadlow (1918–1940), from Illinois, United States, who was 272 cm (8 ft 11 in) at the time of his death. The tallest woman in medical history was Trijntje Keever of Edam, Netherlands, who stood 254 cm (8 ft 4 in) when she died at the age of seventeen. The shortest adult human on record was Chandra Bahadur Dangi of Nepal at 54.6 cm (1 ft 9+1⁄2 in). An anecdotal article titled "Ancient American Giants" from the 14 August 1880 edition of Scientific American notes a case from Brushcreek Township, Ohio, where Dr. J. F. Everhart supervised a team that discovered ancient clay coffins within a mound, which were reported to contain skeletons of the following lengths: an 8 ft 0 in (2.44 m) woman with a 3 ft 6 in (1.07 m) child, a second coffin with a 9 ft 0 in (2.74 m) man and an 8 ft 0 in (2.44 m) woman, a third coffin with a 9 ft 4 in (2.84 m) man and an 8 ft 0 in (2.44 m) woman, and seven other independent skeletons measuring between 8 ft 0 in (2.44 m) and 10 ft 0 in (3.05 m). An image and stone tablet were found with the giants. Adult height between populations often differs significantly. For example, the average height of women from the Czech Republic is greater than that of men from Malawi. This may be caused by genetic differences, childhood lifestyle differences (nutrition, sleep patterns, physical labor), or both. Depending on sex, genetic and environmental factors, shrinkage of stature may begin in middle age in some individuals but tends to be universal in the extremely aged. 
This decrease in height is due to such factors as decreased height of intervertebral discs because of desiccation, atrophy of soft tissues, and postural changes secondary to degenerative disease. Working on data from Indonesia, the study by Baten, Stegl and van der Eng suggests a positive relationship between economic development and average height. In Indonesia, human height has decreased coincidentally with natural or political shocks. Average height around the world As with any statistical data, the accuracy of such data may be questionable for various reasons: Some studies may allow subjects to self-report values. Generally speaking, self-reported height tends to be taller than measured height, although the degree of overestimation depends on the reporting subject's height, age, gender and region. Test subjects may have been invited instead of chosen randomly, resulting in sampling bias. Some countries may have significant height gaps between different regions. For instance, one survey shows a 10.8 cm (4+1⁄2 in) gap between the tallest state and the shortest state in Germany. Under such circumstances, the mean height may not represent the total population unless sample subjects are taken from all regions and combined using a weighted average of the different regional groups. Different social groups can show different mean heights. According to a study in France, executives and professionals are 2.6 cm (1 in) taller, and university students are 2.55 cm (1 in) taller, than the national average. As this case shows, data taken from a particular social group may not represent the total population in some countries. A relatively small sample of the population may have been measured, which makes it uncertain whether this sample accurately represents the entire population.
The height of a person can also vary over the course of a day, due to factors such as a slight decrease after exercise done directly before measurement, or an increase after lying down for a significant period. For example, one study revealed a mean decrease of 1.54 centimetres (0.61 in) in the heights of 100 children between getting out of bed in the morning and between 4 and 5 p.m. that same day. Such factors may not have been controlled in some of the studies. Men from Bosnia and Herzegovina, the Netherlands, Croatia, Serbia and Montenegro have the tallest average height. Dinka people are sometimes noted for their height. Together with the Tutsi of Rwanda, they are believed to be the tallest people in Africa. Roberts and Bainbridge reported an average height of 182.6 cm (5 ft 11.9 in) in a sample of 52 Dinka Agaar and 181.3 cm (5 ft 11.4 in) in 227 Dinka Ruweng measured in 1953–1954. Other studies of comparative historical height data and nutrition place the Dinka as the tallest people in the world. Measurement Crown-rump length is the measurement of the length of human embryos and fetuses from the top of the head (crown) to the bottom of the buttocks (rump). It is typically determined from ultrasound imagery and can be used to estimate gestational age. Until two years old, recumbent length is used to measure infants. Length measures the same dimension as height, but height is measured standing up while length is measured lying down. In developed nations, the average total body length of a newborn is about 50 cm (20 in), although premature newborns may be much smaller. Standing height is used to measure children over two years old and adults who can stand without assistance. Measurement is done with a stadiometer. In general, standing height is about 0.7 cm (0.28 in) less than recumbent length. Surrogate height measurements are used when standing height and recumbent length are impractical.
For example, the Chumlea equations use knee height as an indicator of stature. Other techniques include arm span, sitting height, and ulna length. See also Anthropometry, the measurement of the human individual Body weight Economics and Human Biology (academic journal) History of anthropometry Human physical appearance Human variability Pygmy peoples Citations General bibliography Grandjean, Etienne (1987). Fitting the Task to the Man: An Ergonomic Approach. London, UK: Taylor & Francis. ISBN 978-0-85066-192-7. (for heights in U.S. and Japan) Eurostat Statistical Yearbook 2004. Luxembourg: Eurostat. 2014. ISBN 978-92-79-38906-1. (for heights in Germany) Netherlands Central Bureau for Statistics, 1996 (for average heights) Ogden, Cynthia L.; Fryar, Cheryl D.; Carroll, Margaret D. & Flegal, Katherine M. (27 October 2004). "Mean Body Weight, Height, and Body Mass Index, United States 1960–2002" (PDF). Advance Data from Vital and Health Statistics (347): 1–17. PMID 15544194. "Health Survey for England – trend data". Department of Health and Social Care. Archived from the original on 10 October 2004. Bilger, Burkhard (29 March 2004). "The Height Gap". The New Yorker. Archived from the original on 2 April 2004. "Americans Slightly Taller, Much Heavier Than Four Decades Ago". National Center for Health Statistics. 27 October 2004. Aminorroaya, A.; Amini, M.; Naghdi, H. & Zadeh, A. H. (2003). "Growth charts of heights and weights of male children and adolescents of Isfahan, Iran" (PDF). Journal of Health, Population, and Nutrition. 21 (4): 341–346. PMID 15038589. S2CID 21907084. Archived from the original (PDF) on 19 September 2018. 6.
Celostátní antropologický výzkum dětí a mládeže 2001, Česká republika [6th Nationwide anthropological research of children and youth 2001, Czech Republic] (in Czech). Prague: State Health Institute (SZÚ). 2005. ISBN 978-8-07071-251-1. Bogin, Barry (2001). The Growth of Humanity. Hoboken, NJ: Wiley-Liss. ISBN 978-0-471-35448-2. Eveleth, P.B.; Tanner, J.M. (1990). Worldwide Variation in Human Growth (2nd ed.). Cambridge University Press. ISBN 978-0-521-35916-0. Miura, K.; Nakagawa, H. & Greenland, P. (2002). "Invited commentary: Height-cardiovascular disease relation: where to go from here?". American Journal of Epidemiology. 155 (8): 688–689. doi:10.1093/aje/155.8.688. PMID 11943684. Ruff, Christopher (October 2002). "Variation in human body size and shape". Annual Review of Anthropology. 31: 211–232. doi:10.1146/annurev.anthro.31.040402.085407. "Los españoles somos 3,5 cm más altos que hace 20 años" [Spaniards are 3.5 cm taller than 20 years ago]. 20 minutos (in Spanish). 31 July 2006. Krishan, K. & Sharma, J. C. (2002). "Intra-individual difference between recumbent length and stature among growing children". Indian Journal of Pediatrics. 69 (7): 565–569. doi:10.1007/BF02722678. PMID 12173694. S2CID 22427304. Case, A. & Paxson, C. (2008). "Stature and Status: Height, ability, and labor market outcomes". The Journal of Political Economy. 116 (3): 499–532. doi:10.1086/589524. PMC 2709415. PMID 19603086. Sakamaki, R.; Amamoto, R.; Mochida, Y.; Shinfuku, N. & Toyama, K. (2005). "A comparative study of food habits and body shape perception of university students in Japan and Korea". Nutrition Journal. 4: 31. doi:10.1186/1475-2891-4-31. PMC 1298329. PMID 16255785. Habicht, Michael E.; Henneberg, Maciej; Öhrström, Lena M.; Staub, Kaspar & Rühli, Frank J. (27 April 2015). "Body height of mummified pharaohs supports historical suggestions of sibling marriages". American Journal of Physical Anthropology. 157 (3): 519–525. doi:10.1002/ajpa.22728. PMID 25916977. 
Further reading Marouli, Eirini; et al. (9 February 2017). "Rare and low-frequency coding variants alter human adult height". Nature. 542 (7640): 186–190. Bibcode:2017Natur.542..186M. doi:10.1038/nature21039. PMC 5302847. PMID 28146470. External links CDC National Center for Health Statistics: Growth Charts of American Percentiles fao.org, Body Weights and Heights by Countries (given in percentiles) The Height Gap, article discussing differences in height around the world Tallest in the World: Native Americans of the Great Plains in the Nineteenth Century European Heights in the Early Eighteenth Century Spatial Convergence in Height in East-Central Europe, 1890–1910 The Biological Standard of Living in Europe During the Last Two Millennia Health and Nutrition in the Preindustrial Era: Insights from a Millennium of Average Heights in Northern Europe Our World in Data – Human Height – visualizations of how human height around the world has changed historically (by Max Roser); charts for all countries, world maps, and links to more data sources. What Has Happened to the Quality of Life in the Advanced Industrialized Nations? A century of trends in adult human height, NCD Risk Factor Collaboration (NCD-RISC), doi:10.7554/eLife.13410, 25 July 2016
You are a medical knowledge base. Your task is to elucidate medical terminology, offering insights into their meanings, origins, and applications.
Can you demystify the medical term 'Acute pericarditis' for me?
Acute pericarditis is a type of pericarditis (inflammation of the sac surrounding the heart, the pericardium) usually lasting less than 6 weeks. It is the most common condition affecting the pericardium. Signs and symptoms Chest pain is one of the common symptoms of acute pericarditis. It is usually of sudden onset, occurring in the anterior chest, and often has a sharp quality that worsens with breathing in or coughing, due to concurrent inflammation of the pleural surface. The pain may be reduced by sitting up and leaning forward and worsened by lying down, and it may also radiate to the back or to one or both trapezius ridges. However, the pain can also be dull and steady, resembling the chest pain of an acute myocardial infarction. As with any chest pain, other causes must also be ruled out, such as GERD, pulmonary embolism, muscular pain, etc. A pericardial friction rub is a very specific sign of acute pericarditis, meaning that the presence of this sign invariably indicates the presence of disease. However, absence of this sign does not rule out disease. The rub is best heard with the diaphragm of the stethoscope at the left sternal border as a squeaky or scratching sound, resembling the sound of pieces of leather rubbing against each other. This sound should be distinguished from a murmur, which is similar but sounds more like a "swish" than a scratch. The pericardial rub is said to be generated by the friction between the two inflamed layers of the pericardium; however, even a large pericardial effusion does not necessarily produce a rub. The rub is best heard during the maximal movement of the heart within the pericardial sac, namely during atrial systole, ventricular systole, and the filling phase of early ventricular diastole. Fever may be present, since this is an inflammatory process. Causes There are several causes of acute pericarditis.
In developed nations, the cause of most (80–90%) cases of acute pericarditis is unknown, but a viral cause is suspected in the majority of such cases. The other 10–20% of acute pericarditis cases have various causes, including connective tissue diseases (e.g., systemic lupus erythematosus), cancer, or an inflammatory reaction of the pericardium following trauma to the heart, such as after a heart attack (Dressler's syndrome). Familial Mediterranean fever and TNF receptor-associated periodic syndrome are rare inherited autoinflammatory diseases capable of causing recurring episodes of acute pericarditis. Pathophysiology Clinical presentation of diseases of the pericardium may vary between: acute and recurrent pericarditis; pericardial effusion without major hemodynamic compromise; cardiac tamponade; constrictive pericarditis; and effusive-constrictive pericarditis. Diagnosis For acute pericarditis to be formally diagnosed, two or more of the following criteria must be present: chest pain consistent with a diagnosis of acute pericarditis (sharp chest pain worsened by breathing in or a cough), a pericardial friction rub, a pericardial effusion, and changes on electrocardiogram (ECG) consistent with acute pericarditis. A complete blood count may show an elevated white count, and serum C-reactive protein may be elevated. Acute pericarditis is associated with a modest increase in serum creatine kinase MB (CK-MB) and cardiac troponin I (cTnI), both of which are also markers for injury to the muscular layer of the heart. Therefore, it is imperative to also rule out acute myocardial infarction in the face of these biomarkers. Elevation of these substances may occur when there is inflammation of the heart's muscular layer in addition to acute pericarditis. Also, ST elevation on EKG (see below) is more common in those patients with a cTnI > 1.5 µg/L. Coronary angiography in those patients should indicate normal vascular perfusion.
Troponin levels increase in 35–50% of people with pericarditis. Electrocardiogram (ECG) changes in acute pericarditis mainly indicate inflammation of the epicardium (the layer directly surrounding the heart), since the fibrous pericardium is electrically inert. For example, in uremia there is no inflammation of the epicardium, only fibrin deposition, and therefore the EKG in uremic pericarditis will be normal. Typical EKG changes in acute pericarditis include: stage 1 – diffuse, positive ST elevations with reciprocal ST depression in aVR and V1; elevation of the PR segment in aVR and depression of the PR segment in other leads, especially the left heart leads V5 and V6, indicates atrial injury; stage 2 – normalization of ST and PR deviations; stage 3 – diffuse T wave inversions (may not be present in all patients); stage 4 – the EKG becomes normal, or T waves may be indefinitely inverted. The two most common clinical conditions in which ECG findings may mimic pericarditis are acute myocardial infarction (AMI) and generalized early repolarization. As opposed to pericarditis, AMI usually causes localized convex ST-elevation, usually associated with reciprocal ST-depression, which may also be accompanied by Q-waves, T-wave inversions (while ST is still elevated, unlike pericarditis), arrhythmias and conduction abnormalities. In AMI, PR-depressions are rarely present. Early repolarization usually occurs in young males (age <40 years), and its ECG changes are characterized by terminal R-S slurring, temporal stability of ST-deviations, and a J-height/T-amplitude ratio in V5 and V6 of <25%, as opposed to pericarditis, where terminal R-S slurring is very uncommon and the J-height/T-amplitude ratio is ≥25%. Very rarely, ECG changes in hypothermia may mimic pericarditis; however, a detailed history and the presence of an Osborn wave in hypothermia can help differentiate the two. Another important diagnostic electrocardiographic sign in acute pericarditis is the Spodick sign.
It refers to PR-segment depression, usually (but not always) in association with a downsloping TP segment, and is present in up to 80% of patients affected with acute pericarditis. The sign is often best visualized in lead II and the lateral precordial leads. In addition, Spodick's sign may serve as an important electrocardiographic tool for distinguishing acute pericarditis from acute coronary syndrome. The presence of a classical Spodick's sign is often a giveaway to the diagnosis. Rarely, electrical alternans may be seen, depending on the size of the effusion. A chest x-ray is usually normal in acute pericarditis but can reveal an enlarged heart if a pericardial effusion greater than 200 mL in volume is present. Conversely, patients with unexplained new-onset cardiomegaly should always be worked up for acute pericarditis. An echocardiogram is typically normal in acute pericarditis but can reveal pericardial effusion, the presence of which supports the diagnosis, although its absence does not exclude it. Treatment Patients with uncomplicated acute pericarditis can generally be treated and followed up in an outpatient clinic. However, those with high-risk factors for developing complications (see above) will need to be admitted to an inpatient service, most likely an ICU setting. High-risk patients include those with: subacute onset; high fever (>100.4 °F/38 °C) and leukocytosis; development of cardiac tamponade; large pericardial effusion (echo-free space >20 mm) resistant to NSAID treatment; immunocompromise; a history of oral anticoagulation therapy; acute trauma; or failure to respond to seven days of NSAID treatment. Pericardiocentesis is a procedure whereby the fluid in a pericardial effusion is removed through a needle.
It is performed under the following conditions: presence of moderate or severe cardiac tamponade; diagnostic purposes in suspected purulent, tuberculous, or neoplastic pericarditis; or a persistent, symptomatic pericardial effusion. In patients with underlying causes other than viral, the specific etiology should be treated. In idiopathic or viral pericarditis, NSAIDs are the mainstay of treatment. The goal of therapy is to reduce pain and inflammation; the course of the disease may not be affected. The preferred NSAID is ibuprofen because of its rare side effects, favorable effect on coronary flow, and large dose range. Depending on severity, dosing is between 300 and 800 mg every 6–8 hours for days or weeks as needed. An alternative protocol is aspirin 800 mg every 6–8 hours. Dose tapering of NSAIDs may be needed. In pericarditis following acute myocardial infarction, NSAIDs other than aspirin should be avoided, since they can impair scar formation. As with all NSAID use, gastrointestinal protection should be provided. Failure to respond to NSAIDs within one week (indicated by persistence of fever, worsening of condition, new pericardial effusion, or continuing chest pain) likely indicates that a cause other than viral or idiopathic is in process. Colchicine, which has been essential in treating recurrent pericarditis, has been supported for routine use in acute pericarditis by recent prospective studies. Colchicine can be given at 0.6 mg twice a day (0.6 mg daily for patients <70 kg) for 3 months following an acute attack. It should be considered in all patients with acute pericarditis, preferably in combination with a short course of NSAIDs. Patients with a first episode of acute idiopathic or viral pericarditis should be treated with an NSAID plus colchicine 1–2 mg on the first day, followed by 0.5 mg once or twice daily for three months.
It should be avoided or used with caution in patients with severe chronic kidney disease, hepatobiliary dysfunction, blood dyscrasias, or gastrointestinal motility disorders. Corticosteroids are usually used in cases that are clearly refractory to NSAIDs and colchicine and in which a specific cause has not been found. Systemic corticosteroids are usually reserved for those with autoimmune disease. Prognosis One of the most feared complications of acute pericarditis is cardiac tamponade. Cardiac tamponade is the accumulation of enough fluid in the pericardial space (a pericardial effusion) to cause serious obstruction to the inflow of blood to the heart. Signs of cardiac tamponade include distended neck veins, muffled heart sounds when listening with a stethoscope, and low blood pressure (together known as Beck's triad). This condition can be fatal if not immediately treated. Another longer-term complication of pericarditis, if it recurs over a longer period of time (normally more than 3 months), is progression to constrictive pericarditis. Recent studies have shown this to be an uncommon complication. The definitive treatment for constrictive pericarditis is pericardial stripping, a surgical procedure in which the entire pericardium is peeled away from the heart. References Further reading Chugh, S. N. (2014-05-14). Textbook of Clinical Electrocardiography. Jaypee Brothers Publishers. ISBN 9789350906088. == External links ==
You serve as an encyclopedia of medical terms. Deliver clear, detailed explanations, making medical language understandable to a diverse audience.
I'm looking for a concise explanation of the medical term 'Pelvis justo major.'
Pelvis justo major (also called "giant pelvis") is a rare condition of the adult female pelvis in which the pelvis flares above the iliopectineal line. It is 1.5 or more times larger than an average pelvis in every direction and is at least 42 cm (16.5 inches) in biiliac width. Even though this condition is classified as a congenital abnormality, it is not a medical disease or abnormality of the pelvis (the pelvis is a true gynecoid shape, only larger). Women with this condition may, at the time of delivery, have a precipitous birth. There is virtually no resistance of the huge pelvic opening to the size of a newborn, so only the soft parts resist the birth. With a huge justo major pelvis, there is no pelvic bone "molding" of the fetal head. With an average pelvic size (two-thirds or less of justo major size), the usual pelvic molding process slows the birth, resulting in a slow and gradual stretching of the vaginal opening for primiparous women. When a huge justo major pelvis allows such an extremely rapid vaginal birth, there can be tears of the perineal soft tissues: at the time of delivery, the strong uterine contractions and maternal bearing down almost instantly overwhelm the integrity of a tightened and previously unstretched vaginal orifice. This is often the case if such women have not previously practiced vaginal stretching to the degree that allows such an instant birthing, especially for a primiparous woman. This "instant delivery" problem causes many OBGYN doctors to stress the importance of women with a huge pelvis practicing pre-delivery vaginal stretching to avoid perineal injury. Such a large female pelvis is present in fewer than one in a thousand adult women. Women typically reach their maximum pelvis size by 21 years of age; for those with a huge pelvis, the resulting big hips will not go away no matter how much they diet, as bone will not shrink.
It is not unusual for such women, whose pear shape is due to a huge pelvis, to give up watching calories for smaller hips, which can lead to obesity. However, pear-shaped obese women do not necessarily have a huge pelvis, and a measurement scan or anthropometry by calipers is required to diagnose the justo major condition. The incidence of justo major pelvis does not follow a strictly standard-deviation-type variation; rather, its distribution is skewed to the right. Incidence varies with geographic region of the world (e.g., Poland has a high incidence). Justo major pelvis is classified as congenital and is thought to be partially inherited, especially from the maternal side. == References ==
You function as a medical informant. Please provide in-depth yet accessible descriptions of medical terms, suitable for a broad audience.
Can you break down the meaning of the medical term 'Rabies vaccine' for me?
The rabies vaccine is a vaccine used to prevent rabies. A number of rabies vaccines are available that are both safe and effective. They can be used to prevent rabies before, and, for a period of time, after exposure to the rabies virus, which commonly occurs through a dog bite or a bat bite. Doses are usually given by injection into the skin or muscle. After exposure, the vaccination is typically used along with rabies immunoglobulin. It is recommended that those who are at high risk of exposure be vaccinated before potential exposure. Rabies vaccines are effective in humans and other animals, and vaccinating dogs is very effective in preventing the spread of rabies to humans. A long-lasting immunity to the virus develops after a full course of treatment. Rabies vaccines may be used safely by all age groups. About 35 to 45 percent of people develop a brief period of redness and pain at the injection site, and 5 to 15 percent of people may experience fever, headaches, or nausea. After exposure to rabies, there is no contraindication to its use, because untreated rabies is overwhelmingly fatal. The first rabies vaccine was introduced in 1885 and was followed by an improved version in 1908. Millions of people globally are vaccinated against the virus. It is on the World Health Organization's List of Essential Medicines. Medical uses Before exposure The World Health Organization (WHO) recommends vaccinating those who are at high risk of the disease, such as children who live in areas where it is common. Other groups may include veterinarians, researchers, or people planning to travel to regions where rabies is common. Three doses of the vaccine are given over a one-month period, on days zero, seven, and either twenty-one or twenty-eight. After exposure For individuals who have been potentially exposed to the virus, four doses over two weeks are recommended, as well as an injection of rabies immunoglobulin with the first dose.
This is known as post-exposure vaccination. For people who have previously been vaccinated, only a single dose of the rabies vaccine is required. However, vaccination after exposure is neither a treatment nor a cure for rabies; it can only prevent the development of rabies in a person if given before the virus reaches the brain. Because the rabies virus has a relatively long incubation period, post-exposure vaccinations are typically highly effective. Additional doses Immunity following a course of doses is typically long lasting, and additional doses are usually not needed unless the person has a high risk of contracting the virus. Those at risk may have tests done to measure the amount of rabies antibodies in the blood, and then get rabies boosters as needed. Following administration of a booster dose, one study found 97% of immunocompetent individuals demonstrated protective levels of neutralizing antibodies after ten years. Safety Rabies vaccines are safe in all age groups. About 35 to 45 percent of people develop a brief period of redness and pain at the injection site, and 5 to 15 percent of people may experience fever, headaches, or nausea. Because of the certain fatality of the virus, receiving the vaccine is always advisable.Vaccines made from nerve tissue are used in a few countries, mainly in Asia and Latin America, but are less effective and have greater side effects. Their use is thus not recommended by the World Health Organization. Types The human diploid cell rabies vaccine (H.D.C.V.) was started in 1967. Human diploid cell rabies vaccines are inactivated vaccines made using the attenuated Pitman-Moore L503 strain of the virus.In addition to these developments, newer and less expensive purified chicken embryo cell vaccines (CCEEV) and purified Vero cell rabies vaccines are now available and are recommended for use by the WHO. 
The purified Vero cell rabies vaccine uses the attenuated Wistar strain of the rabies virus and uses the Vero cell line as its host. CCEEVs can be used in both pre- and post-exposure vaccinations. CCEEVs use inactivated rabies virus grown in either embryonated eggs or cell cultures and are safe for use in humans and animals. The vaccine was attenuated and prepared in the human diploid cell strain WI-38, which was gifted to Dr. Hilary Koprowski at the Wistar Institute by Dr. Leonard Hayflick, an associate member, who developed this normal human diploid cell strain. History Virtually all infections with rabies resulted in death until two French scientists, Louis Pasteur and Émile Roux, developed the first rabies vaccination in 1885. Nine-year-old Joseph Meister (1876–1940), who had been mauled by a rabid dog, was the first human to receive this vaccine. The treatment started with a subcutaneous injection on 6 July 1885, at 8:00 pm, and was followed by 12 additional doses administered over the following 10 days. The first injection was derived from the spinal cord of an inoculated rabbit which had died of rabies 15 days earlier. All the doses were obtained by attenuation, but later ones were progressively more virulent. The Pasteur-Roux vaccine attenuated the harvested virus samples by allowing them to dry for five to ten days. Similar nerve tissue-derived vaccines are still used in some countries, and while they are much cheaper than modern cell culture vaccines, they are not as effective. Neural tissue vaccines also carry a certain risk of neurological complications. Cost When the modern cell-culture rabies vaccine was first introduced in the early 1980s, it cost $45 per dose and was considered too expensive. The cost of the rabies vaccine continues to be a limitation to acquiring pre-exposure rabies immunization for travelers from developed countries.
In 2015 in the United States, a course of three doses could cost over $1,000, while in Europe a course costs around €100. It is possible and more cost-effective to split one intramuscular dose of the vaccine into several intradermal doses. This method is recommended by the World Health Organization in areas constrained by cost or supply issues; according to the WHO, the intradermal route is as safe and effective as the intramuscular one. Veterinary use Pre-exposure immunization has been used on domesticated and wild populations. In many jurisdictions, domestic dogs, cats, ferrets, and rabbits are required to be vaccinated. There are two main types of vaccines used for domesticated animals and pets (including pets from wildlife species): inactivated rabies virus (similar technology to that given to humans), administered by injection; and modified live viruses administered orally (by mouth), meaning live rabies virus from attenuated strains, i.e. strains that have developed mutations that weaken them so that they do not cause disease. Imrab is an example of a veterinary rabies vaccine containing the Pasteur strain of killed rabies virus. Several different types of Imrab exist, including Imrab, Imrab 3, and Imrab Large Animal. Imrab 3 has been approved for ferrets and, in some areas, pet skunks. Dogs Aside from vaccinating humans, another approach is to vaccinate dogs to prevent the spread of the virus. In 1979, the Van Houweling Research Laboratory of the Silliman University Medical Center in Dumaguete in the Philippines developed and produced a dog vaccine that gave three-year immunity from rabies. The development of the vaccine resulted in the elimination of rabies in many parts of the Visayas and Mindanao Islands.
The successful program in the Philippines was later used as a model by other countries, such as Ecuador and the Yucatán state of Mexico, in their fight against rabies, conducted in collaboration with the World Health Organization. In Tunisia, a government-sponsored rabies control program was initiated to give dog owners free vaccinations and so promote mass vaccination. The vaccine used countrywide is Rabisin (Mérial), a cell-based rabies vaccine. Vaccinations are often administered when owners take their dogs in for check-ups and visits at the vet. Oral rabies vaccines (see below for details) have been trialled on feral and stray dogs in some areas with high rabies incidence, as this could potentially be more efficient than catching and injecting them. However, these have not yet been deployed for dogs at large scale. Wild animals Wildlife species, primarily bats, raccoons, skunks, and foxes, act as reservoir species for different variants of the rabies virus. This results in the general occurrence of rabies as well as outbreaks in animal populations. Approximately 90% of all reported rabies cases in the US are from wildlife. Oral rabies vaccine Oral rabies vaccines in pellet form are intended to be given to wild animals to produce a herd immunity effect. The development of safe and effective rabies virus vaccines applied in attractive baits resulted in the first field trials in Switzerland in 1978, to immunize red foxes. There are currently two different types of oral wildlife rabies vaccine in use: modified live virus, i.e. attenuated vaccine strains of rabies virus such as SAG2 and SAD B19; and recombinant vaccinia virus expressing the rabies glycoprotein, a strain of the vaccinia virus (originally a smallpox vaccine) that has been engineered to encode the gene for the rabies glycoprotein.
It is mostly used in the USA (raccoons, foxes, and coyotes) and in western Europe (red foxes). Other experimental oral rabies vaccines in development include recombinant adenovirus vaccines. Oral rabies vaccination (ORV) programs have been used in many countries in an effort to control the spread of rabies and limit the risk of human contact with the rabies virus. ORV programs were initiated in Europe in the 1980s, in Canada in 1985, and in the United States in 1990. ORV is a preventive measure to eradicate rabies in wild animal vectors of disease, mainly foxes, raccoons, raccoon dogs, coyotes, and jackals, but it can also be used for dogs in developing countries. ORV programs typically use edible baits to deliver the vaccine to targeted animals. ORV baits consist of a small packet containing the oral vaccine, which is either coated in a fishmeal paste or encased in a fishmeal-polymer block. When an animal bites into the bait, the packet bursts and the vaccine is administered. Current research suggests that if adequate amounts of the vaccine are ingested, immunity to the virus should last for upwards of one year. By immunizing wild or stray animals, ORV programs work to create a buffer zone between the rabies virus and potential contact with humans, pets, or livestock. The effectiveness of ORV campaigns in specific areas is determined through trap-and-release methods. Titer tests are performed on blood drawn from the sample animals in order to measure rabies antibody levels. Baits are usually distributed by aircraft to cover large, rural regions more efficiently.
In suburban or urban regions, however, baits are distributed by hand in order to place them more precisely and to minimize human and pet contact with them. ORV programs have seen success in preventing the westward spread of raccoon variant rabies in the United States and even eradicating rabies in red foxes in Switzerland. The oral vaccine does not contain the whole rabies virus and has been proven safe in over 60 animal species, including cats and dogs. The idea of wildlife vaccination was conceived during the 1960s, and modified-live rabies viruses were used for the experimental oral vaccination of carnivores by the 1970s. The potential for human contact with baits is a present concern for ORV programs, but the inactivated rabies vaccine cannot cause rabies, and the recombinant poxvirus vaccine is based on an attenuated poxvirus which is unlikely to cause serious disease in humans. In the USA between 1990 and 2000, over 22 million doses of the vaccinia-rabies vaccine were distributed, but there were only 160 incidents of people touching a vaccine bait, and only one resulted in a serious infection. The person in this case had been bitten by her dog while removing a bait from its mouth. References External links "Rabies Vaccine Information Statement". U.S. Centers for Disease Control and Prevention (CDC). 9 April 2021. "Imovax". U.S. Food and Drug Administration (FDA). 16 December 2019. STN: 103931. "RabAvert - Rabies Vaccine". U.S. Food and Drug Administration (FDA). 12 December 2019. STN: BL 103334. Rabies Vaccines at the US National Library of Medicine Medical Subject Headings (MeSH)
You function as a medical informant. Please provide in-depth yet accessible descriptions of medical terms, suitable for a broad audience.
Can you break down the meaning of the medical term 'Endophthalmitis' for me?
Endophthalmitis is inflammation of the interior cavity of the eye, usually caused by infection. It is a possible complication of all intraocular surgeries, particularly cataract surgery, and can result in loss of vision or loss of the eye itself. Infection can be caused by bacteria or fungi, and is classified as exogenous (infection introduced by direct inoculation, as in surgery or penetrating trauma) or endogenous (organisms carried by blood vessels to the eye from another site of infection). Non-infectious causes include toxins, allergic reactions, and retained intraocular foreign bodies. Intravitreal injections are a rare cause, with an incidence rate usually less than 0.05%. Signs and symptoms There is usually a history of recent eye surgery or penetrating trauma to the eye. Symptoms include severe pain, vision loss, and intense redness of the conjunctiva. Hypopyon can be present and should be looked for on slit-lamp examination. It can first present with the black dot sign (Martin-Farina sign), where patients may report a small area of loss of vision that resembles a black dot or fly. An eye exam should be considered in systemic candidiasis, as up to 3% of cases of candidal blood infections lead to endophthalmitis. Complications Panophthalmitis — progression to involve all the coats of the eye. Corneal ulcer Orbital cellulitis Impairment of vision Complete loss of vision Loss of eye architecture Enucleation Cause Bacteria: N. meningitidis, Staphylococcus aureus, S. epidermidis, S. pneumoniae, other streptococcal spp., Cutibacterium acnes, Pseudomonas aeruginosa, other gram-negative organisms. Viruses: Herpes simplex virus. Fungi: Candida spp. Fusarium Parasites: Toxoplasma gondii, Toxocara. A recent systematic review found that the most common source of infectious transmission following cataract surgery was a contaminated intraocular solution (i.e.
irrigation solution, viscoelastic, or diluted antibiotic), although there is a large diversity of exogenous microorganisms that can travel via various routes, including the operating room environment, phacoemulsification machine, surgical instruments, topical anesthetics, intraocular lens, autoclave solution, and cotton wool swabs. Late-onset endophthalmitis is mostly caused by Cutibacterium acnes. Causative organisms are not present in all cases. Endophthalmitis can emerge by entirely sterile means, e.g. an allergic reaction to a drug administered intravitreally. Diagnosis Diagnosis rests on microbiology testing and PCR, and on distinguishing toxic anterior segment syndrome (TASS) from infectious endophthalmitis. Prevention A Cochrane Review sought to evaluate the effects of perioperative antibiotic prophylaxis for endophthalmitis following cataract surgery. The review showed high-certainty evidence that antibiotic injections in the eye with cefuroxime at the end of surgery lower the chance of endophthalmitis. The review also showed moderate-certainty evidence that antibiotic eye drops (levofloxacin or chloramphenicol) combined with antibiotic injections (cefuroxime or penicillin) probably lower the chance of endophthalmitis compared with injections or eye drops alone. Separate studies from the research showed that a periocular injection of penicillin with chloramphenicol-sulphadimidine eye drops, and an intracameral cefuroxime injection with topical levofloxacin, reduced the risk of developing endophthalmitis following cataract surgery. In the case of intravitreal injections, however, antibiotics are not effective. Studies have demonstrated no difference in infection rates with and without antibiotics when intravitreal injections are performed. The only consistent method of antibiotic prophylaxis in this instance is a solution of povidone-iodine applied pre-injection.
Treatment The patient needs urgent examination by an ophthalmologist, preferably a vitreoretinal specialist, who will usually decide on urgent intervention with intravitreal injection of potent antibiotics. Injections of vancomycin (to kill Gram-positive bacteria) and ceftazidime (to kill Gram-negative bacteria) are routine. Even though antibiotics can have negative impacts on the retina in high concentrations, the facts that visual acuity worsens in 65% of endophthalmitis patients and that prognosis gets poorer the longer an infection goes untreated make immediate intervention necessary. Endophthalmitis patients may also require urgent surgery (pars plana vitrectomy), and evisceration may be necessary to remove a severe and intractable infection which could result in a blind and painful eye. Steroids may be injected intravitreally if the cause is allergic. In patients with acute endophthalmitis, combined steroid and antibiotic treatment has been found to improve visual outcomes compared with antibiotic treatment alone, but whether it improves the resolution of acute endophthalmitis is unknown. References External links Endophthalmitis at eMedicine Fungal Endophthalmitis at eMedicine
You are a medical advisor. Your role is to break down medical terminology into clear, detailed explanations for a general audience.
Could you please explain the term 'Fenoldopam' in simple language?
Fenoldopam mesylate (Corlopam) is a drug and synthetic benzazepine derivative which acts as a selective D1 receptor partial agonist. Fenoldopam is used as an antihypertensive agent. It was approved by the Food and Drug Administration (FDA) in September 1997. Indications Fenoldopam is used as an antihypertensive agent postoperatively, and also intravenously (IV) to treat a hypertensive crisis. Since fenoldopam is an intravenous agent with minimal adrenergic effects that improves renal perfusion, in theory it could be beneficial in hypertensive patients with concomitant chronic kidney disease. It can cause reflex tachycardia, the degree of which depends on the infusion rate of the drug. Pharmacology Fenoldopam causes arterial/arteriolar vasodilation leading to a decrease in blood pressure by activating peripheral D1 receptors. It decreases afterload and also promotes sodium excretion via specific dopamine receptors along the nephron. The renal effect of fenoldopam and dopamine may involve physiological antagonism of the renin-angiotensin system in the kidney. In contrast to dopamine, fenoldopam is a selective D1 receptor agonist with no effect on beta adrenoceptors, although there is evidence that it may have some alpha-1 and alpha-2 adrenoceptor antagonist activity. D1 receptor stimulation activates adenylyl cyclase and raises intracellular cyclic AMP, resulting in vasodilation of most arterial beds, including renal, mesenteric, and coronary arteries, and a reduction in systemic vascular resistance. Fenoldopam has a rapid onset of action (4 minutes), a short duration of action (< 10 minutes), and a linear dose–response relationship at usual clinical doses. Side effects Adverse effects include headache, flushing, nausea, hypotension, reflex tachycardia, and increased intraocular pressure.
Contraindications, warnings and precautions Fenoldopam mesylate contains sodium metabisulfite, a sulfite that may rarely cause allergic-type reactions, including anaphylactic symptoms and asthma, in susceptible people. Fenoldopam mesylate should be administered with caution in patients with glaucoma or raised intraocular pressure, as fenoldopam raises intraocular pressure. Concomitant use of fenoldopam with a beta-blocker should be avoided if possible, as unexpected hypotension can result from beta-blocker inhibition of the sympathetic-mediated reflex tachycardia in response to fenoldopam. == References ==
You are a medical educator. Your role is to provide detailed and clear explanations for medical terms, suitable for both professionals and laypersons.
I'd like to learn more about the medical term 'Anticholinergic.' Can you provide some details?
Anticholinergics (anticholinergic agents) are substances that block the action of the neurotransmitter called acetylcholine (ACh) at synapses in the central and peripheral nervous system. These agents inhibit the parasympathetic nervous system by selectively blocking the binding of ACh to its receptor in nerve cells. The nerve fibers of the parasympathetic system are responsible for the involuntary movement of smooth muscles present in the gastrointestinal tract, urinary tract, lungs, sweat glands, and many other parts of the body. In broad terms, anticholinergics are divided into two categories in accordance with their specific targets in the central and peripheral nervous system and at the neuromuscular junction: antimuscarinic agents, and antinicotinic agents (ganglionic blockers, neuromuscular blockers). The term "anticholinergic" is typically used to refer to antimuscarinics which competitively inhibit the binding of ACh to muscarinic acetylcholine receptors; such agents do not antagonize the binding at nicotinic acetylcholine receptors at the neuromuscular junction, although the term is sometimes used to refer to agents which do so.
Medical uses Anticholinergic drugs are used to treat a variety of conditions: Dizziness (including vertigo and motion sickness-related symptoms) Extrapyramidal symptoms, a potential side-effect of antipsychotic medications Gastrointestinal disorders (e.g., peptic ulcers, diarrhea, pylorospasm, diverticulitis, ulcerative colitis, nausea, and vomiting) Genitourinary disorders (e.g., cystitis, urethritis, and prostatitis) Insomnia, although usually only on a short-term basis Respiratory disorders (e.g., asthma, chronic bronchitis, and chronic obstructive pulmonary disease [COPD]) Sinus bradycardia due to a hypersensitive vagus nerve Organophosphate-based nerve agent poisoning, such as VX, sarin, tabun, and soman (atropine is favoured in conjunction with an oxime, usually pralidoxime) Anticholinergics generally have antisialagogue effects (decreasing saliva production), and most produce some level of sedation, both being advantageous in surgical procedures. Until the beginning of the 20th century, anticholinergic drugs were widely used to treat psychiatric disorders. Physiological effects Delirium (often with hallucinations and delusions indistinguishable from reality) Ocular symptoms (from eye drops): mydriasis, pupil dilation, and acute angle-closure glaucoma in those with a shallow anterior chamber Anhidrosis, dry mouth, dry skin Fever Constipation Tachycardia Urinary retention Cutaneous vasodilation. Clinically the most significant feature is delirium, particularly in the elderly, who are most likely to be affected by the toxidrome. Side effects Long-term use may increase the risk of both cognitive and physical decline. It is unclear whether anticholinergics affect the risk of death generally; in older adults, however, they do appear to increase the risk of death. Possible effects in the central nervous system resemble those associated with delirium; older patients are at a higher risk of experiencing these CNS side effects.
Toxicity An acute anticholinergic syndrome is reversible and subsides once all of the causative agents have been excreted. Reversible acetylcholinesterase inhibitor agents such as physostigmine can be used as an antidote in life-threatening cases. Wider use is discouraged due to the significant side effects related to cholinergic excess including seizures, muscle weakness, bradycardia, bronchoconstriction, lacrimation, salivation, bronchorrhea, vomiting, and diarrhea. Even in documented cases of anticholinergic toxicity, seizures have been reported after the rapid administration of physostigmine. Asystole has occurred after physostigmine administration for tricyclic antidepressant overdose, so a conduction delay (QRS > 0.10 second) or suggestion of tricyclic antidepressant ingestion is generally considered a contraindication to physostigmine administration. Pharmacology Anticholinergics are classified according to the receptors that are affected: Antimuscarinic agents operate on the muscarinic acetylcholine receptors. The majority of anticholinergic drugs are antimuscarinics. Antinicotinic agents operate on the nicotinic acetylcholine receptors. The majority of these are non-depolarising skeletal muscle relaxants for surgical use that are structurally related to curare. Several are depolarizing agents. Examples Examples of common anticholinergics: Plants of the family Solanaceae contain various anticholinergic tropane alkaloids, such as scopolamine, atropine, and hyoscyamine. Physostigmine is one of only a few drugs that can be used as an antidote for anticholinergic poisoning. Nicotine also counteracts anticholinergics by activating nicotinic acetylcholine receptors. Caffeine (although an adenosine receptor antagonist) can counteract the anticholinergic symptoms by reducing sedation and increasing acetylcholine activity, thereby causing alertness and arousal. 
Psychoactive uses When a significant amount of an anticholinergic is taken into the body, a toxic reaction known as acute anticholinergic syndrome may result. This may happen accidentally or intentionally as a consequence of either recreational or entheogenic drug use, though many users find the side effects to be exceedingly unpleasant and not worth the recreational effects they experience. In the context of recreational use, anticholinergics are often called deliriants. Plant sources The most common plants containing anticholinergic alkaloids (including atropine, scopolamine, and hyoscyamine among others) are: Atropa belladonna (deadly nightshade) Brugmansia species Datura species Garrya species Hyoscyamus niger (henbane) Mandragora officinarum (mandrake) Use as a deterrent Several narcotic and opiate-containing drug preparations, such as those containing hydrocodone and codeine, are combined with an anticholinergic agent to deter intentional misuse. Examples include Hydromet/Hycodan (hydrocodone/homatropine), Lomotil (diphenoxylate/atropine) and Tussionex (hydrocodone polistirex/chlorpheniramine). However, opioid/antihistamine combinations are used clinically for their synergistic effect in the management of pain and maintenance of dissociative anesthesia (sedation) in such preparations as Meprozine (meperidine/promethazine) and Diconal (dipipanone/cyclizine), which act as strong anticholinergic agents. == References ==
You serve as a medical tutor. Your objective is to demystify medical terms, providing thorough explanations that cater to various levels of medical knowledge.
What does the medical term 'Apnea' encompass?
Apnea (BrE: apnoea) is the temporary cessation of breathing. During apnea, there is no movement of the muscles of inhalation, and the volume of the lungs initially remains unchanged. Depending on how blocked the airways are (patency), there may or may not be a flow of gas between the lungs and the environment; if there is sufficient flow, gas exchange within the lungs and cellular respiration are not severely affected. Voluntarily doing this is called holding one's breath. Apnea may first be diagnosed in childhood, and it is recommended to consult an ENT specialist, allergist or sleep physician to discuss symptoms when noticed; malformation and/or malfunctioning of the upper airways may be observed by an orthodontist. Cause Apnea can be involuntary—for example, drug-induced (such as by opiate toxicity), mechanically/physiologically induced (for example, by strangulation or choking), or a consequence of neurological disease or trauma. During sleep, people with severe sleep apnea can have over thirty episodes of intermittent apnea per hour every night. Apnea can also be observed during periods of heightened emotion, such as during crying or accompanied by the Valsalva maneuver when a person laughs. Apnea is a common feature of sobbing while crying, characterised by slow but deep and erratic breathing followed by brief periods of breath holding. Breath-holding spells are another example; these are sometimes emotional in cause and are usually observed in children as a result of frustration, emotional stress and other psychological extremes. Voluntary apnea can be achieved by closing the vocal cords, simultaneously keeping the mouth closed and blocking the nasal vestibule, or constantly activating expiratory muscles, not allowing any inspiration. Complications Under normal conditions, humans cannot store much oxygen in the body. Prolonged apnea leads to severe lack of oxygen in the blood circulation, leading to dysfunction of organ systems.
Permanent brain damage can occur after as little as three minutes, and death will inevitably ensue after a few more minutes unless ventilation is restored. However, under special circumstances such as hypothermia, hyperbaric oxygenation, apneic oxygenation (see below), or extracorporeal membrane oxygenation, much longer periods of apnea may be tolerated without severe detrimental consequences. Untrained humans usually cannot sustain voluntary apnea for more than one or two minutes, since the urge to breathe becomes unbearable. The reason for the time limit of voluntary apnea is that the rate of breathing and the volume of each breath are tightly regulated to maintain constant values of CO2 tension and pH of the blood, more so than oxygen levels. In apnea, CO2 is not removed through the lungs and accumulates in the blood. The consequent rise in CO2 tension and drop in pH result in stimulation of the respiratory centre in the brain, which eventually cannot be overcome voluntarily. The accumulation of carbon dioxide in the lungs will eventually irritate and trigger impulses from the respiratory center of the brain and the phrenic nerve. Rising levels of carbon dioxide signal the body to breathe and forcibly resume unconscious respiration. The lungs start to feel as if they are burning, and the signals the body receives from the brain when CO2 levels are too high include strong, painful, and involuntary contractions or spasms of the diaphragm and the muscles between the ribs. At some point, the spasms become so frequent, intense and unbearable that continued holding of the breath is nearly impossible. When a person is immersed in water, physiological changes due to the mammalian diving reflex enable somewhat longer tolerance of apnea even in untrained persons, since breathing is not possible underwater. Tolerance can additionally be trained.
The ancient technique of free-diving requires breath-holding, and world-class free-divers can hold their breath underwater up to depths of 214 metres (702 ft) and for more than four minutes. Apneists, in this context, are people who can hold their breath for a long time. Hyperventilation Voluntary hyperventilation before beginning voluntary apnea is commonly believed to allow the person involved to safely hold their breath for a longer period. In reality, it will give the impression that one does not need to breathe, while the body is actually experiencing a blood-oxygen level that would normally, and indirectly, invoke a strong dyspnea and eventually involuntary breathing. Some have incorrectly attributed the effect of hyperventilation to increased oxygen in the blood, not realizing that it is actually due to a decrease in CO2 in the blood and lungs. Blood leaving the lungs is normally fully saturated with oxygen, so hyperventilation of normal air cannot increase the amount of oxygen available. Lowering the CO2 concentration increases the pH of the blood, thus increasing the time before the blood becomes acidic enough to stimulate the respiratory center, as described above. While hyperventilation will yield slightly longer breath-holding times, any small time increase is at the expense of possible hypoxia, which might not be felt as easily. One using this method can suddenly lose consciousness unnoticed—a shallow water blackout—as a result. If a person loses consciousness underwater, there is considerable danger that they will drown. An alert diving partner or nearby lifeguard would be in the best position to rescue such a person. Static apnea blackout occurs at the surface when a motionless diver holds their breath long enough for the circulating oxygen in the blood to fall below that required for the brain to maintain consciousness.
It involves no pressure changes in the body and is usually performed to enhance breath-hold time. It should never be practiced alone, but only under strict safety protocols with a safety guard or equipment beside the diver. Apneic oxygenation Because the exchange of gases between the blood and the airspace of the lungs is independent of the movement of gas to and from the lungs, enough oxygen can be delivered to the circulation even if a person is apneic and the diaphragm does not move. With the onset of apnea, low pressure develops in the airspace of the lungs because more oxygen is absorbed than CO2 is released. With the airways closed or obstructed, this will lead to a gradual collapse of the lungs and suffocation. However, if the airways are open, any gas supplied to the upper airways will follow the pressure gradient and flow into the lungs to replace the oxygen consumed. If pure oxygen is supplied, this process will serve to replenish the oxygen stored in the lungs and maintain sufficient oxygenation. The uptake of oxygen into the blood will then remain at the usual level, and the normal functioning of the organs will not be affected. A detriment to this hyperoxygenation is the occurrence of nitrogen washout, which can lead to absorption atelectasis. However, no CO2 is removed during apnea. The partial pressure of CO2 in the airspace of the lungs will quickly equilibrate with that of the blood. As the blood is loaded with CO2 from metabolism without a way to remove it, more and more CO2 will accumulate and eventually displace oxygen and other gases from the airspace. CO2 will also accumulate in the tissues of the body, resulting in respiratory acidosis. Under ideal conditions (i.e., if pure oxygen is breathed before onset of apnea to remove all nitrogen from the lungs, and pure supplemental oxygen is insufflated), apneic oxygenation could theoretically be sufficient to provide enough oxygen for survival of more than one hour's duration in a healthy adult.
However, accumulation of carbon dioxide (described above) would remain the limiting factor. Apneic oxygenation is more than a physiologic curiosity. It can be employed to provide a sufficient amount of oxygen in thoracic surgery when apnea cannot be avoided, and during manipulations of the airways such as bronchoscopy, intubation, and surgery of the upper airways. However, because of the limitations described above, apneic oxygenation is inferior to extracorporeal circulation using a heart-lung machine and is therefore used only in emergencies, in short procedures, or where extracorporeal circulation cannot be accessed. Use of PEEP valves is also an accepted alternative (5 cm H2O in average-weight patients; 10 cm H2O significantly improved lung and chest wall compliance in morbidly obese patients). In 1959, Frumin described the use of apneic oxygenation during anesthesia and surgery. Of the eight test subjects in this landmark study, the highest recorded PaCO2 was 250 millimeters of mercury, and the lowest arterial pH was 6.72 after 53 minutes of apnea. Apnea scientific studies Studies have found that spleen volume is slightly reduced during short breath-hold apnea in healthy adults. Apnea test in determining brain death A recommended practice for the clinical diagnosis of brain death formulated by the American Academy of Neurology hinges on the conjunction of three diagnostic criteria: coma, absence of brainstem reflexes, and apnea (defined as the inability of the patient to breathe unaided, that is, with no life support systems such as ventilators). The apnea test follows a delineated protocol. Apnea testing is not suitable in patients who are hemodynamically unstable with increasing vasopressor needs, metabolic acidosis, or a requirement for high levels of ventilatory support. Apnea testing carries the risk of arrhythmias, worsening hemodynamic instability, or metabolic acidosis beyond the level of recovery, and can potentially make the patient unsuitable for organ donation (see above).
In this situation a confirmatory test is warranted, as it is unsafe to perform the apnea test on the patient. Etymology and pronunciation The word apnea (or apnoea) uses combining forms of a- + -pnea, from Greek ἄπνοια, from ἀ-, privative, and πνέειν, to breathe. See pronunciation information at dyspnea. See also Apnea and Work Apnea-hypopnea index References Nunn, J. F. (1993). Applied Respiratory Physiology (4th ed.). Butterworth-Heinemann. ISBN 0-7506-1336-X. External links Sleep Apnea, a resource from the Harvard Division of Sleep Medicine on Obstructive Sleep Apnea apneacalculator.com, information about Apnea and the apnea-calculator for clinical treatment of Obstructive Sleep Apnea Freediving Courses & Training in the UK, information about learning the sport of Freediving, the club is called Apneists UK DiveWise.Org Non-profit scientific and educational resource for apnea divers DAN Breath-Hold Workshop Divers Alert Network 2006 Breath-Hold Diving Workshop PDF
You serve as an encyclopedia of medical terms. Deliver clear, detailed explanations, making medical language understandable to a diverse audience.
Could you provide a brief overview of 'Primary biliary cholangitis' in a medical context?
Primary biliary cholangitis (PBC), previously known as primary biliary cirrhosis, is an autoimmune disease of the liver. It results from a slow, progressive destruction of the small bile ducts of the liver, causing bile and other toxins to build up in the liver, a condition called cholestasis. Further slow damage to the liver tissue can lead to scarring, fibrosis, and eventually cirrhosis. Common symptoms are tiredness, itching, and in more advanced cases, jaundice. In early cases, the only changes may be those seen in blood tests. PBC is a relatively rare disease, affecting up to one in 3,000–4,000 people. It is much more common in women, with a sex ratio of at least 9:1 female to male. The condition has been recognised since at least 1851, and was named "primary biliary cirrhosis" in 1949. Because cirrhosis is a feature only of advanced disease, a change of its name to "primary biliary cholangitis" was proposed by patient advocacy groups in 2014. Signs and symptoms People with PBC experience fatigue (80%); this is a nonspecific symptom and can be debilitating, with a huge impact on quality of life. Its pathogenesis is still unknown, and it is quite challenging to characterise and to treat. Comorbidities that could contribute to or worsen fatigue, such as depression, hypothyroidism, anaemia, obesity, or medication side effects, should be promptly identified and treated. Dry skin and dry eyes are also common. Itching (pruritus) occurs in 20–70% of cases, and can develop at any stage of the disease; it does not correlate with progression of liver disease, and may even improve or disappear as the disease advances. It is typically mild to moderate in intensity. Given the impact on quality of life and night sleep, pruritus is correlated with fatigue. It can rarely be severe, nonresponsive to medical therapy, and require liver transplant. Pruritus is characteristically intermittent, worse at night, and improves during summer.
People with more severe PBC may have jaundice (yellowing of the eyes and skin). PBC impairs bone density, and the risk of fracture increases. Xanthelasma (skin lesions around the eyes) or other xanthomas may be present as a result of increased cholesterol levels. PBC can eventually progress to cirrhosis of the liver. This, in turn, may lead to a number of symptoms or complications, including: Fluid retention in the abdomen (ascites) in more advanced disease Enlarged spleen in more advanced disease Oesophageal varices in more advanced disease Hepatic encephalopathy, including coma in extreme cases, in more advanced disease. People with PBC may also sometimes have the findings of an associated extrahepatic autoimmune disorder such as thyroid disease, rheumatoid arthritis, or Sjögren's syndrome (in up to 80% of cases). Causes PBC has an immunological basis, and is classified as an autoimmune disorder. It results from a slow, progressive destruction of the small bile ducts of the liver, with the intralobular ducts and the canals of Hering (intrahepatic ductules) being affected early in the disease. Most people with PBC (more than 90%) have antimitochondrial antibodies (AMAs) against pyruvate dehydrogenase complex (PDC-E2), an enzyme complex found in the mitochondria. People who are negative for AMAs are usually found to be positive when more sensitive methods of detection are used. People with PBC may also have been diagnosed with another autoimmune disease, such as a rheumatological, endocrinological, gastrointestinal, pulmonary, or dermatological condition, suggesting shared genetic and immune abnormalities. Common associations include Sjögren's syndrome, systemic sclerosis, rheumatoid arthritis, lupus, hypothyroidism, and coeliac disease. A genetic predisposition to disease has been thought to be important for some time.
Evidence for this includes cases of PBC in family members, identical twins both having the condition (concordance), and clustering of PBC with other autoimmune diseases. In 2009, a Canadian-led group of investigators reported in the New England Journal of Medicine the results of the first PBC genome-wide association study. This research revealed parts of the IL12 signaling cascade, particularly IL12A and IL12RB2 polymorphisms, to be important in the aetiology of the disease, in addition to the HLA region. In 2012, two independent PBC association studies increased the total number of associated genomic regions to 26, implicating many genes involved in cytokine regulation, such as TYK2, SH2B3, and TNFSF11. A study of over 2,000 patients identified a gene, POGLUT1, that appeared to be associated with this condition. Earlier studies had also suggested that this gene may be involved. The implicated protein is an endoplasmic reticulum O-glucosyltransferase. An environmental Gram-negative alphaproteobacterium, Novosphingobium aromaticivorans, has been associated with this disease, with several reports suggesting an aetiological role for this organism. The mechanism appears to be a cross-reaction between the proteins of the bacterium and the mitochondrial proteins of the liver cells. The gene encoding CD101 may also play a role in host susceptibility to this disease. A failure of immune tolerance against the mitochondrial pyruvate dehydrogenase complex (PDC-E2) is a primary cause, with shedding of the antigen into apoptotic bodies or "apotopes" leading to its anatomic localization. Similar autoreactivity may also occur with other proteins, including the gp210 and p62 nuclear pore proteins. Gp210 has increased expression in the bile ducts of anti-gp210-positive patients, and these proteins may be associated with prognosis. 
Diagnosis Most patients are currently diagnosed when asymptomatic, having been referred to the hepatologist for abnormal liver function tests (mostly raised GGT or alkaline phosphatase) performed for annual screening blood tests. Other frequent scenarios include screening of patients with nonliver autoimmune diseases, e.g. rheumatoid arthritis, investigation of elevated cholesterol, evaluation of itch, or unresolved cholestasis post partum. Diagnosing PBC is generally straightforward. The basis for a definite diagnosis is: Abnormalities in liver enzyme tests are usually present, and elevated gamma-glutamyl transferase and alkaline phosphatase are found in early disease. Elevations in bilirubin occur in advanced disease. Antimitochondrial antibodies are the characteristic serological marker for PBC, being found in 90–95% of patients and only 1% of controls. PBC patients have AMA against pyruvate dehydrogenase complex (PDC-E2), an enzyme complex that is found in the mitochondria. Those people who are AMA negative but with disease similar to PBC have been found to have AMAs when more sensitive detection methods are employed. Other autoantibodies may be present: Antinuclear antibody measurements are not diagnostic for PBC because they are not specific, but may have a role in prognosis. Anti-glycoprotein-210 antibodies, and to a lesser degree anti-p62 antibodies, correlate with the disease's progression toward end-stage liver failure. Anti-gp210 antibodies are found in 47% of PBC patients. Anti-centromere antibodies often correlate with developing portal hypertension. Anti-np62 and anti-sp100 antibodies are also found in association with PBC. Abdominal ultrasound, magnetic resonance cholangiopancreatography, or a CT scan is usually performed to rule out blockage of the bile ducts. This may be needed if a condition causing secondary biliary cirrhosis, such as other biliary duct disease or gallstones, needs to be excluded. 
A liver biopsy may help, and if uncertainty remains, as in some patients, an endoscopic retrograde cholangiopancreatography, an endoscopic investigation of the bile duct, may be performed. Given the high specificity of serological markers, liver biopsy is not necessary for the diagnosis of PBC; however, it is still necessary when PBC-specific antibodies are absent, or when co-existent autoimmune hepatitis or nonalcoholic steatohepatitis is suspected. Liver biopsy can be useful to stage the disease for fibrosis and ductopenia. Finally, it may also be appropriate in the presence of other extrahepatic comorbidities. Liver biopsy On microscopic examination of liver biopsy specimens, PBC is characterized by chronic, nonsuppurative inflammation, which surrounds and destroys interlobular and septal bile ducts. These histopathologic findings in primary biliary cholangitis include: inflammation of the bile ducts, characterized by intraepithelial lymphocytes; periductal epithelioid granulomas; proliferation of bile ductules; and fibrosis (scarring). The Ludwig and Scheuer scoring systems have historically been used to stratify four stages of PBC, with stage 4 indicating the presence of cirrhosis. In the newer system of Nakanuma, the stage of disease is based on fibrosis, bile duct loss, and features of cholestasis, i.e. deposition of orcein-positive granules, whereas the grade of necroinflammatory activity is based on cholangitis and interface hepatitis. The accumulation of orcein-positive granules occurs evenly across the PBC liver, which means that staging using the Nakanuma system is more reliable with regard to sampling variability. Liver biopsy for the diagnosis and staging of PBC lost favour once evidence emerged of a patchy distribution of the duct lesions and fibrosis across the organ. The widespread availability of noninvasive measures of fibrosis means that liver biopsy for staging of PBC is somewhat obsolete. Liver biopsy does, however, remain useful in certain settings. 
The main indications are to confirm the diagnosis of PBC when PBC-specific antibodies are absent, and to confirm a diagnosis of PBC with AIH features (i.e. overlap PBC-AIH). Liver biopsy is also useful to assess the relative contribution of each liver injury when a comorbid liver disease is present, such as nonalcoholic steatohepatitis. In patients with an inadequate response to UDCA, liver biopsy may provide the explanation and can inform risk stratification. For example, it may identify a previously unsuspected variant syndrome, steatohepatitis, or interface hepatitis of moderate or greater severity. It is also useful in cholestatic patients negative for AMA and PBC-specific ANA to indicate an alternative process, e.g. sarcoidosis, small-duct PSC, or adult idiopathic ductopenia. Histopathology stages (by Ludwig and Scheuer systems) Stage 1 – portal stage: normal-sized triads, portal inflammation, subtle bile duct damage; granulomas are often detected in this stage. Stage 2 – periportal stage: enlarged triads, periportal fibrosis and/or inflammation, typically characterized by the finding of a proliferation of small bile ducts. Stage 3 – septal stage: active and/or passive fibrous septa. Stage 4 – biliary cirrhosis: nodules present, garland or jigsaw-puzzle pattern. Treatment Cholestasis Medical therapy of PBC targets disease progression and symptom control. The first-line treatment for PBC is ursodeoxycholic acid (UDCA). UDCA was the only drug available for two decades; more recently, obeticholic acid (OCA), a semi-synthetic hydrophobic bile acid analogue, has been licensed for patients with a failed response to UDCA or intolerance to UDCA. Several other agents have been studied, including immunosuppressants, but robust evidence of benefit is lacking. UDCA improves liver enzyme levels, slows down histological progression, and improves liver transplant-free survival. UDCA also reduces the need for liver transplantation. 
UDCA should be taken at a dose of 13 to 15 mg per kg of body weight per day, usually in two divided doses each day. Liver chemistries usually improve within a few weeks of starting UDCA, and 90% of any benefit is observed after 6–9 months of therapy. Liver chemistries should be re-evaluated after 1 year of treatment. UDCA is usually continued lifelong. Up to 40% of people do not respond to treatment with UDCA. Patients with PBC who have an inadequate response to UDCA, or those few (less than 3%) who are intolerant to UDCA, are candidates for second-line therapies. Obeticholic acid (OCA) is FDA-approved for the treatment of PBC in individuals intolerant of or unresponsive to UDCA. OCA is a farnesoid X receptor agonist, and results in increased bile flow (choleresis). OCA is started at 5 mg daily, and liver chemistries should be rechecked after 3 months of treatment. If the liver chemistries remain elevated, then the dose of OCA may be increased to 10 mg per day. The most common side effect of OCA is pruritus. Fibric acid derivatives, or fibrates, are agonists of the peroxisome proliferator-activated receptor (PPAR), a nuclear receptor involved in several metabolic pathways. While fibrates are approved for the treatment of hypertriglyceridemia, they exert anticholestatic effects and have been studied for the treatment of PBC. Among the fibrates, bezafibrate and fenofibrate, PPAR-alpha selective agonists, have been extensively studied as therapeutic agents because of their potential ability to decrease bile acid synthesis and bile acid-related hepatic inflammation. A randomized, controlled trial in 2018 showed the efficacy of bezafibrate in patients with an inadequate response to UDCA. 
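The weight-based UDCA dosing described above is simple arithmetic. As an illustrative sketch only (not clinical software; the function names are invented for this example), it can be expressed as:

```python
# Illustrative sketch only, not clinical software: expresses the
# weight-based UDCA dosing described in the text (13-15 mg per kg of
# body weight per day, usually split into two divided doses).
# Function names are invented for this example.

def udca_daily_dose_range_mg(weight_kg: float) -> tuple[float, float]:
    """Return the (low, high) total daily UDCA dose in mg."""
    return (13.0 * weight_kg, 15.0 * weight_kg)

def divided_dose_mg(daily_mg: float, doses_per_day: int = 2) -> float:
    """UDCA is usually taken in two divided doses each day."""
    return daily_mg / doses_per_day

low, high = udca_daily_dose_range_mg(70.0)  # hypothetical 70 kg patient
print(low, high)             # 910.0 1050.0
print(divided_dose_mg(low))  # 455.0
```

So a hypothetical 70 kg patient would fall in a 910–1,050 mg/day range, typically split into two doses.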
While fibrates can be considered as off-label treatment for PBC that does not respond to UDCA, they should not be used in decompensated cirrhosis. Several additional medications have been investigated as potential treatments for PBC and found to be ineffective as single agents (monotherapy), including: chlorambucil, colchicine, cyclosporine, corticosteroids, azathioprine, malotilate, methotrexate, mycophenolate mofetil, penicillamine, and thalidomide. Budesonide may be used as an off-label treatment for PBC, although its efficacy is controversial. Itching Pruritus is a common symptom in people with PBC. First-line treatment of pruritus consists of anion-exchange resins, such as cholestyramine, colestipol, or colesevelam. These anion-exchange resins are nonabsorbed, highly positively charged substances that bind bile acids, which are negatively charged anions. Anion-exchange resins relieve itching caused by excess bile acids in circulation by binding bile acids in the gut and facilitating elimination. Bloating or constipation may occur with anion-exchange resins. Cholestyramine may affect absorption of UDCA; if cholestyramine is necessary, it should be taken at least 60 minutes before or 4 hours after UDCA is taken. Treatment options for pruritus that does not improve with anion-exchange resins include: rifampicin, naltrexone, or sertraline. Rifampicin may rarely cause drug-induced liver injury and should be avoided if serum bilirubin is elevated (greater than 2.5 mg/dL). Liver enzymes should be monitored after starting rifampicin. Rifampicin induces enzymes, resulting in numerous potential drug-drug interactions. Opioid antagonists may cause a self-limited opioid withdrawal-like reaction, with abdominal pain, elevated blood pressure, tachycardia, goose bumps, nightmares, and depersonalization. To avoid such reactions, the dose should start low and gradually be increased. 
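The cholestyramine/UDCA spacing rule above (at least 60 minutes before, or 4 hours after) can be checked mechanically. A minimal sketch, assuming times are expressed as minutes since midnight; the function name is invented for this example:

```python
# Illustrative sketch only: encodes the spacing rule described in the
# text, that cholestyramine should be taken at least 60 minutes before
# or 4 hours (240 minutes) after UDCA. Times are minutes since
# midnight; the function name is invented for this example.

def cholestyramine_spacing_ok(cholestyramine_min: int, udca_min: int) -> bool:
    if cholestyramine_min <= udca_min:
        # taken before UDCA: needs at least a 60-minute gap
        return udca_min - cholestyramine_min >= 60
    # taken after UDCA: needs at least a 240-minute gap
    return cholestyramine_min - udca_min >= 240

print(cholestyramine_spacing_ok(8 * 60, 9 * 60))       # True  (1 h before)
print(cholestyramine_spacing_ok(9 * 60 + 30, 9 * 60))  # False (30 min after)
```

This is a scheduling check only; it does not capture the clinical context in which the rule applies.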
Other therapies Fatigue is a nonspecific but often-reported symptom in PBC, and represents an unmet need, since no therapies are licensed. A structured approach to management, quantifying fatigue and its impacts (through the use of disease-specific tools such as the PBC-40 quality-of-life measure), addressing contributing and exacerbating factors, and supporting patients to cope with its impact is effective. Drugs such as coenzyme Q and rituximab have been shown to be ineffective. A graded programme of exercise helps some individuals. People with PBC may have poor lipid-dependent absorption of the fat-soluble vitamins (A, D, E, and K). Appropriate supplementation is recommended when bilirubin is elevated. People with PBC are at elevated risk of developing osteoporosis compared with the general population and others with liver disease. Screening and treatment of this complication is an important part of the management of PBC. As in all liver diseases, consumption of alcohol should be restricted or eliminated. In patients with advanced liver disease, the only curative therapy is liver transplantation (LT). Outcomes are favourable, with five-year patient survival rates better than for most other indications for LT (80–85%). Prognosis The introduction of UDCA has dramatically changed the pattern and the course of the disease. Numerous trials and observational studies have demonstrated its efficacy on liver biochemistry, histological progression, and transplant-free survival. Among UDCA-treated patients, the degree of liver biochemistry improvement, i.e. the UDCA response, identifies patients with different long-term prognoses. In the absence of cirrhosis, people who experience an improvement of liver enzymes to the normal range on treatment with UDCA have excellent survival, which may be similar to that of the general population. Survival is significantly reduced, though, in those with abnormal liver biochemistry on treatment. 
The two most important parameters in evaluating response to UDCA are alkaline phosphatase and total bilirubin. Qualitative and quantitative definitions of UDCA response have been developed, based on changes in bilirubin, transaminases, and ALP after a period of 6 to 24 months of treatment with UDCA at 13–15 mg/kg/day. Patients can be risk-stratified at diagnosis based on the probability of UDCA response. This helps identify patients who would be eligible for second-line therapies without waiting for treatment failure on UDCA, with a potential impact on disease course. Hepatocellular carcinoma (HCC) is infrequent in PBC. Recent large-scale cohort studies highlighted that the lack of UDCA response after 12 months of therapy and male sex are associated with an increased future risk of developing HCC in PBC. After liver transplant, the recurrence of disease may be as high as 18% at five years, and up to 30% at 10 years. No consensus exists on risk factors for recurrence of the disease. Epidemiology Epidemiologic studies report heterogeneous incidence rates of 0.33 to 5.8 per 100,000 inhabitants per year, and prevalence rates of 1.9 to 40.2 per 100,000 inhabitants. Such figures, in particular the prevalence, have shown some increase in recent decades. Improvement of diagnostic tools, increasing disease awareness, and digitised patient registration with facilitation of case-finding, along with improved survival, have likely contributed to the rising prevalence rates. The disease has been described worldwide, though North America and Northern Europe have shown the highest incidence and prevalence rates. Whether a true variation in disease prevalence exists among populations of different geographical areas and of different ethnicity, or whether this is a consequence of a difference in study quality, is unknown. PBC is more common in women, with a female:male ratio of at least 9:1. The peak incidence of PBC is in the fifth decade of life. 
In some areas of the US and UK, the prevalence is estimated to be as high as one in 4,000. This is much higher than in South America or Africa, which may be due to better recognition in the US and UK. First-degree relatives may have as much as a 500-fold increase in prevalence, but whether this risk is greater in same-generation relatives or in the one that follows is debated. PBC is considered a prime example of the female preponderance in autoimmunity, with a female-to-male ratio of up to 9:1, confirmed by large cohort studies, although some recent data, using administrative registries, suggest an increasing male prevalence. Major defects of sex chromosomes, i.e. enhanced monosomy X in female patients and enhanced Y chromosome loss in male patients, have been described and might well explain the greater female predisposition to develop PBC. An association of a greater incidence of PBC at latitudes more distant from the Equator is similar to the pattern seen in multiple sclerosis. Typical disease onset is between 30 and 60 years, though cases have been reported of patients diagnosed at the ages of 15 and 93. Prevalence of PBC in women over the age of 45 years could exceed one in an estimated 800 individuals. History The first report of the disease dates back to 1851, when Addison and Gull described a clinical picture of progressive jaundice in the absence of mechanical obstruction of the large bile ducts. Ahrens et al. in 1950 published the first detailed description of 17 patients with this condition, and coined the term "primary biliary cirrhosis". In 1959, Dame Sheila Sherlock reported a further series of PBC patients and recognised that the disease could be diagnosed in a precirrhotic stage. She proposed the term "chronic intrahepatic cholestasis" as a more appropriate description of this disease, but this nomenclature failed to gain acceptance, and the term "primary biliary cirrhosis" lasted for decades. 
In 2014, to correct the inaccuracy and remove the social stigma of cirrhosis, as well as the misunderstanding, disadvantages, and discrimination that this misnomer caused patients in daily life, international liver associations agreed to rename the disease "primary biliary cholangitis", as it is now known. Society and culture Support groups PBC Foundation The PBC Foundation is a UK-based international charity offering support and information to people with PBC and their families and friends. It campaigns for increased recognition of the disorder, improved diagnosis, and better treatments, and estimates that over 8,000 people are undiagnosed in the UK. The Foundation has supported research into PBC, including the development of the PBC-40 quality-of-life measure published in 2004, and helped establish the PBC Genetics Study. It was founded by Collette Thain in 1996, after she was diagnosed with the condition. Thain was appointed an MBE (Member of the Order of the British Empire) in 2004 for her work with the Foundation. The PBC Foundation helped initiate the name-change campaign in 2014. PBCers Organization The PBCers Organization is a US-based nonprofit patient support group that was founded by Linie Moore in 1996; it advocates for greater awareness of the disease and new treatments. It supported the name-change initiative. References External links Primary Biliary Cirrhosis page from the National Digestive Diseases Information Clearinghouse Alagille syndrome
You are a medical interpreter. Your duty is to translate medical terms into easily digestible information, maintaining accuracy and detail.
I've come across the term 'Thin basement membrane disease' in a medical context, but I'm not sure what it means. Can you clarify?
Thin basement membrane disease is, along with IgA nephropathy, the most common cause of hematuria without other symptoms. The only abnormal finding in this disease is a thinning of the basement membrane of the glomeruli in the kidneys. Its importance lies in the fact that it has a benign prognosis, with patients maintaining normal kidney function throughout their lives. Signs and symptoms Most patients with thin basement membrane disease are incidentally discovered to have microscopic hematuria on urinalysis. The blood pressure, kidney function, and urinary protein excretion are usually normal. Mild proteinuria (less than 1.5 g/day) and hypertension are seen in a small minority of patients. Frank hematuria and loin pain should prompt a search for another cause, such as kidney stones or loin pain-hematuria syndrome. Also, there are no systemic manifestations, so the presence of hearing impairment or visual impairment should prompt a search for a hereditary nephritis such as Alport syndrome. Genetics The molecular basis for thin basement membrane disease has yet to be elucidated fully; however, defects in type IV collagen have been reported in some families. Some individuals with TBMD are thought to be carriers for genes that cause Alport syndrome. Diagnosis Thin basement membrane disease must be differentiated from the other two common causes of glomerular hematuria, IgA nephropathy and Alport syndrome. The history and presentation are helpful in this regard: In Alport syndrome, there is often a family history of kidney failure, which may be associated with hearing impairment. Also, males tend to be more affected, as Alport syndrome is X-linked in most cases. In IgA nephropathy, episodes of frank hematuria are more common, and a family history is less common. A kidney biopsy is the only way to diagnose thin basement membrane disease. It reveals thinning of the glomerular basement membrane from the normal 300 to 400 nanometers (nm) to 150 to 250 nm. 
However, a biopsy is rarely done in cases where the patient has isolated microscopic hematuria, normal kidney function, and no proteinuria. The prognosis is excellent in this setting unless the clinical manifestations progress, as occurs in most males and some females with Alport syndrome and many patients with IgA nephropathy. Treatment Most patients with thin basement membrane disease need only reassurance. Indeed, this disease was previously referred to as "benign familial hematuria" because of its usually benign course. Angiotensin-converting enzyme inhibitors have been suggested to reduce the episodes of hematuria, though controlled studies are lacking. Treating co-existing hypercalciuria and hyperuricosuria will also be helpful in reducing hematuria. The molecular basis for thin basement membrane disease has yet to be elucidated fully; however, defects in the gene encoding the α4 chain of type IV collagen have been reported in some families. Prognosis Overall, most people with thin basement membrane disease have an excellent prognosis. Some reports, however, suggest that a minority might develop hypertension. Thin basement membrane disease may co-exist with other kidney diseases, which may in part be explained by the high prevalence of thin basement membrane disease. References Further reading GeneReviews/NCBI/NIH/UW entry on Collagen IV-Related Nephropathies (Alport Syndrome and Thin Basement Membrane Nephropathy) == External links ==
You function as a medical informant. Please provide in-depth yet accessible descriptions of medical terms, suitable for a broad audience.
I've encountered the term 'Cholinergic crisis' while reading about medical topics. What does it refer to exactly?
A cholinergic crisis is an over-stimulation at a neuromuscular junction due to an excess of acetylcholine (ACh), as a result of the inactivity of the acetylcholinesterase (AChE) enzyme, which normally breaks down acetylcholine. Symptoms and diagnosis As a result of cholinergic crisis, the muscles stop responding to the high synaptic levels of ACh, leading to flaccid paralysis, respiratory failure, and other signs and symptoms reminiscent of organophosphate poisoning. Other symptoms include increased sweating, salivation, and bronchial secretions, along with miosis (constricted pupils). This crisis may be masked by the concomitant use of atropine along with cholinesterase inhibitor medication in order to prevent side effects. Flaccid paralysis resulting from cholinergic crisis can be distinguished from myasthenia gravis by the use of the drug edrophonium (Tensilon), as it only worsens the paralysis caused by cholinergic crisis but strengthens the muscle response in the case of myasthenia gravis. (Edrophonium is a cholinesterase inhibitor, hence it increases the concentration of acetylcholine present.) Some of the symptoms of increased cholinergic stimulation include: Salivation: stimulation of the salivary glands Lacrimation: stimulation of the lacrimal glands (tearing) Urination: relaxation of the internal sphincter muscle of the urethra, and contraction of the detrusor muscles Defecation Gastrointestinal distress: smooth muscle tone changes causing gastrointestinal problems, including cramping Emesis: vomiting Miosis: constriction of the pupils of the eye via stimulation of the pupillary constrictor muscles Muscle spasm: stimulation of skeletal muscle (due to nicotinic acetylcholine receptor stimulation) Cause Cholinergic crisis, sometimes known by the mnemonic "SLUDGE syndrome" (Salivation, Lacrimation, Urination, Defecation, Gastrointestinal distress, and Emesis), can be a consequence of: Contamination with, or excessive exposure to, certain chemicals including: nerve agents (e.g. 
sarin, VX, Novichok agents) and organophosphorus insecticides (e.g. parathion, aldicarb); nicotine poisoning can be thought of as a subset of cholinergic crisis, as it also involves excessive parasympathetic stimulation. Ingestion of certain poisonous fungi (particularly the muscarine-containing members of the genera Amanita, Inocybe and Clitocybe). In medicine, this is seen in patients with myasthenia gravis who take too high a dose of medications such as cholinesterase inhibitors, or following general anaesthesia, when too high a dose of a cholinesterase inhibitor drug is given to reverse surgical muscle paralysis. Treatment Some elements of the cholinergic crisis can be reversed with antimuscarinic drugs like atropine or diphenhydramine, but the most dangerous effect, respiratory depression, cannot. The neuromuscular junction, where the brain communicates with muscles (like the diaphragm, the main breathing muscle), works by acetylcholine activating nicotinic acetylcholine receptors and leading to muscle contraction. Atropine only blocks muscarinic acetylcholine receptors (a different receptor class from the nicotinic receptors at the neuromuscular junction), so atropine will not improve the muscle strength and ability to breathe in someone with cholinergic crisis. Such a patient will require neuromuscular blocking drugs and mechanical ventilation until the crisis resolves on its own. See also Physostigmine == References ==
You are a medical educator. Your role is to provide detailed and clear explanations for medical terms, suitable for both professionals and laypersons.
I'd like to learn more about the medical term 'Genital leiomyoma.' Can you provide some details?
Genital leiomyomas (also known as "dartoic leiomyomas") are leiomyomas that originate in the dartos muscles, or smooth muscles, of the genitalia, areola, and nipple. They are a subtype of cutaneous leiomyomas that affect smooth muscle found in the scrotum, labia, or nipple. They are benign tumors, but may cause pain and discomfort to patients. Genital leiomyomas can be symptomatic or asymptomatic, depending on the type of leiomyoma. In most cases, pain in the affected area or region is most common. For vaginal leiomyoma, vaginal bleeding and pain may occur. Uterine leiomyoma may cause pain in the area, as well as painful bowel movements and/or sexual intercourse. Nipple pain, enlargement, and tenderness can be symptoms of nipple-areolar leiomyomas. Genital leiomyomas can be caused by multiple factors; one can be genetic mutations that affect hormones such as estrogen and progesterone. Moreover, risk factors for the development of genital leiomyomas include age, race, and gender. Ultrasound and imaging procedures are used to diagnose genital leiomyomas, while surgically removing the tumor is the most common treatment of these diseases. Case studies of nipple-areolar, scrotal, and uterine leiomyomas have been used as evidence, since there were not enough secondary sources to provide more. Types of genital leiomyomas Uterine Uterine leiomyomas are benign tumors that affect 70% of European people with uteri and more than 80% of people with uteri of African descent by the time they turn 50 years of age, although only 30% of people with uteri experience symptoms. Of those with uterine leiomyomas, 29% are hospitalized. One-third of patients with these fibroids experience life-threatening anemia, a condition in which a lack of red blood cells leaves the body with too little oxygen. 
These tumors are mainly treated by performing hysterectomies, a procedure in which the uterus is removed, and account for approximately 40–60% of all performed hysterectomies. Symptoms are dependent on the location of the tumor, which may occur in the submucosal (under the mucous membranes, lining the inner part of some organs), intramural (within the walls of the organs), or subserosal areas (under the serosa, lining the outer part of some organs). Nipple-areolar Nipple-areolar leiomyoma is a rare type of genital leiomyoma. It presents as a unilateral or bilateral growth of a benign tumor of the smooth muscle that can be painful, tender, and inflamed. These tumors are typically less than 2 cm in length. Since this is an extremely rare tumor, with only 50 cases reported in the literature, it is often reported to physicians only because of chronic nipple pain. Vaginal Vaginal paraurethral leiomyoma is another type of genital leiomyoma that is also less common compared to other types of leiomyoma. It presents as a benign tumor of the smooth muscle in the genitourinary tract, which includes the urinary and genital organs, and can grow rapidly during pregnancy. On the other hand, the tumor tends to decrease in size upon menopause. This may be because the growth of the tumor is dependent on hormones. There is no definite cause for the development of the disease, but one hypothesis is that it originates from residues of blood vessel tissue and smooth muscle fibers in the embryo. Imaging and histopathological examination are used for diagnosis of the disease. Furthermore, the treatment of the disease is to surgically remove the tumor. Vulvar Vulvar leiomyomas are one of the most prominent types of genital leiomyomas. Lesions of the vulva may be up to 15 cm in length, and they are reported to be acutely painful. Enlargement of these leiomyomas may occur during pregnancy. Scrotal Scrotal leiomyoma is considered to be an extremely rare type of genital leiomyoma. 
Because leiomyomas in the scrotum are usually painless and grow slowly over time, there is a delay in physician referral, with an average of 6–7 years. Physician referral usually occurs when people notice their testicles growing and getting heavier. A review of 11,000 cases of benign and malignant tumors of the scrotum found 11 cases of scrotal leiomyoma. Scrotal leiomyomas can affect males of any age and race, but are more common in Caucasians from the ages of 40–60. The tumor in the scrotum has an average diameter of 6.4 cm. Signs and symptoms People with leiomyoma can present as asymptomatic, or having no symptoms. However, some people may experience severe symptoms that can interfere with daily activities. Common symptoms are recurrent pain and pressure in the affected region. People with uterine leiomyoma can experience pain during urination, bowel movements, and sexual intercourse. Other symptoms are abnormal vaginal bleeding and severe menstrual cramps. Nipple-areolar leiomyomas can affect one or both nipples, presenting with symptoms of nipple tenderness. People with leiomyomas in the scrotum generally notice a growing testicle over a span of multiple years, where it can grow and become heavy to the point of discomfort. Due to the painless and slow, progressive growth of the tumor, the time frame between recognizing the tumor and surgical removal can be anywhere from 2 to 20 years. Causes Uterine Genetics Development and progression of uterine leiomyomas may be driven by changes in gene regulation or mutation of genes found to be associated with uterine fibroids. Abnormalities of these genes may initiate the formation or growth of these tumors. Modification of signaling pathways and genes (e.g. CYP1A1, CYP1B1, and MED12) correlates with the development and growth of tumors in the uterus. Hormones The occurrence of uterine leiomyomas is most common during the reproductive years. 
This suggests that the role of the ovarian hormones, estrogen and progesterone, is important in the development of this disease. Studies have shown that tumor development relies on these hormones and that the tumors themselves affect estrogen metabolism, as they can increase the number of both their estrogen and progesterone receptors. Diet and nutrition Long-term results suggest that diets that are mostly plant-based, composed of fruits and vegetables, and rich in vitamin D have a protective effect against the development of diseases, including uterine leiomyomas. On the other hand, alcohol, coffee, and red meat may have an effect on the progression or growth of these diseases based on observational and epidemiological studies. Risk factors Uterine Race At the age of 35, incidence is reported to be 60% in African-American people with uteri and 40% in Caucasian people with uteri. By the age of 50, the incidence of uterine fibroids was >80% in African-American people with uteri and >70% in Caucasian people with uteri. Recurrence of uterine leiomyomas 4–5 years after removal occurs up to 59% of the time for people with uteri of African origin. Age People with uteri who delay their first pregnancy past the age of 30 are at a higher risk for uterine fibroids. Genetic factors Specific genetic alterations may play a role in the development of uterine leiomyomas. A mutation of a single mesenchymal cell (a stem cell, found in the bone marrow and adipose tissues, that plays an important role in making and repairing bone and fat), with the involvement of progesterone and 17β-estradiol, can lead to these fibroids. Early menarche Some early studies report that early onset of menstruation increases the risk of developing fibroids. However, the biological mechanism of how this occurs is not well understood and further investigation is needed. Nipple-areolar Age The occurrence of benign tumors of the nipple commonly starts at the age of 20 and peaks around the age of 40 to 50. 
Growth of nipple-areolar leiomyomas may increase even after menopause. Diagnosis There are many ways genital leiomyomas can be diagnosed. Those who have genital leiomyomas can be asymptomatic or symptomatic. Symptoms, including but not limited to pelvic pain and abnormal menstrual bleeding, are used to assess fibroids. Imaging is often used to detect the presence of fibroids, particularly uterine fibroids. This includes ultrasonography, a procedure that uses high-frequency sound waves to capture tissue and organ images; sonohysterography, a painless procedure similar to ultrasonography used to capture images inside the uterus; and hysteroscopy, which examines the inside of the uterus and cervix using a flexible tube called a hysteroscope. Treatment Treatment for genital leiomyomas primarily consists of surgical removal. However, genital leiomyomas typically recur and may reappear from 6 weeks to over 15 years post-removal. When managing leiomyomas, radiation treatment should be avoided because it can induce malignant transformation in the smooth muscle of the tumor. For uterine leiomyomas, complete removal of the uterus is required. There is minimal evidence to support the use of myomectomy to preserve fertility. Evidence shows that preoperative use of gonadotropin-releasing hormone agonists, which prevent or lessen the production of hormones like progesterone, estrogen, and testosterone, can reduce surgical complications. Subareolar leiomyomas require surgical removal. Precise surgical margins are needed to prevent recurrence. Leiomyomas in the scrotum require an orchidectomy, or surgical removal of one or both testicles. To manage pain that arises from the fibroids, drugs that affect smooth muscle contraction, such as nitroglycerin, nifedipine, phenoxybenzamine, and doxazosin, can be employed to ease the pain. For nerve pain or tenderness, gabapentin and topical analgesics may be employed. 
Clinical cases Nipple-areolar leiomyomas 41-year-old male A 41-year-old male presented with a yellow nodule in the upper left areola. He reported mild pain and itching, but denied other symptoms. Sebaceous glands, epidermal hyperplasia, and tumor nests were among the numerous findings that led to a diagnosis of areolar leiomyoma with sebaceous hyperplasia. Characteristics of the leiomyoma included positivity for estrogen and progesterone receptors and high expression of epidermal growth factor, insulin-like growth factor 1, and fibroblast growth factor-2. It is suspected that these growth factors led to the growth of the leiomyoma through an autocrine process. The patient declined resection and the region has remained stable since. 67-year-old female A 67-year-old female presented with a growing mass on the left breast areolar region. The patient had been taking methotrexate to treat her rheumatoid arthritis. Magnetic resonance imaging led to the conclusion that the tumor arose from the areola. A biopsy led to a diagnosis of diffuse large non-GC B-cell lymphoma that was suspected to be associated with methotrexate. The tumor reduced in size following the withdrawal of methotrexate. Three months later, another tumor developed in the areolar region of the opposite breast. 35-year-old female A 35-year-old female presented with a painful lump in the right nipple. The patient reported that the lump occurred after a breastfeeding injury 3 years prior to coming into the outpatient center. The tumor had been growing ever since. A biopsy was performed to confirm the leiomyoma in the nipple. Uterine leiomyomas 48-year-old female A 48-year-old female presented with several uterine fibroids that were asymptomatic. The tumor was removed vaginally and revealed to be a vaginal leiomyoma. Vaginal leiomyomas are rare and removal by the vaginal route is the preferred treatment option. 
Scrotal leiomyomas 39-year-old male A 39-year-old male presented with a dull aching pain in the right scrotum. The patient had a history of his right scrotum slowly growing over the past year. There were no other symptoms of urinary tract infection (UTI), cough, fever, weight loss, or night sweats. Further examination and an ultrasound scan found a firm mass in the right scrotum, measuring 6 cm × 4 cm, that was inseparable from the testis. The patient underwent a right radical orchiectomy, a surgical operation to remove one or both testicles, since malignancy of the tumor could not be determined. The patient recovered and was discharged home after the operation. 71-year-old male A 71-year-old male presented with a large and heavy left scrotum that had been growing for 10 years. Further examination confirmed a firm tumor in the left scrotum that was attached to the testis. The tumor measured 11 cm in diameter. The patient underwent orchidectomy, or surgical removal of the testicle. See also List of cutaneous conditions == References ==
You act as a bridge between medical jargon and everyday language. Explain medical terms in a manner that is comprehensive yet approachable for non-experts.
The term 'Disopyramide' keeps coming up in medical discussions. What does it stand for?
Disopyramide (INN, trade names Norpace and Rythmodan) is an antiarrhythmic medication used in the treatment of ventricular tachycardia. It is a sodium channel blocker and therefore classified as a Class 1a anti-arrhythmic agent. Disopyramide has a negative inotropic effect on the ventricular myocardium, significantly decreasing contractility. Disopyramide also has an anticholinergic effect on the heart which accounts for many adverse side effects. Disopyramide is available in both oral and intravenous forms, and has a low degree of toxicity. Mechanism of action Disopyramide's Class 1a activity is similar to that of quinidine in that it targets sodium channels to inhibit conduction. Disopyramide depresses the increase in sodium permeability of the cardiac myocyte during Phase 0 of the cardiac action potential, in turn decreasing the inward sodium current. This results in an increased threshold for excitation and a decreased upstroke velocity. Disopyramide prolongs the PR interval by lengthening both the QRS and P wave duration. This effect is particularly well suited in the treatment of ventricular tachycardia as it slows the action potential propagation through the atria to the ventricles. Disopyramide does not act as a blocking agent for beta or alpha adrenergic receptors, but does have a significant negative inotropic effect on the ventricular myocardium. As a result, the use of disopyramide may reduce contractile force up to 42% at low doses and up to 100% in higher doses compared to quinidine. Levites proposed a possible secondary mode of action for disopyramide, against reentrant arrhythmias after an ischemic insult. Disopyramide decreases the inhomogeneity between infarcted and normal myocardium refractory periods, in addition to lengthening the refractory period. This decreases the chance of re-entry depolarization, because signals are more likely to encounter tissue in a refractory state which cannot be excited. 
This provides a possible treatment for atrial and ventricular fibrillation, as it restores pacemaker control of the tissue to the SA and AV nodes. Obstructive hypertrophic cardiomyopathy Hypertrophic cardiomyopathy (HCM) is the most common inherited cardiac disease, occurring in 1:500 individuals in the general population. It is estimated that there are 600,000 individuals in the United States with hypertrophic cardiomyopathy. The most common variant of HCM presents with left ventricular (LV) intracavitary obstruction due to systolic anterior motion of the mitral valve, and mitral-septal contact, diagnosed readily with echocardiography. Pharmacologic treatment with negative inotropic drugs is first-line therapy. Beta-blockers are used first, and while they improve symptoms of shortness of breath, chest pain and exercise intolerance, they do not reduce resting LV intraventricular pressure gradients and often are inadequate to control symptoms. Many investigators and clinicians believe that disopyramide controlled release is the most potent agent available for reducing resting pressure gradients and improving symptoms. Disopyramide has been actively used for more than 30 years. Disopyramide administration for obstructive HCM has a IB recommendation in the 2020 American Heart Association/American College of Cardiology Foundation guidelines for treatment of obstructive HCM. A IB treatment recommendation indicates that a treatment is recommended and may be useful and beneficial. Negative inotropes improve LV obstruction by decreasing LV ejection acceleration and hydrodynamic forces on the mitral valve. Disopyramide's particular efficacy is due to its potent negative inotropic effects; in head-to-head comparison, it is more effective for gradient reduction than either beta-blocker or verapamil. Disopyramide is most often administered with beta-blockade. 
When used in patients resistant to beta-blockade, disopyramide is effective in 60% of cases, reducing symptoms and gradient to the extent that invasive procedures such as surgical septal myectomy are not required. Disopyramide, despite its efficacy, has one main side effect that has limited its use in the US, though it has seen wider application in Canada, the UK, and Japan. Vagal blockade predictably causes dry mouth, and in men with prostatism, may cause urinary retention. Teichman et al. showed that pyridostigmine used in combination with disopyramide substantially alleviates vagolytic side effects without compromising antiarrhythmic efficacy. This combination has also been shown to be effective and safe in obstructive HCM in a large cohort of patients. Some clinicians prescribe pyridostigmine sustained release (marketed in the US as Mestinon Timespan) to every patient begun on disopyramide. This combination increases acceptance of higher disopyramide dosing, important since there is a dose-response correlation in obstructive HCM, higher doses yielding lower gradients. Another concern about disopyramide has been the hypothetical potential for inducing sudden death from its type 1 anti-arrhythmic effects. However, a multicenter registry and two recent cohort registries have largely reduced this concern, by showing sudden death rates lower than those observed from the disease itself. These concerns about the drug must be viewed from the clinical perspective that disopyramide is generally the last agent that is tried for patients before they are referred for invasive septal reduction with surgical septal myectomy (an open-heart operation) or alcohol septal ablation (a controlled heart attack). Both of these invasive procedures have risk of morbidity and mortality. For selected patients, a trial of oral disopyramide is a reasonable approach before proceeding to invasive septal reduction. Patients who respond to disopyramide are continued on the drug. 
Those who continue to have disabling symptoms or who experience side effects are promptly referred for septal reduction. Using such a stepped strategy, investigators have reported that survival does not differ from that observed in the age-matched normal United States population. Cardiac adverse effects Acute heart failure – Disopyramide should not be given to patients with impaired LV systolic function and low ejection fraction. Heart failure is not seen when disopyramide is used in patients with normal or supernormal LV systolic function. Severe hypotension – Disopyramide should not be given to patients with impaired LV systolic function and low ejection fraction. Hypotension is not seen in patients with normal or supernormal LV systolic function. Extracardiac effects Atropine-like effects (anticholinergic) Dry mouth Constipation Urinary retention – Disopyramide should not be given to patients with symptomatic prostatism. Blurred vision Glaucoma Rash Agranulocytosis Additionally, disopyramide may enhance the hypoglycaemic effect of gliclazide, insulin, and metformin. See also Actisomide References External links Disopyramide on RxList
You serve as an encyclopedia of medical terms. Deliver clear, detailed explanations, making medical language understandable to a diverse audience.
What is the significance of the term 'Neonatal lupus erythematosus' in the medical field?
Neonatal lupus erythematosus is an autoimmune disease in an infant born to a mother with anti-Ro/SSA and with or without anti-La/SSB antibodies. The disease most commonly presents with a rash resembling subacute cutaneous lupus erythematosus and can have systemic abnormalities such as complete heart block or hepatosplenomegaly. Neonatal lupus is usually benign and self-limited. Many of the clinical manifestations are transient, but certain heart problems can be permanent. Diagnosis is based on maternal antibodies and clinical manifestations. Treatment and management are mainly supportive and focused on preventing complete heart block if possible. Pathogenesis Ro/SSA and La/SSB are proteins found inside cells. Anti-Ro/SSA and anti-La/SSB are antibodies that form against these proteins. These antibodies can be seen in autoimmune diseases, the most common being lupus and Sjögren's syndrome. Mothers can have these antibodies circulating in their blood without having any signs or symptoms of an autoimmune disease. Babies born to mothers with these antibodies have a chance of developing neonatal lupus erythematosus. This occurs when maternal anti-Ro/SSA and anti-La/SSB antibodies enter fetal circulation and affect neonatal organs by crossing the placenta using FcRn receptors. The mechanism in which the antibodies affect organs is not yet completely understood. For the heart, it is thought that these antibodies bind to cells in the heart that go through physiologic cell death during embryogenesis. This leads to heart cell injury which causes secondary fibrosis in the conduction system, ultimately causing heart block. These antibodies can also affect calcium channels which are needed to initiate action potentials. 
Disruption to action potential propagation can affect the AV and SA nodes which are part of the conduction system of the heart. Fetal susceptibility and environmental factors could also play a role in pathogenesis since not all infants develop congenital heart block even when exposed to antibodies. Clinical manifestations Neonatal lupus can present with several signs and symptoms. The most common manifestations involve the heart and skin. Problems involving the liver, gallbladder, brain, and blood can be seen but are usually transient. Heart Cardiac manifestations present more commonly in utero, but can also present after birth. The most common complications are varying degrees of heart block and endocardial fibroelastosis. A large percentage of cases of isolated congenital heart block in infants are associated with neonatal lupus. Heart block occurs when there is dysfunction in the conduction system, preventing impulses from traveling from the atria to the ventricles. Heart block can initially present with bradycardia in the prenatal period, usually around the second trimester. Infants with lower grades of heart block can progress to higher grades, but they can also present with rapid onset of complete heart block. Endocardial fibroelastosis is considered a type of cardiomyopathy that occurs in response to heart cell injury and can be seen with or without conduction system dysfunction. Common complications First degree heart block Second degree heart block Third degree heart block (complete heart block) Endocardial fibroelastosis Other complications seen with neonatal lupus Patent ductus arteriosus Patent foramen ovale Pulmonic stenosis Pulmonary valvular dysplasia Fusion of chordae tendineae of the tricuspid valve Ostium secundum type atrial septal defects Skin A rash can be seen upon delivery. It is commonly found on the head and face, but can also be found on other parts of the body. It is most commonly seen around the eyes. 
The rash can be described as raised, red, and ring-shaped. The rash is not always visible at birth and can become more prominent after UV light exposure. Antibodies coming from the mother have a certain life span. Because of this, the rash usually lasts 6–8 months, resolving after the maternal antibodies are no longer in circulation. Telangiectasia has also been seen and can occur with or without the ring-shaped rash. Red, ring-shaped rash of face and head Telangiectasia Liver and gallbladder The severity of liver involvement can range from mildly elevated liver enzymes to liver failure. Elevated transaminases Hyperbilirubinemia Cholestasis Hepatitis Blood The conditions listed below have been reported with no issues of bleeding or sepsis. Anemia Neutropenia Thrombocytopenia Aplastic anemia Brain Although the conditions below have been reported, it is still uncertain whether these manifestations are related to anti-Ro/SSA and anti-La/SSB antibodies. The majority of the neurologic conditions were found incidentally with no neurological signs or symptoms present and did not lead to physical disability or need for surgery. Hydrocephalus Macrocephaly Vasculopathy Hypocalcemic seizures Spastic diplegia Diagnosis An infant is diagnosed with neonatal lupus if maternal antibodies, anti-Ro/SSA, anti-La/SSB, or less commonly anti-ribonucleoprotein, are present and if any of the clinical manifestations are present without any other explanation. Screening Screening includes testing for maternal antibodies and evaluating for heart block in utero. Universal screening is not recommended. Screening is usually performed when there is a higher likelihood of neonatal lupus, such as in individuals who are more likely to have antibodies due to autoimmune diseases or individuals who have had prior pregnancies complicated by neonatal lupus. If a fetus develops heart block, screening for maternal antibodies can be considered. 
Monitoring for heart block can be done using a fetal echocardiogram. Management Infants with neonatal lupus are managed with supportive care. This means treating or monitoring the symptoms that can occur from this disease, for example, avoiding sunlight so that the infant's rash won't worsen. Many of the manifestations are transient, but once complete heart block occurs, it is irreversible. Heart block can be managed in utero if diagnosed during pregnancy. Infants born to mothers with anti-Ro/SSA and anti-La/SSB should have an ECG performed to check for heart abnormalities if none were seen while in the uterus. In utero Fetal heart block treatment varies based on the degree. First degree heart block is usually treated with glucocorticoids, but it can also reverse on its own. At present, treatment guidelines for first-degree heart block are controversial due to lack of evidence. Second degree heart block commonly progresses to complete heart block. Second degree heart block can also reverse on its own. Treatment includes fluorinated glucocorticoids and immunoglobulin therapy. Third degree heart block is irreversible, and many treatments have been attempted without success. Management is mainly expectant. Early delivery should be avoided unless other complications arise. In third degree heart block, if the ventricular heart rate drops below 50-55 beats per minute, maternal beta-agonists can be given. Glucocorticoids and immunoglobulin therapy can be used for endocardial fibroelastosis, but effectiveness is still unclear. See also Congenital heart block Lupus erythematosus References == External links ==
You are an expert in medical terminology. Explain medical terms in a way that is both precise and easy to understand for non-medical audiences.
Could you offer a clear explanation of the term 'Slipping' as used in the medical field?
Slipping is a technique used in boxing that is similar to bobbing. It is considered one of the four basic defensive strategies, along with blocking, holding, and clinching. It is performed by moving the head to either side so that the opponent's punches "slip" by the boxer. Slipping punches allows the fighter to recover quicker and counter punches faster than the opponent can reset into proper fighting stance. In boxing, timing is known to be a key factor in the outcome. Timing your slips correctly is essential in protecting yourself and saving energy. Slipping, if done incorrectly, can be dangerous as it exposes you to a counter-punch and an unbalanced stance, which can lead to an opening for the opponent. Muhammad Ali is considered to be, pound for pound, one of the greatest fighters of all time. But what made him so lethal? Was it his power, speed, or technique of slipping punches? Many fighters and analysts will say it was his slipping capability. How to slip punches There are multiple ways to slip punches in boxing, but the most basic types are slipping the inside jab and the outside jab. When slipping an outside jab, your body weight needs to be centered. As your opponent throws the jab, rotate your body clockwise and lean slightly to your right, which shifts weight to your rear leg. Pivot both your feet in the same direction. Now you're on the outside of your opponent's jab, which gives you the ability to counter-punch over their jab. For the inside jab, as the opponent throws the jab, rotate your body counter-clockwise and lean slightly to your left, putting more weight on your lead leg. It's possible to just lean without rotating, but rotating helps the movement of your guard. Raise your rear hand, ready for the opponent to throw a left hook. 
Common mistakes There are many different mistakes you can make when trying to slip a punch: Slipping too early Slipping too wide Slipping inside the cross Only moving your head Dropping your guard How to master slipping The best method for mastering slipping is practice. Just saying "practice" seems simple, but the practice needs to be with a worthy opponent, preferably someone that is taller than you and has a longer reach. Another method is a slip bag you can hang up and move back and forth. This helps you improve movement, timing, and eye coordination while performing a slip. Repetition and patience are key to mastering slipping. References External links https://www.mightyfighter.com/how-to-slip-punches/ https://lawofthefist.com/complete-guide-to-slipping-punches-in-boxing/
You are a guide to medical language. Please provide in-depth and understandable explanations of medical terms, highlighting their significance in healthcare.
Could you provide a brief overview of 'Traumatic spondylopathy' in a medical context?
Traumatic spondylopathy is a form of dorsopathy. References == External links ==
You are a resource for medical understanding. Offer detailed explanations of medical terms, making complex concepts clear and comprehensible.
I'm trying to understand 'Transplant glomerulopathy' within a medical context. Could you shed some light on it?
Transplant glomerulopathy (TG) is a disease of the glomeruli in transplanted kidneys. It is a type of renal injury often associated with chronic antibody-mediated rejection. However, transplant glomerulopathy is not specific for chronic antibody-mediated rejection; it may be the result of a number of disease processes affecting the glomerular endothelium. Pathology It is characterized by glomerular basement membrane thickening (referred to as tram-tracking of the basement membrane), increased mesangial matrix and segmental and global glomerulosclerosis.The differential diagnosis of tram-tracking includes membranoproliferative glomerulonephritis (especially hepatitis C), and thrombotic microangiopathies. See also Kidney transplant Chronic rejection == References ==
You are a medical educator. Your role is to provide detailed and clear explanations for medical terms, suitable for both professionals and laypersons.
Can you demystify the medical term 'Ocular dysmetria' for me?
Ocular dysmetria is a form of dysmetria that involves the constant under- or over-shooting of the eyes when attempting to focus gaze on something.Ocular dysmetria indicates lesions in the cerebellum, which is the brain region responsible for coordinating movement. It is a symptom of several neurological conditions including multiple sclerosis. It is a condition that can cause symptoms similar to sea sickness. == References ==
You are a medical educator. Your role is to provide detailed and clear explanations for medical terms, suitable for both professionals and laypersons.
Could you offer a clear explanation of the term 'Hoarse voice' as used in the medical field?
A hoarse voice, also known as dysphonia or hoarseness, is when the voice involuntarily sounds breathy, raspy, or strained, or is softer in volume or lower in pitch. A hoarse voice can be associated with a feeling of unease or scratchiness in the throat. Hoarseness is often a symptom of problems in the vocal folds of the larynx. It may be caused by laryngitis, which in turn may be caused by an upper respiratory infection, a cold, or allergies. Cheering at sporting events, speaking loudly in noisy situations, talking for too long without resting one's voice, singing loudly, or speaking with a voice that's too high or too low can also cause temporary hoarseness. A number of other causes for losing one's voice exist, and treatment is generally by resting the voice and treating the underlying cause. If the cause is misuse or overuse of the voice, drinking plenty of water may alleviate the problems. It appears to occur more commonly in females and the elderly. Furthermore, certain occupational groups, such as teachers and singers, are at an increased risk. Long-term hoarseness, or hoarseness that persists over three weeks, especially when not associated with a cold or flu, should be assessed by a medical doctor. It is also recommended to see a doctor if hoarseness is associated with coughing up blood, difficulties swallowing, a lump in the neck, pain when speaking or swallowing, difficulty breathing, or complete loss of voice for more than a few days. For voice to be classified as "dysphonic", abnormalities must be present in one or more vocal parameters: pitch, loudness, quality, or variability. Perceptually, dysphonia can be characterised by hoarse, breathy, harsh, or rough vocal qualities, but some kind of phonation remains. Dysphonia can be categorized into two broad main types: organic and functional, and classification is based on the underlying pathology. 
While the causes of dysphonia can be divided into five basic categories, all of them result in an interruption of the ability of the vocal folds to vibrate normally during exhalation, which affects the voice. The assessment and diagnosis of dysphonia is done by a multidisciplinary team, and involves the use of a variety of subjective and objective measures, which look at both the quality of the voice as well as the physical state of the larynx. Multiple treatments have been developed to address organic and functional causes of dysphonia. Dysphonia can be targeted through direct therapy, indirect therapy, medical treatments, and surgery. Functional dysphonias may be treated through direct and indirect voice therapies, whereas surgeries are recommended for chronic, organic dysphonias. Types Voice disorders can be divided into 2 broad categories: organic and functional. The distinction between these broad classes stems from their cause, whereby organic dysphonia results from some sort of physiological change in one of the subsystems of speech (for voice, usually respiration, laryngeal anatomy, and/or other parts of the vocal tract are affected). Conversely, functional dysphonia refers to hoarseness resulting from vocal use (i.e. overuse/abuse). Furthermore, according to ASHA, organic dysphonia can be subdivided into structural and neurogenic; neurogenic dysphonia is defined as impaired functioning of the vocal structure due to a neurological problem (in the central nervous system or peripheral nervous system); in contrast, structural dysphonia is defined as impaired functioning of the vocal mechanism that is caused by some sort of physical change (e.g. a lesion on the vocal folds). Notably, an additional subcategory of functional dysphonia recognized by professionals is psychogenic dysphonia, which can be defined as a type of voice disorder that has no known cause and can be presumed to be a product of some sort of psychological stressors in one's environment. 
It is important to note that these types are not mutually exclusive and much overlap occurs. For example, Muscle Tension Dysphonia (MTD) has been found to be a result of many different causes including the following: MTD in the presence of an organic pathology (i.e. organic type), MTD stemming from vocal use (i.e. functional type), and MTD as a result of personality and/or psychological factors (i.e. psychogenic type). Causes The most common causes of hoarseness are laryngitis (acute 42.1%; chronic 9.7%) and functional dysphonia (30%). Hoarseness can also be caused by laryngeal tumours (benign 10.7 - 31%; malignant 2.2 - 3.0%). Causes that are overall less common include neurogenic conditions (2.8 - 8.0%), psychogenic conditions (2.0 - 2.2%), and aging (2%). A variety of different causes, which result in abnormal vibrations of the vocal folds, can cause dysphonia. These causes can range from vocal abuse and misuse to systemic diseases. Causes of dysphonia can be divided into five basic categories, although overlap may occur between categories. (Note that this list is not exhaustive): Employment It has been suggested that certain occupational groups may be at increased risk of developing dysphonia due to the excessive or intense vocal demands of their work. Research on this topic has primarily focused on teachers and singers, although some studies have examined other groups of heavy voice users (e.g. actors, cheerleaders, aerobic instructors, etc.). At present, it is known that teachers and singers are likely to report dysphonia. Moreover, physical education teachers, teachers in noisy environments, and those who habitually use a loud speaking voice are at increased risk. The term clergyman's throat or dysphonia clericorum was previously used for painful dysphonia associated with public speaking, particularly among preachers. 
However, the exact prevalence rates for occupational voice users are unclear, as individual studies have varied widely in the methodologies used to obtain data (e.g. employing different operational definitions for "singer"). Mechanism Located in the anterior portion of the neck is the larynx (also known as the voice box), a structure made up of several supporting cartilages and ligaments, which houses the vocal folds. In normal voice production, exhaled air moves out of the lungs and passes upward through the vocal tract. At the level of the larynx, the exhaled air causes the vocal folds to move toward the midline of the tract (a process called adduction). The adducted vocal folds do not close completely but instead remain partially open. The narrow opening between the folds is referred to as the glottis. As air moves through the glottis, it causes a distortion of the air particles which sets the vocal folds into vibratory motion. It is this vibratory motion that produces phonation or voice. In dysphonia, there is an impairment in the ability to produce an appropriate level of phonation. More specifically, it results from an impairment in vocal fold vibration or the nerve supply of the larynx. Diagnosis The assessment and diagnosis of a dysphonic voice is completed by a multidisciplinary team, such as an otolaryngologist (ear, nose, and throat doctor) and a speech-language pathologist, involving the use of both objective and subjective measures to evaluate the quality of the voice as well as the condition of the vocal fold tissue and vibration patterns. Definition Dysphonia is a broad clinical term which refers to abnormal functioning of the voice. More specifically, a voice can be classified as "dysphonic" when there are abnormalities or impairments in one or more of the following parameters of voice: pitch, loudness, quality, and variability. 
For example, abnormal pitch can be characterized by a voice that is too high or low whereas abnormal loudness can be characterized by a voice that is too quiet or loud. Similarly, a voice that has frequent, inappropriate breaks characterizes abnormal quality while a voice that is monotone (i.e., very flat) or inappropriately fluctuates characterizes abnormal variability. While hoarseness is used interchangeably with the term dysphonia, it is important to note that the two are not synonymous. Hoarseness is merely a subjective term to describe the perceptual quality (or sound) of a dysphonic voice. While hoarseness is a common symptom (or complaint) of dysphonia, there are several other signs and symptoms that can be present such as: breathiness, roughness, and dryness. Furthermore, a voice can be classified as dysphonic when it poses problems in the functional or occupational needs of the individual or is inappropriate for their age or sex. Auditory-perceptual measures Auditory-perceptual measures are the tools most commonly used by clinicians to evaluate voice quality due to their quick and non-invasive nature. Additionally, these measures have been proven to be reliable in a clinical setting. Ratings are used to evaluate the quality of a patient's voice for a variety of voice features, including overall severity, roughness, breathiness, strain, loudness and pitch. These evaluations are done during spontaneous speech, sentence or passage reading or sustained vowel productions. The GRBAS (Grade, Roughness, Breathiness, Asthenia, Strain) and the CAPE-V (Consensus Auditory-Perceptual Evaluation of Voice) are two formal voice rating scales commonly used for this purpose. Vocal fold imaging Vocal fold imaging techniques are used by clinicians to examine the vocal folds and allow them to detect vocal pathology and assess the quality of the vocal fold vibrations. Laryngeal stroboscopy is the primary clinical tool used for this purpose. 
Laryngeal stroboscopy uses a synchronized flashing light passed through either a rigid or flexible laryngoscope to provide an image of the vocal fold motion; the image is created by averaging over several vibratory cycles and is thus not provided in real-time. As this technique relies on periodic vocal fold vibration, it cannot be used in patients with moderate to severe dysphonia. High speed digital imaging of the vocal folds (videokymography), another imaging technique, is not subject to the same limitations as laryngeal stroboscopy. A rigid endoscope is used to take images at a rate of 8000 frames per second, and the image is displayed in real time. As well, this technique allows imaging of aperiodic vibrations and can thus be used with patients presenting with all severities of dysphonia. Acoustic measures Acoustic measures can be used to provide objective measures of vocal function. Signal processing algorithms are applied to voice recordings made during sustained phonation or during spontaneous speech. The acoustic parameters which can then be examined include fundamental frequency, signal amplitude, jitter, shimmer, and noise-to-harmonic ratios. However, due to limitations imposed by the algorithms employed, these measures cannot be used with patients who exhibit severe dysphonia. Aerodynamic measures Aerodynamic measures of voice include measures of air volume, air flow and subglottal air pressure. The normal aerodynamic parameters of voice vary considerably among individuals, which leads to a large overlapping range of values between dysphonic and non-dysphonic patients. This limits the use of these measures as a diagnostic tool. Nonetheless, they are useful when used as an adjunct to other voice assessment measures, or as a tool for monitoring therapeutic effects over time. Prevention Given that certain occupations (e.g. teachers) are at greater risk of developing dysphonia, research into prevention strategies has been conducted. 
Research into the effectiveness of prevention strategies for dysphonia has yet to produce definitive results; however, research is still ongoing. Primarily, there are two types of vocal training recognized by professionals to help with prevention: direct and indirect. Direct prevention describes efforts to reduce conditions that may serve to increase vocal strain (such as patient education, relaxation strategies, etc.), while indirect prevention strategies refer to changes in the underlying physiological mechanism for voice production (e.g., adjustments to the manner in which vocal fold adduction occurs, respiratory training, shifting postural habits, etc.). Treatment Although there is no universal classification of voice problems, voice disorders can be separated into certain categories: organic (structural or neurogenic), functional, neurological (psychogenic) or iatrogenic, for example. Depending on the diagnosis and severity of the voice problem, and depending on the category that the voice disorder falls into, there are various treatment methods that can be suggested to the patient. The professional has to keep in mind that there is no one universal treatment; rather, the clinical approach must determine the optimal effective course of action for each particular patient. There are three main types of treatment: medical treatments, voice therapy and surgical treatments. When necessary, a combination of treatment approaches is used for certain voice disorders. A medical treatment involves the use of botulinum toxin (botox) or anti-reflux medicines, for example. Botox is a key treatment for voice disorders such as spasmodic dysphonia. Voice therapy is mainly used with patients who have an underlying cause of voice misuse or abuse. Laryngologists also recommend this type of treatment to patients who have an organic voice disorder - such as vocal fold nodules, cysts or polyps - as well as to treat functional dysphonia. 
Certain surgical treatments can be implemented as well - phonomicrosurgery (removal of vocal fold lesions performed with a microscope), laryngeal framework surgery (the manipulation of the voice box), as well as injection augmentation (injection of a substance into the vocal folds to improve closure). Surgical treatments may be recommended for patients having an organic dysphonia. A combination of both an indirect treatment method (an approach used to change external factors affecting the vocal folds) and a direct treatment method (an approach used where the mechanisms functioning during the use of the vocal folds, such as phonation or respiration, are the main focus) may be used to treat dysphonia. Direct therapies Direct therapies address the physical aspects of vocal production. Techniques work to modify vocal fold contact, manage breathing patterns, and/or change the tension at the level of the larynx. Notable techniques include, but are not limited to, the yawn-sigh method, optimal pitch, laryngeal manipulation, humming, the accent method, and the Lee Silverman Voice Treatment. An example of a direct therapy is circumlaryngeal manual therapy, which has been used to reduce tension in and massage hyoid-laryngeal muscles. This area is often tense from chronic elevation of the larynx. Pressure is applied to these areas as the patient hums or sustains a vowel. Traditional voice therapy is often used to treat muscle tension dysphonia. Indirect therapies Indirect therapies take into account external factors that may influence vocal production. This incorporates maintenance of vocal hygiene practices, as well as the prevention of harmful vocal behaviours. Vocal hygiene includes adequate hydration of the vocal folds, monitoring the amount of voice use and rest, avoidance of vocal abuse (e.g., shouting, clearing of the throat), and taking into consideration lifestyle choices that may affect vocal health (e.g., smoking, sleeping habits). 
Vocal warm-ups and cool-downs may be employed to improve muscle tension and decrease risk of injury before strenuous vocal activities. It should be taken into account that vocal hygiene practices alone are minimally effective in treating dysphonia, and thus should be paired with other therapies. Medication and surgery Medical and surgical treatments have been recommended to treat organic dysphonias. An effective treatment for spasmodic dysphonia (hoarseness resulting from periodic breaks in phonation due to hyperadduction of the vocal folds) is botulinum toxin injection. The toxin acts by blocking acetylcholine release at the thyroarytenoid muscle. Although the use of botulinum toxin injections is considered relatively safe, patients' responses to treatment differ in the initial stages; some have reported experiencing swallowing problems and breathy voice quality as a side-effect to the injections. Breathiness may last for a longer period of time for males than females. Surgeries involve myoectomies of the laryngeal muscles to reduce voice breaks, and laryngoplasties, in which laryngeal cartilage is altered to reduce tension. Epidemiology Dysphonia is a general term for voice impairment that is sometimes used synonymously with the perceptual voice quality of hoarseness. It is the reason for 1% of all visits to primary care providers. The lifetime risk of hoarse voice complaints among primary care patients is 30%. Since hoarseness is a general symptom, it is associated with a number of laryngeal diagnoses. There is an interplay of sex and age differences associated with dysphonia. The point prevalence of dysphonia in adults under the age of 65 is 6.6%. Dysphonia is more common in adult females than males, possibly due to sex-related anatomical differences of the vocal mechanism. In childhood, however, dysphonia is more often found in boys than girls. 
As there are no anatomical differences in the larynges of boys and girls prior to puberty, it has been proposed that the higher rate of voice impairment found in boys arises from louder social activities, personality factors, or more frequent inappropriate vocal use. The most common laryngeal diagnosis among children is vocal fold nodules, a condition known to be associated with vocally damaging behaviours. However, a causal relationship has not yet been definitively proven. The overall prevalence of dysphonia in children ranges from 3.9% to 23.4%, most commonly affecting children between the ages of 8 and 14. Among the elderly, dysphonia is associated with both naturally occurring anatomical and physiological changes as well as higher rates of pathological conditions. The point prevalence of dysphonia among the elderly is 29%. Findings regarding the prevalence of geriatric dysphonia in the general population are very variable, ranging from 4% to 29.1%. This variability is likely due to the different methodologies used in obtaining information from participants. The most common laryngeal diagnoses among the elderly are polyps, laryngopharyngeal reflux, muscle tension dysphonia, vocal fold paresis or paralysis, vocal fold mass, glottic insufficiency, malignant lesions, and neurologic conditions affecting the larynx. References External links ASHA: Voice Disorders ASHA: Clinical Topics - Voice Disorders Overview
You serve as a guide in the medical field. Explain medical terms thoroughly, ensuring the information is both insightful and comprehensible.
I've encountered the term 'Blastic plasmacytoid dendritic cell neoplasm' while reading about medical topics. What does it refer to exactly?
Blastic plasmacytoid dendritic cell neoplasm (BPDCN) is a rare hematologic malignancy. It was initially regarded as a form of lymphocyte-derived cutaneous lymphoma and alternatively named CD4+CD56+ hematodermic tumor, blastic NK cell lymphoma, and agranular CD4+ NK cell leukemia. Later, however, the disease was determined to be a malignancy of plasmacytoid dendritic cells rather than lymphocytes and therefore termed blastic plasmacytoid dendritic cell neoplasm. In 2016, the World Health Organization designated BPDCN to be in its own separate category within the myeloid class of neoplasms. It is estimated that BPDCN constitutes 0.44% of all hematological malignancies. Blastic plasmacytoid dendritic cell neoplasm is an aggressive malignancy with features of cutaneous lymphoma (e.g. malignant plasmacytoid dendritic cell infiltrations into the skin to form single or multiple lesions) and/or leukemia (i.e. malignant plasmacytoid dendritic cells in blood and bone marrow). While commonly presenting with these clinical features, BPDCN, particularly in its more advanced stages, may also involve malignant plasmacytoid dendritic cell infiltrations in, and thereby injury to, the liver, spleen, lymph nodes, central nervous system, or other tissues. The neoplasm occurs in individuals of all ages but predominates in the elderly; in children, it afflicts males and females equally but in adults is far more common (~75% of cases) in males. Blastic plasmacytoid dendritic cell neoplasm typically responds to chemotherapy regimens used to treat hematological malignancies. All too often, however, the disease rapidly recurs and does so in a more drug-resistant form. Furthermore, the disease may occur in association with the myelodysplastic syndrome or transform to acute myeloid leukemia. Consequently, BPDCN has a very low 5-year survival rate. 
Current translational research studies on treating BPDCN have therefore focused on non-chemotherapeutic regimens that target the molecular pathways which may promote the disease. Presentation Blastic plasmacytoid dendritic cell neoplasm occurs in children, including neonates, but is more common in adults, particularly those between the ages of 60–80. BPDCN usually (i.e. 61% to 90% of cases) presents with skin lesions, i.e. nodules, tumors, red or purple papules, bruise-like patches, and/or ulcers that most often occur on the head, face, and upper torso. The lesions are due to diffuse infiltrations of the skin by malignant pDC. In one large study, this presentation was accompanied by swollen lymph nodes, usually in the neck, due to malignant pDC infiltrations (~50% of cases); enlarged liver (~16% of cases) and/or spleen (26% of cases), also due to malignant pDC infiltrations; increased levels of malignant pDC in blood (i.e. >2% of nucleated cells) (~40% of cases), bone marrow (~65% of cases) and cerebrospinal fluid (47% of childhood cases but less often detected in adult cases). More advanced or severe cases may present with extreme organ and/or lymph node enlargements, skin lesions in virtually any site, and clinical evidence of malignant pDC infiltrations in the breasts, eyes, kidneys, lungs, gastrointestinal tract, bone, sinuses, ears, or testes. About 10% of individuals with BPDCN present with a leukemia-like disease, i.e. they exhibit circulating malignant pDC, anemia, thrombocytopenia, and/or leukopenia due to extensive malignant pDC infiltrations in the bone marrow. A leukemic phase of the disease is a common feature of end stage and post-therapy relapsing BPDCN. Pathophysiology There are three types of dendritic cells: plasmacytoid dendritic cells (pDC) and two types of conventional dendritic cells (cDC), myeloid cDC1 and myeloid cDC2. 
pDC circulate in the blood, representing <0.4% of all nucleated blood cells, and are present in various hematological tissues such as lymph nodes and spleen. Their major function is to detect and then initiate immune responses to intracellular pathogens, particularly viruses such as the cold sore-causing Herpes simplex viruses, HIV, and hepatitis viruses but also bacteria such as the tuberculosis-causing Mycobacterium tuberculosis, fungi such as the aspergillosis-causing Aspergillus fumigatus and parasites such as malaria-causing Plasmodium falciparum. Following detection of these intracellular pathogens, pDC initiate immune responses by producing massive amounts of type I and type III interferons as well as by differentiating (i.e. maturing) into conventional dendritic cells that further promote immune responses by, e.g. functioning as antigen-presenting cells. The malignant pDC in BPDCN have the appearance of immature plasmacytoid dendritic cells. They are distinguished from other dendritic, myeloid, lymphoid and NK cell types by exhibiting at least several of the following properties: 1) plasmacytoid morphology; 2) production of large amounts of type I interferons when properly stimulated; 3) ability to differentiate into conventional dendritic cells when properly stimulated; 4) the expression of key marker proteins such as granzyme B, TCF4, interleukin-3 receptor (i.e. CD123), CLEC4C, and Neuropilin; and 5) failure to express certain marker proteins that are commonly expressed by myeloid, lymphoid, and NK cell lineages. Blastic plasmacytoid dendritic cell neoplasm typically arises after the serial acquisition of multiple genetic abnormalities in pDC or their precursor cells. Inactivating mutations (i.e. mutations which cause the gene to make no or a less active product) in the TET2 gene are the most common genetic abnormality in the disease, occurring in 32–67% of all BPDCN cases and often accompanied by mutations in either the NPM1 or SRSF2 gene. 
Numerous other genetic abnormalities are associated with the disease: 1) mutations in NRAS, ASXL1, and TP53; 2) deletions of the CDKN2A-ARF-CDKN2B locus on the short arm of chromosome 9, CDKN1B locus on the short arm of chromosome 12, RB1 locus on the long arm of chromosome 13, or NRC1 locus on the long arm of chromosome 5; 3) fusions of KMT2A on the long arm of chromosome 11 with MLLT1 on the short arm of chromosome 10, SUPT3H on the short arm of chromosome with MYC on the long arm of chromosome 8, or KMT2A on the long arm of chromosome 11 with MLLT1 on the long arm of chromosome 19; and 4) duplication or loss of entire chromosomes, particularly chromosomes 9, 13, or 15. Laboratory studies indicate that malignant pDC have a pathologically overactive NF-κB pathway that promotes their survival and production of various cytokines that stimulate their own proliferation. Presumably, these genetic abnormalities lead to the activation of the NF-κB pathway and/or other cellular activation pathways which promote the survival, proliferation, and/or other malignant phenotypic traits in pDC and thereby cause BPDCN. Diagnosis BPDCN is suggested by a biopsy of skin lesions which reveals the infiltration by medium-sized blast (i.e. immature) cells into the dermis while sparing the epidermis. These cells exhibit irregular nuclei, fine chromatin, and at least one small nucleolus. Such blast cells may also be observed in the circulation, bone marrow, or other tissues and suggest BPDCN. However, the diagnosis of this disease requires determination that these cells are pDC blast cells rather than AML, T-cell lymphoblastic lymphoma (TCLL), or aggressive NK-cell leukemia (NKL) blast cells. Various studies have offered similar but not identical criteria to make this determination. All studies agree that pDC should have a typical plasmacytoid morphology and express a particular profile of marker proteins as detected by immunoassay and/or flow cytometry. 
However, the studies disagree on which marker proteins to profile. One study's profile assayed 1) CD4, CD56, CD123 (i.e. the interleukin-3 receptor), and TCL1, which are expressed on 80–100% of pDC but uncommon on AML, TCLL, or NKL blasts; 2) CD2AP and CLEC4C, which are unique to pDC; and 3) myeloperoxidase, lysozyme, CD34, CD14, CD11c, and CD163, which are unique to AML, TCLL, or NKL blasts. Two other studies recommended assaying somewhat different sets of marker proteins. Treatment There have been no controlled studies to define the optimal treatment for BPDCN. Studies on small numbers of individuals with the disease have found that the standard chemotherapy regimens used for the initial induction treatments of AML, acute lymphoblastic leukemia, and high-grade lymphoma give complete remission rates of 77%, 93%, and 80%, respectively, in childhood BPDCN and 47%, 77%, and 53%, respectively, in adult BPDCN. However, these remissions were short-lived: post-treatment mean times to relapse or death were 12 months for children and 6.8 months for adults. Given these poor remission and survival rates, other treatments have been added to the initial treatment regimens. Studies have shown that the addition of intrathecally administered drugs (administered directly into the spinal canal) as prophylaxis prolongs the period of CNS-free disease and increases overall survival. Hematopoietic stem cell transplantation following initial chemotherapy-induced remission also prolongs these remissions and, it is suggested, offers potential for curing the disease. (A graft-versus-leukemia effect may have contributed to the benefits seen after transplantation.) Studies have not yet determined whether allogeneic (i.e. taken from others) or autologous (i.e. taken from self) stem cells achieve better results, although one retrospective study in Japan found that autologous stem cells gave significantly better overall and progression-free survival rates. 
A phase I clinical research study to test the safety and efficacy of a combination chemotherapy regimen consisting of methotrexate, L-asparaginase, idarubicin, and dexamethasone followed by allogeneic or autologous bone marrow transplantation in 26 participants newly diagnosed with BPDCN is planned but not yet in its recruiting phase. While few studies have reported on the treatment of BPDCN that has recurred following initial therapy, donor lymphocyte infusions coupled with alternative chemotherapy treatments have induced second complete or partial remissions in a few patients. Tagraxofusp-erzs Tagraxofusp-erzs (trade name Elzonris; formerly SL-401 and DT388-IL3) was approved in the United States in December 2018 for the treatment of BPDCN. Tagraxofusp-erzs is a fusion protein consisting of interleukin 3 (i.e. IL-3) fused to diphtheria toxin. The fusion protein readily kills cultured pDC by binding to their IL-3 receptors, thereby gaining entrance to the cells, and then blocking these cells' protein synthesis (due to diphtheria toxin-mediated inhibition of eukaryotic elongation factor 2). Prognosis Due to the high rates of recurrence following initial therapy and the short overall survival times of individuals with BPDCN, the prognosis of the disease is poor. However, further study of treatment regimens that include intrathecal chemotherapy and hematological stem cell transplantation in initial treatment regimens (see previous section) and newer non-chemotherapeutic drug treatments (see next section) may improve this situation. Research UCART123 UCART123 cells are chimeric antigen receptor-bearing T cells, i.e. T lymphocytes engineered to bear a monoclonal antibody-derived receptor that directs them to attack and kill BPDCN cells. The intravenous infusion of these cells in patients with BPDCN is in phase 1 clinical trials but in September 2017, the Food and Drug Administration suspended these because one patient developed a Grade 5 (i.e. 
lethal) cytokine release syndrome (see UCART123#CAR-T cancer treatment). The suspension was lifted in November 2017 after the trial reduced the amounts of cells used and applied additional conditions. A new phase 1 clinical trial is now recruiting 76 new patients to study the safety and efficacy of UCART123 in treating BPDCN. The study began in June 2017 and is scheduled to end in December 2021. Venetoclax BCL-2 is a cellular protein that can act to inhibit cell death due to apoptosis. The BCL-2 gene appears to be one of the most up-regulated (i.e. overactive) genes in BPDCN. Venetoclax inhibits the apoptosis-inhibiting action of BCL-2 and proved active in treating two patients with relapsed or refractory BPDCN. A phase I clinical trial testing the safety and efficacy of the drug in BPDCN is planned but not yet in its recruiting phase. == References ==
You are a medical lexicon. Explain medical terminology with depth and clarity, making sure the information is both accurate and easy to grasp.
Could you please explain the term 'Natamycin' in simple language?
Natamycin, also known as pimaricin, is an antifungal medication used to treat fungal infections around the eye. This includes infections of the eyelids, conjunctiva, and cornea. It is used as eyedrops. Natamycin is also used in the food industry as a preservative. Allergic reactions may occur. It is unclear if medical use during pregnancy or breastfeeding is safe. It is in the macrolide and polyene families of medications. It results in fungal death by altering the cell membrane. Natamycin was discovered in 1955 and approved for medical use in the United States in 1978. It is on the World Health Organization's List of Essential Medicines. It is produced by fermentation of certain types of the bacterium Streptomyces. Uses Medical Natamycin is used to treat fungal infections, including Candida, Aspergillus, Cephalosporium, Fusarium, and Penicillium. It is applied topically as a cream, in eye drops, or (for oral infections) in a lozenge. Natamycin shows negligible absorption into the body when administered in these ways. When taken orally, little or none is absorbed from the gastrointestinal tract, making it inappropriate for systemic infections. Natamycin lozenges are also prescribed to treat yeast infections and oral thrush. Food Natamycin has been used for decades in the food industry as a hurdle to fungal outgrowth in dairy products and other foods. Potential advantages of natamycin might include the replacement of traditional chemical preservatives, a neutral flavor impact, and less dependence on pH for efficacy, as is common with chemical preservatives. It can be applied in a variety of ways: as an aqueous suspension (such as mixed into a brine) sprayed on the product or into which the product is dipped, or in powdered form (along with an anticaking agent such as cellulose) sprinkled on or mixed into the product. 
While not currently approved for use on meats in the United States, some countries allow natamycin to be applied to the surface of dry and fermented sausages to prevent mold growth on the casing. Also, natamycin is approved for various dairy applications in the United States. More specifically, natamycin is commonly used in products such as cream cheeses, cottage cheese, sour cream, yogurt, shredded cheeses, cheese slices, and packaged salad mixes. One reason for food producers to use natamycin is to replace the artificial preservative sorbic acid. As a food additive, it has E number E235. Throughout the European Union, it is approved only as a surface preservative for certain cheese and dried sausage products. It must not be detectable 5 mm below the rind. While natamycin is approved in different applications at different levels around the world, it is approved in over 150 countries worldwide. The European Food Safety Authority (EFSA) panel took over the responsibilities of providing scientific food safety advice to the EU from the Scientific Committee on Food in 2002. In 2009, the EFSA considered the proposed use levels of natamycin to be safe if it is used for the surface treatment of these cheese and sausage types. Safety Natamycin does not have acute toxicity. In animal studies, the lowest LD50 found was 2.5-4.5 g/kg. In rats, the LD50 is ≥2300 mg/kg, and doses of 500 mg/kg/day over 2 years caused no detectable differences in survival rate, growth, or incidence of tumors. The metabolites of natamycin also lack toxicity. The breakdown products of natamycin under various storage conditions may have a lower LD50 than natamycin, but in all cases, the numbers are quite high. In humans, a dose of 500 mg/kg/day repeated over multiple days caused nausea, vomiting, and diarrhea. No evidence shows that natamycin, at either pharmacological levels or levels encountered as a food additive, can harm normal intestinal flora, but definitive research may not be available. 
However, some people are allergic to natamycin. The EFSA has concluded that the use of natamycin as a food additive poses no relevant risk for the development of resistant fungi. Mechanism of action Natamycin inhibits the growth of fungi by specifically binding to ergosterol present in fungal cell membranes. Natamycin inhibits amino acid and glucose transport proteins, leading to a loss of nutrient transport across the plasma membrane. While this binding is reversible, ergosterol binding acts as a universal mechanism of fungal inhibition, allowing natamycin to act on diverse fungal pathogens from Saccharomyces yeast to Aspergillus moulds. Natamycin is unique amongst related antifungals specifically because it does not directly cause membrane permeabilization. Structurally-related antibiotics with similar binding properties are thought to produce hydrophilic channels that allow leakage of potassium and sodium ions from the cell. Natamycin has very low solubility in water; however, natamycin is effective at very low levels. Its minimum inhibitory concentration is less than 10 ppm for most molds. Biochemistry Natamycin is produced as a secondary metabolite by some Streptomyces species: S. natalensis, S. lydicus, S. chattanoogensis and S. gilvosporeus. Structurally, its core is a macrolide containing a polyene segment, with carboxylic acid and mycosamine groups attached. As with other polyene antimycotics, the biosynthesis begins with a series of polyketide synthase modules, followed by additional enzymatic processes for oxidation and attachment of the substituents. Natamycin is produced on an industrial scale by fermentation of various Streptomyces strains, including S. chattanoogensis L10. History Natamycin was first isolated in 1955 from the fermentation broth of a Streptomyces natalensis cell culture. It was originally named pimaricin to honor Pietermaritzburg, where Streptomyces natalensis was acquired. 
Pimaricin was later renamed after the World Health Organization (WHO) mandated that antibiotics produced by Streptomyces end in –mycin. The name natamycin was chosen in reference to the natalensis species name. Society and culture Natamycin appears on Whole Foods "Unacceptable Ingredients for Food" list. References External links Natacyn Side Effects Center "Natamycin". Drug Information Portal. U.S. National Library of Medicine.
You function as a medical explainer. Your task is to convey the meanings of medical terms in a detailed, yet easy-to-understand manner.
I'm trying to expand my medical knowledge. Can you elucidate the term 'Stress fracture'?
A stress fracture is a fatigue-induced bone fracture caused by repeated stress over time. Instead of resulting from a single severe impact, stress fractures are the result of accumulated injury from repeated submaximal loading, such as running or jumping. Because of this mechanism, stress fractures are common overuse injuries in athletes. Stress fractures can be described as small cracks in the bone, or hairline fractures. Stress fractures of the foot are sometimes called "march fractures" because of the injury's prevalence among heavily marching soldiers. Stress fractures most frequently occur in weight-bearing bones of the lower extremities, such as the tibia and fibula (bones of the lower leg) and the metatarsal and navicular bones (bones of the foot). Less common are stress fractures of the femur, pelvis, and sacrum. Treatment usually consists of rest followed by a gradual return to exercise over a period of months. Signs and symptoms Stress fractures are typically discovered after a rapid increase in exercise. Symptoms usually have a gradual onset, with complaints that include isolated pain along the shaft of the bone during activity, decreased muscular strength, and cramping. In cases of fibular stress fractures, pain occurs proximal to the lateral malleolus, increasing with activity and subsiding with rest. If pain is constantly present, it may indicate a more serious bone injury. There is usually an area of localized tenderness on or near the bone and generalized swelling in the area. Pressure applied to the bone may reproduce symptoms and reveal crepitus in well-developed stress fractures. Anterior tibial stress fractures elicit focal tenderness on the anterior tibial crest, while posterior medial stress fractures can be tender at the posterior tibial border. Causes Bones are constantly attempting to remodel and repair themselves, especially during a sport where extraordinary stress is applied to the bone. 
Over time, if enough stress is placed on the bone that it exhausts the capacity of the bone to remodel, a weakened site—a stress fracture—may appear on the bone. The fracture does not appear suddenly. It occurs from repeated traumas, none of which is sufficient to cause a sudden break, but which, when added together, overwhelm the osteoblasts that remodel the bone. Potential causes include overload caused by muscle contraction, amenorrhea, an altered stress distribution in the bone accompanying muscle fatigue, a change in ground reaction force (concrete to grass), or the performance of a rhythmically repetitive stress that leads up to a vibratory summation point. Stress fractures commonly occur in sedentary people who suddenly undertake a burst of exercise (whose bones are not used to the task). They may also occur in athletes completing high-volume, high-impact training, such as running or jumping sports. Stress fractures are also commonly reported in soldiers who march long distances. Muscle fatigue can also play a role in the occurrence of stress fractures. In a runner, each stride normally exerts large forces at various points in the legs. Each shock—a rapid acceleration and energy transfer—must be absorbed. Muscles and bones serve as shock absorbers. However, the muscles, usually those in the lower leg, become fatigued after running a long distance and lose their ability to absorb shock. As the bones now experience larger stresses, this increases the risk of fracture. Previous stress fractures have been identified as a risk factor. 
Along with a history of stress fractures, a narrow tibial shaft, a high degree of hip external rotation, osteopenia, osteoporosis, and pes cavus are common predisposing factors for stress fractures. Common causes in sport that result in stress fractures include: overtraining; going back to competition too soon after an injury or illness; going from one event to another without proper training for the second event; starting initial training too quickly; and changing habits or the environment, such as the training surface or shoes. Diagnosis X-rays usually do not show evidence of new stress fractures, but can be used approximately three weeks after onset of pain, when the bone begins to remodel. A CT scan, MRI, or 3-phase bone scan may be more effective for early diagnosis. MRI appears to be the most accurate diagnostic test. Tuning forks have been advocated as an inexpensive alternative for identifying the presence of stress fractures. The clinician places a vibrating tuning fork along the shaft of the suspected bone. If a stress fracture is present, the vibration causes pain. This test has a low positive likelihood ratio and a high negative likelihood ratio, meaning it should not be used as the only diagnostic method. Prevention Altering the biomechanics of training and training schedules may reduce the prevalence of stress fractures. Orthotic insoles have been found to decrease the rate of stress fractures in military recruits, but it is unclear whether this can be extrapolated to the general population or to athletes. On the other hand, some athletes have argued that cushioning in shoes actually causes more stress by reducing the body's natural shock-absorbing action, thus increasing the frequency of running injuries. During exercise that applies more stress to the bones, it may help to increase daily calcium (2,000 mg) and vitamin D (800 IU) intake, depending on the individual. Treatment For low-risk stress fractures, rest is the best management option. 
The amount of recovery time varies greatly depending upon the location and severity of the fracture and the body's healing response. Complete rest and a stirrup leg brace or walking boot are usually used for a period of four to eight weeks, although periods of rest of twelve weeks or more are not uncommon for more severe stress fractures. After this period, activities may be gradually resumed as long as they do not cause pain. While the bone may feel healed and not hurt during daily activity, the process of bone remodeling may take place for many months after the injury feels healed, and refracturing the bone remains a significant risk. Activities such as running or sports that place additional stress on the bone should only gradually be resumed. Rehabilitation usually includes muscle strength training to help dissipate the forces transmitted to the bones. With severe stress fractures (see "Prognosis"), surgery may be needed for proper healing. The procedure may involve pinning the fracture site, and rehabilitation can take up to six months. Prognosis Anterior tibial stress fractures can have a particularly poor prognosis and can require surgery. On radiographic imaging, these stress fractures are referred to as the "dreaded black line." When compared to other stress fractures, anterior tibial fractures are more likely to progress to complete fracture of the tibia and displacement. Superior femoral neck stress fractures, if left untreated, can progress to become complete fractures with avascular necrosis, and should also be managed surgically. Proximal metadiaphyseal fractures of the fifth metatarsal (middle of the outside edge of the foot) are also notorious for poor bone healing. These stress fractures heal slowly, with a significant risk of refracture. Epidemiology In the United States, the annual incidence of stress fractures in athletes and military recruits ranges from 5% to 30%, depending on the sport and other risk factors. 
Women and highly active individuals are also at higher risk. The incidence probably also increases with age due to age-related reductions in bone mineral density (BMD). Children may also be at risk because their bones have yet to reach full density and strength. The female athlete triad can also put women at risk, as disordered eating and osteoporosis can severely weaken the bones. This type of injury is mostly seen in the lower extremities, due to constant weight-bearing (WB). The bones commonly affected by stress fractures are the tibia, tarsals, metatarsals (MT), fibula, femur, pelvis and spine. Upper-extremity stress fractures do occur, but they are uncommon; when they do occur, it is commonly in the upper torso and is caused by muscle forces. The populations at highest risk for stress fractures are athletes and military recruits participating in repetitive, high-intensity training. Sports and activities that involve excessive, repetitive ground reaction forces have the highest incidence of stress fractures. The site at which a stress fracture occurs depends on the activity or sport the individual participates in. Women are more at risk for stress fractures than men due to factors such as lower aerobic capacity, reduced muscle mass, and lower bone mineral density, among other anatomical and hormone-related elements. Women also have a two- to four-times increased risk of stress fractures when they have amenorrhea compared to women who are eumenorrheic. Reduced bone health increases the risk of stress fractures, and studies have shown an inverse relationship between bone mineral density and stress fracture occurrence. This is most notable, and most commonly seen, at the femoral neck. 
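The relative-risk figures above can be made concrete with a short arithmetic sketch: a relative risk simply scales a baseline incidence. The baseline value below is hypothetical, chosen only for illustration; the 2- to 4-fold range is the one cited in the text for amenorrhea.

```python
def scaled_incidence(baseline: float, relative_risk: float) -> float:
    """Scale a baseline annual incidence by a relative risk factor."""
    return baseline * relative_risk

# Hypothetical baseline: 10% annual incidence among eumenorrheic athletes.
# With the cited 2- to 4-fold increased risk under amenorrhea:
low = scaled_incidence(0.10, 2.0)   # 2-fold risk
high = scaled_incidence(0.10, 4.0)  # 4-fold risk
print(f"{low:.0%} to {high:.0%}")   # prints "20% to 40%"
```

A relative risk multiplies whatever the baseline happens to be, so the same 2- to 4-fold factor implies very different absolute risks in low-incidence versus high-incidence populations.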
Other animals Dinosaurs In 2001, Bruce Rothschild and other paleontologists published a study examining evidence for stress fractures in theropod dinosaurs and analyzed the implications such injuries would have for reconstructing their behavior. Since stress fractures are due to repeated events, they are probably caused by expressions of regular behavior rather than chance trauma. The researchers paid special attention to evidence of injuries to the hand, since dinosaurs' hind feet would be more prone to injuries received while running or migrating. Hand injuries, meanwhile, were more likely to be caused by struggling prey. Stress fractures in dinosaur bones can be identified by looking for bulges on the shafts of bones that face toward the front of the animal. When X-rayed, these bulges often show lines of clear space where the X-rays have a harder time traveling through the bone. Rothschild and the other researchers noted that this "zone of attenuation" seen under the X-ray typically cannot be seen with the naked eye. The researchers described theropod phalanges as being "pathognomonic" for stress fractures, meaning they are "characteristic and unequivocal diagnostically." Rothschild and the other researchers examined and dismissed other kinds of injury and sickness as causes of the lesions they found on the dinosaurs' bones. Lesions left by stress fractures can be distinguished from osteomyelitis without difficulty because of the lack of bone destruction in stress fracture lesions. They can be distinguished from benign bone tumors like osteoid osteoma by the lack of a sclerotic perimeter. No disturbance of the internal bony architecture of the sort caused by malignant bone tumors was encountered among the stress fracture candidates. 
No evidence of metabolic disorders like hyperparathyroidism or hyperthyroidism was found in the specimens, either. After examining the bones of many kinds of dinosaur, the researchers noted that Allosaurus had a significantly greater number of bulges on the shafts of its hand and foot bones than the tyrannosaur Albertosaurus or the ostrich dinosaurs Ornithomimus and Archaeornithomimus. Most of the stress fractures observed along the lengths of Allosaurus toe bones were confined to the ends closest to the hind foot, but were spread across all three major digits in "statistically indistinguishable" numbers. Since the lower end of the third metatarsal would have contacted the ground first while a theropod was running, it would have borne the most stress and should be most predisposed to develop stress fractures. The lack of such a bias in the examined fossils indicates an origin for the stress fractures from a source other than running. The authors conclude that these fractures occurred during interaction with prey. They suggest that such injuries could occur as a result of the theropod trying to hold struggling prey with its feet. The presence of stress fractures provides evidence for very active predation-based feeding rather than scavenging diets. References == External links ==
You serve as a guide in the medical field. Explain medical terms thoroughly, ensuring the information is both insightful and comprehensible.
I'm seeking clarification on the medical term 'Arrhythmia.' Could you explain it?
Arrhythmias, also known as cardiac arrhythmias, heart arrhythmias, or dysrhythmias, are irregularities in the heartbeat, including when it is too fast or too slow. A resting heart rate that is too fast – above 100 beats per minute in adults – is called tachycardia, and a resting heart rate that is too slow – below 60 beats per minute – is called bradycardia. Some types of arrhythmias have no symptoms. Symptoms, when present, may include palpitations or feeling a pause between heartbeats. In more serious cases, there may be lightheadedness, passing out, shortness of breath or chest pain. While most cases of arrhythmia are not serious, some predispose a person to complications such as stroke or heart failure. Others may result in sudden death. Arrhythmias are often categorized into four groups: extra beats, supraventricular tachycardias, ventricular arrhythmias and bradyarrhythmias. Extra beats include premature atrial contractions, premature ventricular contractions and premature junctional contractions. Supraventricular tachycardias include atrial fibrillation, atrial flutter and paroxysmal supraventricular tachycardia. Ventricular arrhythmias include ventricular fibrillation and ventricular tachycardia. Bradyarrhythmias are due to sinus node dysfunction or atrioventricular conduction disturbances. Arrhythmias are due to problems with the electrical conduction system of the heart. A number of tests can help with diagnosis, including an electrocardiogram (ECG) and Holter monitor. Many arrhythmias can be effectively treated. Treatments may include medications, medical procedures such as inserting a pacemaker, and surgery. Medications for a fast heart rate may include beta blockers, or antiarrhythmic agents such as procainamide, which attempt to restore a normal heart rhythm. This latter group may have more significant side effects, especially if taken for a long period of time. Pacemakers are often used for slow heart rates. 
Those with an irregular heartbeat are often treated with blood thinners to reduce the risk of complications. Those who have severe symptoms from an arrhythmia or are medically unstable may receive urgent treatment with a controlled electric shock in the form of cardioversion or defibrillation. Arrhythmia affects millions of people. In Europe and North America, as of 2014, atrial fibrillation affects about 2% to 3% of the population. Atrial fibrillation and atrial flutter resulted in 112,000 deaths in 2013, up from 29,000 in 1990. During the SARS-CoV-2 pandemic, cardiac arrhythmias commonly developed and were associated with high morbidity and mortality among patients hospitalized with COVID-19, due to the infection's ability to cause myocardial injury. Sudden cardiac death is the cause of about half of deaths due to cardiovascular disease and about 15% of all deaths globally. About 80% of sudden cardiac death is the result of ventricular arrhythmias. Arrhythmias may occur at any age but are more common among older people. Arrhythmias may also occur in children; however, the normal range for the heart rate varies with age. 
Classification Arrhythmia may be classified by rate (tachycardia, bradycardia), mechanism (automaticity, re-entry, triggered) or duration (isolated premature beats; couplets; runs, that is, 3 or more beats; non-sustained = less than 30 seconds; or sustained = over 30 seconds). Arrhythmias are also classified by site of origin:
Atrial arrhythmia: sinus bradycardia, sinus arrhythmia, sinus tachycardia, premature atrial contractions (PACs), wandering atrial pacemaker, atrial tachycardia, multifocal atrial tachycardia, supraventricular tachycardia (SVT), atrial flutter, atrial fibrillation (Afib), and AV nodal reentrant tachycardia.
Junctional arrhythmia: AV nodal reentrant tachycardia, junctional rhythm, junctional tachycardia, and premature junctional contraction.
Ventricular arrhythmia: premature ventricular contractions (PVCs), sometimes called ventricular extra beats (VEBs). Premature ventricular beats occurring after every normal beat are termed ventricular bigeminy; PVCs that occur at intervals of 2 normal beats to 1 PVC, or 1 normal beat to 2 PVCs, are termed "PVCs in trigeminy"; and groups of three premature ventricular beats are called triplets and are considered a brief run of non-sustained ventricular tachycardia (NSVT). If the grouping lasts for more than 30 seconds, it is considered sustained ventricular tachycardia (VT). Other ventricular arrhythmias include accelerated idioventricular rhythm, monomorphic ventricular tachycardia, polymorphic ventricular tachycardia, ventricular fibrillation, torsades de pointes, arrhythmogenic right ventricular dysplasia, and re-entry ventricular arrhythmia.
Heart blocks: these are also known as AV blocks, because the vast majority of them arise from pathology at the atrioventricular node. 
They are the most common causes of bradycardia: first-degree heart block, which manifests as PR prolongation; second-degree heart block, either type 1 (also known as Mobitz I or Wenckebach) or type 2 (also known as Mobitz II); and third-degree heart block, also known as complete heart block. First-, second-, and third-degree blocks can also occur at the level of the sinoatrial junction. This is referred to as sinoatrial block, typically manifesting with various degrees and patterns of sinus bradycardia. Sudden arrhythmic death syndrome Sudden arrhythmic death syndrome (SADS) is a term used as part of sudden unexpected death syndrome to describe sudden death because of cardiac arrest occasioned by an arrhythmia in the presence or absence of any structural heart disease on autopsy. The most common cause of sudden death in the US is coronary artery disease, specifically because of poor oxygenation of the heart muscle, that is, myocardial ischemia or a heart attack. Approximately 180,000 to 250,000 people die suddenly of this cause every year in the US. SADS may occur from other causes. There are many inherited conditions and heart diseases that can affect young people and subsequently cause sudden death without advance symptoms. Causes of SADS in young people include viral myocarditis, long QT syndrome, Brugada syndrome, catecholaminergic polymorphic ventricular tachycardia, hypertrophic cardiomyopathy and arrhythmogenic right ventricular dysplasia. Fetal arrhythmia Arrhythmias may also occur in the fetus. The normal heart rate of the fetus is between 110 and 160 beats per minute. Any rhythm beyond these limits is abnormal and classed as a fetal arrhythmia. These are mainly the result of premature atrial contractions, usually give no symptoms, and have little consequence. However, around one percent of these will be the result of significant structural damage to the heart. 
Signs and symptoms The term cardiac arrhythmia covers a very large number of very different conditions. The most common symptom of arrhythmia is an awareness of an abnormal heartbeat, called palpitations. These may be infrequent, frequent, or continuous. Some of these arrhythmias are harmless (though distracting for patients) but some of them predispose to adverse outcomes. Some arrhythmias do not cause symptoms and are not associated with increased mortality. However, some asymptomatic arrhythmias are associated with adverse events. Examples include a higher risk of blood clotting within the heart and a higher risk of insufficient blood being transported to the heart because of a weak heartbeat. Other increased risks are of embolization and stroke, heart failure, and sudden cardiac death. If an arrhythmia results in a heartbeat that is too fast, too slow, or too weak to supply the body's needs, this manifests as lower blood pressure and may cause lightheadedness, dizziness, syncope or brain death due to insufficient supply of blood to the brain. Some types of arrhythmia result in cardiac arrest, or sudden death. Medical assessment of the abnormality using an electrocardiogram is one way to diagnose and assess the risk of any given arrhythmia. Mechanism Cardiac arrhythmias are caused by one of two major mechanisms. The first is enhanced or abnormal impulse formation originating at the pacemaker or in the His-Purkinje network. The second is re-entry conduction disturbances. Diagnosis Cardiac arrhythmia is often first detected by simple but nonspecific means: auscultation of the heartbeat with a stethoscope, or feeling for peripheral pulses. These cannot usually diagnose specific arrhythmias but can give a general indication of the heart rate and whether it is regular or irregular. 
Not all the electrical impulses of the heart produce audible or palpable beats; in many cardiac arrhythmias, the premature or abnormal beats do not produce an effective pumping action and are experienced as "skipped" beats. The simplest specific diagnostic test for assessment of heart rhythm is the electrocardiogram (abbreviated ECG or EKG). A Holter monitor is an EKG recorded over a 24-hour period, used to detect arrhythmias that may happen briefly and unpredictably throughout the day. A more advanced study of the heart's electrical activity can be performed to assess the source of the aberrant heart beats. This can be accomplished in an electrophysiology study, an endovascular procedure that uses a catheter to "listen" to the electrical activity from within the heart; additionally, if the source of the arrhythmia is found, the abnormal cells can often be ablated and the arrhythmia permanently corrected. Transesophageal atrial stimulation (TAS) instead uses an electrode inserted through the esophagus to a point where the distance to the posterior wall of the left atrium is only approximately 5–6 mm (remaining constant in people of different age and weight). Transesophageal atrial stimulation can differentiate between atrial flutter, AV nodal reentrant tachycardia and orthodromic atrioventricular reentrant tachycardia. It can also evaluate the risk in people with Wolff–Parkinson–White syndrome, as well as terminate supraventricular tachycardia caused by re-entry. Differential diagnosis Normal electrical activity Each heartbeat originates as an electrical impulse from a small area of tissue in the right atrium of the heart called the sinus node or sinoatrial node (SA node). The impulse initially causes both atria to contract, then activates the atrioventricular node (AV node), which is normally the only electrical connection between the atria and the ventricles (main pumping chambers). 
The impulse then spreads through both ventricles via the bundle of His and the Purkinje fibers, causing a synchronized contraction of the heart muscle and, thus, the pulse. In adults, the normal resting heart rate ranges from 60 to 90 beats per minute. The resting heart rate in children is much faster. In athletes, however, the resting heart rate can be as slow as 40 beats per minute and be considered normal. The term sinus arrhythmia refers to a normal phenomenon of alternating mild acceleration and slowing of the heart rate that occurs with breathing in and out, respectively. It is usually quite pronounced in children and steadily decreases with age. It can also be present during meditation breathing exercises that involve deep inhaling and breath-holding patterns. Bradycardias A slow rhythm (less than 60 beats/min) is labelled bradycardia. This may be caused by a slowed signal from the sinus node (sinus bradycardia), by a pause in the normal activity of the sinus node (sinus arrest), or by blocking of the electrical impulse on its way from the atria to the ventricles (AV block or heart block). Heart block comes in varying degrees and severity. It may be caused by reversible poisoning of the AV node (with drugs that impair conduction) or by irreversible damage to the node. Bradycardias may also be present in the normally functioning heart of endurance athletes or other well-conditioned persons. Bradycardia may also occur in some types of seizures. Tachycardias In adults and children over 15, a resting heart rate faster than 100 beats per minute is labeled tachycardia. Tachycardia may result in palpitations; however, tachycardia is not necessarily an arrhythmia. Increased heart rate is a normal response to physical exercise or emotional stress. This is mediated by the sympathetic nervous system acting on the sinus node and is called sinus tachycardia. 
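The resting-rate cut-offs described above can be summarized as a small classifier. This is a minimal sketch of the article's adult thresholds (below 60 beats/min is bradycardia, above 100 is tachycardia), not a clinical decision tool; the function name is illustrative only.

```python
def classify_adult_resting_rate(bpm: float) -> str:
    """Label an adult resting heart rate using the cut-offs in the text:
    below 60 beats/min is bradycardia, above 100 is tachycardia."""
    if bpm < 60:
        return "bradycardia"
    if bpm > 100:
        return "tachycardia"
    return "normal range"

print(classify_adult_resting_rate(45))   # bradycardia
print(classify_adult_resting_rate(72))   # normal range
print(classify_adult_resting_rate(130))  # tachycardia
```

As the surrounding text stresses, the label alone is not a diagnosis: a rate of 40 in a well-conditioned endurance athlete, or a fast rate during exercise (sinus tachycardia), can be entirely physiological even though it falls outside these bands.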
Other conditions that increase sympathetic nervous system activity in the heart include ingested or injected substances, such as caffeine or amphetamines, and an overactive thyroid gland (hyperthyroidism) or anemia. Tachycardia that is not sinus tachycardia usually results from the addition of abnormal impulses to the normal cardiac cycle. Abnormal impulses can begin by one of three mechanisms: automaticity, re-entry, or triggered activity. A specialized form of re-entry which is both common and problematic is termed fibrillation. Although the term "tachycardia" has been known for over 160 years, bases for the classification of arrhythmias are still being discussed. Heart defects Congenital heart defects are structural or electrical pathway problems in the heart that are present at birth. Anyone can be affected by this because overall health does not play a role in the problem. Problems with the electrical pathway of the heart can cause very fast or even deadly arrhythmias. Wolff–Parkinson–White syndrome is due to an extra pathway in the heart that is made up of electrical muscle tissue. This tissue allows the electrical impulse, which stimulates the heartbeat, to happen very rapidly. Right ventricular outflow tract tachycardia is the most common type of ventricular tachycardia in otherwise healthy individuals. This defect is due to an electrical node in the right ventricle just before the pulmonary artery. When the node is stimulated, the patient will go into ventricular tachycardia, which does not allow the heart to fill with blood before beating again. Long QT syndrome is another complex problem in the heart and has been labeled as an independent factor in mortality. There are multiple methods of treatment for these, including cardiac ablation, medication treatment, or lifestyle changes to have less stress and exercise. Automaticity Automaticity refers to a cardiac muscle cell firing off an impulse on its own. 
All of the cells in the heart have the ability to initiate an action potential; however, only some of these cells are designed to routinely trigger heartbeats. These cells are found in the conduction system of the heart and include the SA node, AV node, bundle of His, and Purkinje fibers. The sinoatrial node is a single specialized location in the atrium that has a higher automaticity (a faster pacemaker) than the rest of the heart and, therefore, is usually responsible for setting the heart rate and initiating each heartbeat. Any part of the heart that initiates an impulse without waiting for the sinoatrial node is called an ectopic focus and is, by definition, a pathological phenomenon. This may cause a single premature beat now and then, or, if the ectopic focus fires more often than the sinoatrial node, it can produce a sustained abnormal rhythm. Rhythms produced by an ectopic focus in the atria, or by the atrioventricular node, are the least dangerous dysrhythmias; but they can still produce a decrease in the heart's pumping efficiency, because the signal reaches the various parts of the heart muscle with different timing than usual and can be responsible for poorly coordinated contraction. Conditions that increase automaticity include sympathetic nervous system stimulation and hypoxia. The resulting heart rhythm depends on where the first signal begins: if it is the sinoatrial node, the rhythm remains normal but rapid; if it is an ectopic focus, many types of dysrhythmia may ensue. Re-entry Re-entrant arrhythmias occur when an electrical impulse recurrently travels in a tight circle within the heart, rather than moving from one end of the heart to the other and then stopping. Every cardiac cell can transmit impulses of excitation in every direction but will do so only once within a short time. Normally, the action potential impulse will spread through the heart quickly enough that each cell will respond only once. 
However, if there is some essential heterogeneity of refractory period or if conduction is abnormally slow in some areas (for example in heart damage) so the myocardial cells are unable to activate the fast sodium channel, part of the impulse will arrive late and potentially be treated as a new impulse. Depending on the timing, this can produce a sustained abnormal circuit rhythm. As a sort of re-entry, vortices of excitation in the myocardium (autowave vortices) are considered to be the main mechanism of life-threatening cardiac arrhythmias. In particular, the autowave reverberator is common in the thin walls of the atria, sometimes resulting in atrial flutter. Re-entry is also responsible for most paroxysmal supraventricular tachycardia, and dangerous ventricular tachycardia. These types of re-entry circuits are different from WPW syndromes, which utilize abnormal conduction pathways. Although omega-3 fatty acids from fish oil can be protective against arrhythmias, they can facilitate re-entrant arrhythmias. Fibrillation When an entire chamber of the heart is involved in multiple micro-reentry circuits and is, therefore, quivering with chaotic electrical impulses, it is said to be in fibrillation. Fibrillation can affect the atrium (atrial fibrillation) or the ventricle (ventricular fibrillation): ventricular fibrillation is imminently life-threatening. Atrial fibrillation affects the upper chambers of the heart, known as the atria. Atrial fibrillation may be due to serious underlying medical conditions and should be evaluated by a physician. It is not typically a medical emergency. Ventricular fibrillation occurs in the ventricles (lower chambers) of the heart; it is always a medical emergency. If left untreated, ventricular fibrillation (VF, or V-fib) can lead to death within minutes. When a heart goes into V-fib, effective pumping of the blood stops. V-fib is considered a form of cardiac arrest. 
An affected individual will not survive unless cardiopulmonary resuscitation (CPR) and defibrillation are provided immediately. CPR can prolong the survival of the brain in the absence of a normal pulse, but defibrillation is the only intervention that can restore a healthy heart rhythm. Defibrillation is performed by applying an electric shock to the heart, which resets the cells, permitting a normal beat to re-establish itself. Triggered beats Triggered beats occur when problems at the level of the ion channels in individual heart cells result in abnormal propagation of electrical activity and can lead to a sustained abnormal rhythm. They are relatively rare and can result from the action of anti-arrhythmic drugs or from afterdepolarizations. Management The method of cardiac rhythm management depends firstly on whether the affected person is stable or unstable. Treatments may include physical maneuvers, medications, electricity conversion, or electro- or cryo-cautery. In the United States, people admitted to the hospital with cardiac arrhythmia and conduction disorders, with and without complications, were admitted to the intensive care unit more than half the time in 2011. Physical maneuvers Several physical acts can increase parasympathetic nervous supply to the heart, resulting in blocking of electrical conduction through the AV node. This can slow down or stop several arrhythmias that originate above or at the AV node (see main article: supraventricular tachycardias). Parasympathetic nervous supply to the heart is via the vagus nerve, and these maneuvers are collectively known as vagal maneuvers. Antiarrhythmic drugs There are many classes of antiarrhythmic medications, with different mechanisms of action and many different individual drugs within these classes. Although the goal of drug therapy is to prevent arrhythmia, nearly every antiarrhythmic drug has the potential to act as a pro-arrhythmic, and so must be carefully selected and used under medical supervision. 
Other drugs Several groups of drugs slow conduction through the heart without actually preventing an arrhythmia. These drugs can be used to "rate control" a fast rhythm and make it physically tolerable for the patient. Some arrhythmias promote blood clotting within the heart and increase the risk of embolus and stroke. Anticoagulant medications such as warfarin and heparins, and anti-platelet drugs such as aspirin, can reduce the risk of clotting. Electricity Arrhythmias may also be treated electrically, by applying a shock across the heart – either externally to the chest wall, or internally to the heart via implanted electrodes. Cardioversion is achieved either pharmacologically or via the application of a shock synchronized to the underlying heartbeat. It is used for the treatment of supraventricular tachycardias. In elective cardioversion, the recipient is usually sedated or lightly anesthetized for the procedure. Defibrillation differs in that the shock is not synchronized. It is needed for the chaotic rhythm of ventricular fibrillation and is also used for pulseless ventricular tachycardia. Often, more electricity is required for defibrillation than for cardioversion. In most cases of defibrillation, the recipient has lost consciousness, so there is no need for sedation. Defibrillation or cardioversion may be accomplished by an implantable cardioverter-defibrillator (ICD). Electrical treatment of arrhythmias also includes cardiac pacing. Temporary pacing may be necessary for reversible causes of very slow heartbeats, or bradycardia (for example, from drug overdose or myocardial infarction). A permanent pacemaker may be placed in situations where the bradycardia is not expected to recover. Electrical cautery Some cardiologists further sub-specialize into electrophysiology. In specialized catheter laboratories, they use fine probes inserted through the blood vessels to map electrical activity from within the heart.
This allows abnormal areas of conduction to be located very accurately and subsequently destroyed by heat, cold, electrical, or laser probes in a process called catheter ablation. This procedure may be completely curative for some forms of arrhythmia, but for others, the success rate remains disappointing. AV nodal reentrant tachycardia is often curable by ablating one of the pathways in the AV node (usually the slow pathway). Atrial fibrillation can also be treated by performing a pulmonary vein isolation, but the results are less reliable. Research Arrhythmias due to medications have been reported since the 1920s with the use of quinine. In the 1960s and 1970s, problems with antihistamines and antipsychotics were discovered. It was not until the 1980s that the underlying issue, QTc prolongation, was identified. See also Pre-excitation syndrome Holiday heart syndrome References External links Arrhythmia at Curlie
You are a medical educator. Your role is to provide detailed and clear explanations for medical terms, suitable for both professionals and laypersons.
Please help me grasp the concept behind the medical term 'Diabetic bulla.'
A diabetic bulla is a cutaneous condition characterized by a noninflammatory, spontaneous, painless blister, often in acral locations (peripheral body parts, such as the feet, toes, hands, fingers, ears or nose), seen in diabetic patients. See also Diabetic dermadromes Skin lesion List of cutaneous conditions References == External links ==
You are a medical advisor. Your role is to break down medical terminology into clear, detailed explanations for a general audience.
Please help me grasp the concept behind the medical term 'Emotional disturbance.'
Emotional disturbance may refer to: emotional and behavioral disorders, most frequently used in an educational context; mental disorders, especially those involving emotions or emotional disorders; emotional trauma; emotional distress; mania and depression; or anxiety, fear, extreme sadness or anger, and other emotions that may disturb the mental state of an individual.
You function as a medical informant. Please provide in-depth yet accessible descriptions of medical terms, suitable for a broad audience.
What is the significance of the term 'Trifascicular block' in the medical field?
Trifascicular block is a problem with the electrical conduction of the heart, specifically the three fascicles of the bundle branches that carry electrical signals from the atrioventricular node to the ventricles. The three fascicles are one in the right bundle branch and two in the left bundle branch: the left anterior fascicle and the left posterior fascicle. A block at any of these levels can cause an abnormality to show on an electrocardiogram. The most literal meaning of trifascicular block is complete heart block: all three fascicles are blocked. A second, and clinically distinct, definition of trifascicular block is a circumstance in which right bundle branch block (RBBB) and left bundle branch block (LBBB) occur in the same patient, but at distinct points in time. For example, a patient who is found to have a RBBB one day and a LBBB another can be said to have "alternating bundle branch blocks". In this context, because all three fascicles show evidence of block at different points in time, the term trifascicular block is often used. Finally, the third meaning of trifascicular block refers to a specific finding on an electrocardiogram in which bifascicular block is observed in a patient with a prolonged PR interval (first-degree AV block). The treatment of trifascicular block is highly dependent on which of the three clinical entities above is being described. Diagnosis An electrophysiology study of the conduction system can help discern the severity of conduction system disease. In an electrophysiology study, trifascicular block due to AV nodal disease is represented by a prolonged AH interval (denoting prolonged time from impulse generation in the atria and conduction to the bundle of His) with a relatively preserved HV interval (denoting normal conduction from the bundle of His to the ventricles). Trifascicular block due to distal conduction system disease is represented by a normal AH interval and a prolonged HV interval.
In the absence of symptoms, a prolonged AH interval is likely benign, while a prolonged HV interval is almost always pathologic. Treatment An implantable cardiac pacemaker or permanent pacemaker is recommended in the following clinical circumstances. A Class I recommendation is the strongest recommendation, and Level A evidence is the highest level of evidence. Class I: bifascicular block + complete heart block, even in the absence of symptoms (1b); bifascicular block + 2nd-degree AV block type 2, even in the absence of symptoms (1b); alternating bundle branch blocks, even in the absence of symptoms (1c). Class II: bifascicular block + syncope, with alternative causes ruled out (e.g. orthostasis, arrhythmia) (2a). Class III (i.e. pacemaker not recommended): bifascicular block without symptoms; bifascicular block + 1st-degree AV block, without symptoms. References External links http://www.ecglibrary.com/trifas.html http://circ.ahajournals.org/content/97/13/1325.long - newer guidelines in which trifascicular block terminology continues to be used.
You are a resource for medical understanding. Offer detailed explanations of medical terms, making complex concepts clear and comprehensible.
What is the significance of the term 'Fungal keratitis' in the medical field?
Fungal keratitis is a fungal infection of the cornea, which can lead to blindness. It generally presents with a red, painful eye and blurred vision. There is also increased sensitivity to light, and excessive tears or discharge. It is caused by fungal organisms such as Fusarium, Aspergillus or Candida. Fungal keratitis has a worldwide distribution, but is more common in the tropics. Around 1 million people become blind every year due to fungal keratitis. Theodor Leber first described a case of fungal keratitis caused by Aspergillus in 1879. Signs and symptoms The symptoms of fungal keratitis typically emerge over 5–10 days and present with a painful eye, blurred vision, and redness of the eye. There is increased sensitivity to light, and excessive tears or discharge. The symptoms are markedly milder than those of a comparable bacterial ulcer. Symptoms may be noted to persist after contact lenses are removed, or following antibiotic treatment. Signs: the eyelids and adnexa show edema and redness, and the conjunctiva is chemosed. An ulcer may be present; it is a dry-looking corneal ulcer with satellite lesions in the surrounding cornea. A fungal ulcer is usually associated with hypopyon, which is mostly white and fluffy in appearance. Rarely, the infection may extend to the posterior segment to cause endophthalmitis in later stages, leading to destruction of the eye. (Note: fungal endophthalmitis is extremely rare.) Causes Fungal keratitis has been reported to be caused by more than 70 different fungi, of which Fusarium, Aspergillus and Candida are responsible for 95% of cases. A. flavus and A. fumigatus are the most common types of Aspergillus to cause fungal keratitis. F. solani is the most common Fusarium species; other causative fungi include Curvularia and Acremonium. C. albicans, C. guilliermondii and C. parapsilosis are the main types of Candida to cause fungal keratitis. Pathophysiology The precipitating event for fungal keratitis is trauma with vegetable or other organic matter.
A thorn injury or, in agricultural workers, trauma from a wheat plant while cutting the harvest is typical. This implants the fungus directly in the cornea. The fungus grows slowly in the cornea and proliferates to involve the anterior and posterior stromal layers. The fungus can break through Descemet's membrane and pass into the anterior chamber. The patient presents a few days or weeks later with fungal keratitis. Diagnosis The diagnosis is made by an ophthalmologist or optometrist correlating typical history, symptoms and signs. Many times it may be missed and misdiagnosed as a bacterial ulcer. A definitive diagnosis is established only after a positive culture report (lactophenol cotton blue, calcofluor medium), typically taking a week, from the corneal scraping. Recent advances in PCR and immunologic tests can give a much quicker result. Classification Infectious keratitis can be bacterial, fungal, viral, or protozoal. Remarkable differences in the presentation of the patient allow presumptive diagnosis by the eye care professional, helping in the institution of appropriate anti-infective therapy. Prevention Prevention of trauma with vegetable or organic matter, particularly in agricultural workers while harvesting, can reduce the incidence of fungal keratitis. Wearing broad protective glasses with side shields is recommended for people at risk of such injuries. Treatment A presumptive diagnosis of fungal keratitis requires immediate empirical therapy. Natamycin ophthalmic suspension is the drug of choice for filamentous fungal infection. Fluconazole ophthalmic solution is recommended for Candida infection of the cornea. Amphotericin B eye drops may be required for non-responding cases, but can be quite toxic and require an expert pharmacist for preparation. Other medications have also been tried with moderate success.
An updated Cochrane Review published in 2015, looking at the best treatment for fungal keratitis, could not draw any conclusions, as the included studies used different medications. The review did find that "people receiving natamycin were less likely to develop a hole in their cornea and need a transplant." Prognosis The infection typically takes a long time to heal, since the fungus itself is slow-growing. Corneal perforation can occur in patients with untreated or partially treated infectious keratitis and requires surgical intervention in the form of corneal transplantation. Epidemiology This disease is quite common in the tropics and in regions with large agrarian populations. India has a high number of cases of fungal keratitis, but a poor reporting system prevents accurate data collection. Florida in the US regularly reports cases of fungal keratitis, with Aspergillus and Fusarium spp. as the most common causes. According to the Global Action Fund for Fungal Infections, every year there are around 1 million cases of blindness due to fungal keratitis. History A case of fungal keratitis caused by Aspergillus was first described by Theodor Leber in 1879, in a 54-year-old farmer who injured his eye. Society and culture The loss of vision from fungal keratitis can be quite disabling in terms of economic impact and social consequences. Many people present with fungal keratitis in their only eye and thus become blind from the disease. The lack of education and proper eye protection in such cases is evidently responsible for their plight. Notable cases A report from the United States Centers for Disease Control and Prevention made headlines by suggesting an increased incidence of a specific type of fungal keratitis (Fusarium keratitis) in people using Bausch & Lomb products. Bausch & Lomb subsequently suspended, then recalled, shipments of one particular product, the ReNu with MoistureLoc brand of soft contact lens solution.
References == External links ==
You are a facilitator of medical knowledge. Provide thorough and accessible explanations of medical terms, catering to both specialists and non-specialists.
Could you offer a clear explanation of the term 'Chlormethine' as used in the medical field?
Chlormethine (INN, BAN), also known as mechlorethamine (USAN, USP), mustine, HN2, and (in post-Soviet states) embikhin (эмбихин), is a nitrogen mustard sold under the brand name Mustargen among others. It is the prototype of the alkylating agents, a group of anticancer chemotherapeutic drugs. It works by binding to DNA, crosslinking two strands and preventing cell duplication. It binds to the N7 nitrogen on the DNA base guanine. As the chemical is a blister agent, its use is strongly restricted under the Chemical Weapons Convention, where it is classified as a Schedule 1 substance. Mechlorethamine belongs to the group of nitrogen mustard alkylating agents. Uses It has been derivatized into the estrogen analogue estramustine phosphate, used to treat prostate cancer. It can also be used in chemical warfare, where it has the code name HN2. This chemical is a form of nitrogen mustard gas and a powerful vesicant. Historically, some uses of mechlorethamine have included lymphoid malignancies such as Hodgkin's disease, lymphosarcoma, chronic myelocytic leukemia, polycythemia vera, and bronchogenic carcinoma. Mechlorethamine is often administered intravenously, but when compounded into a topical formulation it can also be used to treat skin diseases. There have been studies demonstrating that topical administration of mechlorethamine has efficacy in mycosis fungoides-type cutaneous T cell lymphoma. Another important use of chlormethine is in the synthesis of pethidine (meperidine). Side effects and toxicity Mechlorethamine is a highly toxic medication, especially for women who are pregnant, breastfeeding, or of childbearing age. At high enough levels, exposure can be fatal. The adverse effects of mechlorethamine depend on the formulation. When used in chemical warfare, it can cause immunosuppression and damage to mucous membranes of the eyes, skin, and respiratory tract. Mucous membranes and damp or damaged skin are more affected by exposure to HN-2.
Though symptoms of exposure are generally delayed, the DNA damage it causes occurs very quickly. More serious exposures cause symptoms to develop sooner. Eye symptoms develop first, within 1–2 hours (severe exposure) or 3–12 hours (mild to moderate exposure), followed by airway symptoms (2–6 hours for severe exposure, 12–24 hours for mild to moderate) and skin symptoms (6–48 hours). Hot, humid weather shortens the latent (symptom-free) period. Symptoms of toxic exposure to HN-2 vary based on the route of exposure. Eye exposure causes lacrimation (tear production), burning, irritation, itching, a feeling of grittiness or dryness, blepharospasm (spasms of the eyelid), and miosis (pinpoint pupils). More severe cases cause edema (swelling from fluid accumulation) in the eyelids, photophobia (extreme sensitivity to light), severe pain, corneal ulceration, and blindness. Inhalation of chlormethine damages the upper and lower airways sequentially, with more severe exposures causing faster damage that afflicts lower parts of the respiratory tract. Early symptoms include rhinorrhea (runny nose), epistaxis (nosebleed), toneless voice, sneezing, barking cough, and dyspnea (in smokers and asthmatics). Later symptoms include pain in the nose or sinuses and inflammation of the airway. In severe cases, there may be epithelial necrosis throughout the respiratory tract, causing pseudomembrane formation, which can obstruct the airway. Pneumonia may develop and prove fatal. Skin exposure mainly causes erythema (redness) and vesication (blistering) at first, but absorption through the skin causes systemic toxicity. In cases where more than 25% of the skin is affected, fatal exposure is likely to have occurred. Though ingestion is uncommon, if mechlorethamine is swallowed it causes severe chemical burns to the gastrointestinal tract and concomitant nausea, vomiting, diarrhea, abdominal pain, and hemorrhage. Long-term effects of acute or chronic chlormethine exposure are caused by damage to the immune system.
White blood cell counts drop, increasing the risk of infection, and red blood cell and platelet counts may also drop due to bone marrow damage. Chronic eye infections may result from exposure, but blindness is temporary. Long-term effects on the respiratory system include anosmia (inability to smell), ageusia (inability to taste), inflammation, chronic infections, fibrosis, and cancer. Skin that has been damaged by HN2 can change pigmentation or become scarred, and may eventually develop cancer. History The effect of vesicant (blister) agents in the form of mustard gas (sulfur mustard, bis(2-chloroethyl) sulfide) on bone marrow and white blood cells had been known since the First World War. In 1935, several lines of chemical and biological research yielded results that would be explored after the start of the Second World War. The vesicant action of a family of chemicals related to the sulfur mustards, but with nitrogen substituting for sulfur, was discovered: the "nitrogen mustards" were born. The particular nitrogen mustard chlormethine (mechlorethamine) was first synthesized, and the action of sulfur mustard on tumors in laboratory animals was investigated for the first time. After the U.S. entry into the Second World War, the nitrogen mustards were candidate chemical warfare agents, and research on them was initiated by the Office of Scientific Research and Development (OSRD). The OSRD let contracts to study them to two universities: Yale University and the University of Chicago. Inspired perhaps by the preliminary research of 1935, both groups independently thought to test whether a medically useful differential toxicity between animals and animal tumors existed. The Yale pharmacologists Louis Goodman and Alfred Gilman were the first to conduct a clinical trial, on 27 August 1942, using the agent HN3 (tris(2-chloroethyl)amine) on a patient known as J.D. The next year the Chicago group, led by Leon O.
Jacobson, conducted trials with HN2 (chlormethine), which was the only agent in this group to see eventual clinical use. Wartime secrecy prevented any of this ground-breaking work on chemotherapy from being published, but papers were released once wartime secrecy ended in 1946. Further clinical trials after the war, led by Cornelius P. Rhoads at Memorial Sloan-Kettering, helped establish the medical utility of the drug for Hodgkin's disease and several other lymphomas and leukemias. Chemistry Chlormethine is combustible and becomes explosive under extreme conditions. It can react with metals to form gaseous hydrogen. See also Nitrogen mustards HN1 HN3 References External links British Columbia Cancer Agency profile "Chlormethine". Drug Information Portal. U.S. National Library of Medicine.
You are a medical knowledge base. Your task is to elucidate medical terminology, offering insights into their meanings, origins, and applications.
I'm seeking clarification on the medical term 'Geotrichosis.' Could you explain it?
Geotrichosis is a mycosis caused by Geotrichum candidum. Human colonization and disease Geotrichum candidum is a frequent member of the human microbiome, notably associated with the skin, sputum and feces, where it occurs in 25–30% of specimens. The fungus can cause an infection known as geotrichosis, affecting the oral, bronchial, skin and bronchopulmonary epithelia. The inoculum may arise from endogenous or exogenous sources. In 1847, Bennett described Geotrichum candidum causing a superinfection in a tuberculous cavity. Bennett was able to differentiate infection by Geotrichum candidum from candidiasis, and so diagnosed the first case of geotrichosis. Other early medical case reports, in 1916 and 1928, also described lung infections. Most cases affect the bronchopulmonary tree, although other sites can be involved, such as the oral mucosa and vagina. Skin and gut infections are also known. Reported cases of geotrichosis have been characterized by symptoms of chronic or acute bronchitis. Exogenous geotrichosis may arise from contact with contaminated soil, fruits or dairy products. Pulmonary geotrichosis is the most frequent form of geotrichosis. Its symptoms resemble the secondary symptoms of tuberculosis, including light, thick, grey sputum, which in some cases may be blood-tinged. Patients often have a cough that produces clear or yellow sputum. Another symptom of pulmonary geotrichosis is fine to medium rales. Patients may develop fever, rapid pulse and leukocytosis. The condition appears chronic, with some debilitation and fever. There is no chest pain, and occasional wheezing can occur. Bronchial geotrichosis does not involve the lung; instead, the disease persists within the bronchi. Geotrichum candidum grows in the lumen of the bronchi. The disease is characterized as an endobronchial infection. Bronchial geotrichosis is similar to the allergic reaction of aspergillosis.
Symptoms include prominent chronic cough, gelatinous sputum, lack of fever, and medium to coarse rales. In patients with the bronchial condition, pulse and respiration are rarely elevated. Fine mottling may be present in the middle or basilar pulmonary region. Colonization of the bronchi can be associated with Candida albicans and usually occurs in patients with chronic obstructive lung disease. Oral and vaginal geotrichosis are similar to thrush in appearance and have often been confused with that infection; the distinction can be made using microscopic analysis. The infected area forms a white plaque, and patients usually report a burning sensation in the affected areas. Vaginal geotrichosis is more common in pregnant women and is often associated with vaginitis. Gastrointestinal geotrichosis is an enterocolitis associated with glutamic therapy. The symptoms usually stop once the glutamic therapy is discontinued. Establishing the etiology is difficult, since G. candidum is found within the normal gut flora. The difference between the normal gut flora form and the disease-causing form is the production of toxins. Cutaneous geotrichosis has two variants: superficial and deep infection. In the superficial form, the infection occurs in skin folds, including submammary, inguinal, perianal and interdigital folds. The deep form develops nodules, tumours and ulcers on the legs, face and hands. Geotrichosis can cause a cystic lesion that appears as soft tissue on the skin. Diagnosis Laboratory culture The diagnosis of geotrichosis cannot be established without culture or microscopic examination. Laboratory diagnosis involves collecting fungal samples from areas of infection without contamination. Scrapings of the mouth lesions and ulcers can provide a sample of G. candidum. Samples can also be collected from pus, and mucus can be obtained from the feces.
Sputum can be searched for mucoid-like white flakes for further examination. Culturing the cylindrical, barrel-shaped or elliptical fungi in considerable numbers from oral lesions is an indicator that a patient may have geotrichosis. Under the microscope the fungus appears yeast-like, with septate branching hyphae that can break down into chains or individual arthrospores. Arthrospores appear rectangular with flat or rounded ends. Under the microscope the arthroconidia range in size from 6–12 μm x 3–6 μm. Arthroconidia and coarse true hyphae can be observed. Another identification method for G. candidum is selective isolation: a method based on the fungus's tolerance to novobiocin and carbon dioxide can determine whether G. candidum is the cause of illness. Diagnostic imaging X-rays can be used to examine the lung tissue; however, they cannot be used to positively diagnose geotrichosis. X-rays may show cavitation located in the walls of the lung tissue, and the appearance may resemble the early signs of tuberculosis. X-ray examination of pulmonary geotrichosis presents smooth, dense, patchy infiltrations and some cavities. Bronchial geotrichosis shows peribronchial thickening, and fine mottling may be present in the middle or basilar pulmonary fields. Bronchial geotrichosis usually presents as non-specific diffuse peribronchial infiltration. Treatment Geotrichosis generally has a good prognosis, and patients generally recover successfully. However, there is no standard treatment for geotrichosis. Several types of antimicrobial or antifungal compounds can be used for geotrichosis treatment. Another approach involves symptomatic care, bed rest, iodine therapy, aerosol nystatin and amphotericin B. Azole drugs, including isoconazole and clotrimazole, are used for geotrichosis treatment.
Associated treatment for pulmonary geotrichosis includes the use of potassium iodide, sulfonamides or colistin. The associated asthma can be treated with desensitization and prednisolone. G. candidum is susceptible to amphotericin B, clotrimazole and 5-fluorocytosine. Antimycotic resistance can appear with repeated treatment. References == External links ==
You are a medical advisor. Your role is to break down medical terminology into clear, detailed explanations for a general audience.
I'm not familiar with the medical term 'Nateglinide.' Could you provide some insights?
Nateglinide (INN, trade name Starlix) is a drug for the treatment of type 2 diabetes. Nateglinide was developed by Ajinomoto, a Japanese company, and is sold by the Swiss pharmaceutical company Novartis. Nateglinide belongs to the meglitinide class of blood glucose-lowering drugs. Pharmacology Nateglinide lowers blood glucose by stimulating the release of insulin from the pancreas. It achieves this by closing ATP-dependent potassium channels in the membrane of the β cells. This depolarizes the β cells and causes voltage-gated calcium channels to open. The resulting calcium influx induces fusion of insulin-containing vesicles with the cell membrane, and insulin secretion occurs. Contraindications Nateglinide is contraindicated in patients who: have known hypersensitivity to the compound or any ingredient in the formulation; have type 1 (insulin-dependent) diabetes mellitus; or are in diabetic ketoacidosis. Comparisons with other drugs for type 2 diabetes A study funded by Novo Nordisk, the U.S. distributor for repaglinide, compared their product with nateglinide in "A randomized, parallel-group, open-label, multicenter 16-week clinical trial". They concluded that the two were similar, but "repaglinide monotherapy was significantly more effective than nateglinide monotherapy in reducing HbA1c and FPG values after 16 weeks of therapy." Dosage Nateglinide is supplied in 60 mg and 120 mg tablets. See also Repaglinide References External links Starlix - website of the manufacturer. How Nateglinide Works - website of the manufacturer.
You are a resource for medical understanding. Offer detailed explanations of medical terms, making complex concepts clear and comprehensible.
The term 'Pegasys' keeps coming up in medical discussions. What does it stand for?
Pegasys may refer to: A brand name of the medication peginterferon alfa-2a Pegasys, Inc., a Japanese software company that develops the TMPGEnc family of video encoding/editing programs Pegasus (disambiguation)
You are a medical knowledge base. Your task is to elucidate medical terminology, offering insights into their meanings, origins, and applications.
I'm looking for a concise explanation of the medical term 'Chikungunya.'
Chikungunya is an infection caused by the Chikungunya virus (CHIKV). Symptoms include fever and joint pains. These typically occur two to twelve days after exposure. Other symptoms may include headache, muscle pain, joint swelling, and a rash. Symptoms usually improve within a week; however, occasionally the joint pain may last for months or years. The risk of death is around 1 in 1,000. The very young, the old, and those with other health problems are at risk of more severe disease. The virus is spread between people by two types of mosquitoes: Aedes albopictus and Aedes aegypti. They mainly bite during the day. The virus may circulate within a number of animals, including birds and rodents. Diagnosis is by either testing the blood for the virus's RNA or for antibodies to the virus. The symptoms can be mistaken for those of dengue fever and Zika fever. It is believed most people become immune after a single infection. The best means of prevention is overall mosquito control and the avoidance of bites in areas where the disease is common. This may be partly achieved by decreasing mosquito access to water and with the use of insect repellent and mosquito nets. There is no vaccine and no specific treatment as of 2016. Recommendations include rest, fluids, and medications to help with fever and joint pain. While the disease typically occurs in Africa and Asia, outbreaks have been reported in Europe and the Americas since the 2000s. In 2014 more than a million suspected cases occurred. In 2014 it was occurring in Florida in the continental United States, but as of 2016 there were no further locally acquired cases. The disease was first identified in 1952 in Tanzania. The term is from the Kimakonde language and means "to become contorted". Signs and symptoms Around 85% of people infected with Chikungunya virus experience symptoms, typically beginning with a sudden high fever above 39 °C (102 °F). The fever is soon followed by severe muscle and joint pain.
Pain usually affects multiple joints in the arms and legs, and is symmetric – i.e. if one elbow is affected, the other is as well. People with Chikungunya also frequently experience headache, back pain, nausea, and fatigue. Around half of those affected develop a rash, with reddening and sometimes small bumps on the palms, foot soles, torso, and face. For some, the rash remains constrained to a small part of the body; for others, the rash can be extensive, covering more than 90% of the skin. Some people experience gastrointestinal issues, with abdominal pain and vomiting. Others experience eye problems, namely sensitivity to light, conjunctivitis, and pain behind the eye. This first set of symptoms – called the "acute phase" of Chikungunya – lasts around a week, after which most symptoms resolve on their own.Many people continue to have symptoms after the "acute phase" resolves, termed the "post-acute phase" for symptoms lasting three weeks to three months, and the "chronic stage" for symptoms lasting longer than three months. In both cases, the lasting symptoms tend to be joint pains: arthritis, tenosynovitis, and/or bursitis. If the affected person had pre-existing joint issues, these tend to worsen. Overuse of a joint can result in painful swelling, stiffness, nerve damage, and neuropathic pain. Typically the joint pain improves with time; however, the chronic stage can last anywhere from a few months to several years.Joint pain is reported in 87–98% of cases, and nearly always occurs in more than one joint, though joint swelling is uncommon. Typically the affected joints are located in both arms and legs. Joints are more likely to be affected if they have previously been damaged by disorders such as arthritis. Pain most commonly occurs in peripheral joints, such as the wrists, ankles, and joints of the hands and feet as well as some of the larger joints, typically the shoulders, elbows and knees. Pain may also occur in the muscles or ligaments. 
In more than half of cases, normal activity is limited by significant fatigue and pain. Infrequently, inflammation of the eyes may occur in the form of iridocyclitis, or uveitis, and retinal lesions may occur. Temporary damage to the liver may occur. People with Chikungunya occasionally develop neurologic disorders, most frequently swelling or degeneration of the brain, inflammation or degeneration of the myelin sheaths around neurons, Guillain–Barré syndrome, acute disseminated encephalomyelitis, hypotonia (in newborns), and issues with visual processing. In particularly rare cases, people may develop behavioral changes, seizures, irritation of the cerebellum or meninges, oculomotor nerve palsy, or paralysis of the eye muscles. Newborns are susceptible to particularly severe effects of Chikungunya infection. Signs of infection typically begin with fever, rash, and swelling at the extremities. Around half of newborns have a mild case of the disease that resolves on its own; the other half have severe disease with inflammation of the brain and seizures. In severe cases, affected newborns may also have issues with bleeding and blood flow, and problems with heart function. In addition to newborns, the elderly and those with diabetes, heart disease, liver and kidney diseases, and human immunodeficiency virus infection tend to have more severe cases of Chikungunya. Around 1 to 5 in 1,000 people with symptomatic Chikungunya die of the disease. Cause Virology Chikungunya virus (CHIKV) is a member of the genus Alphavirus, and family Togaviridae. It was first isolated in 1953 in Tanzania and is an RNA virus with a positive-sense single-stranded genome of about 11.6 kb. It is a member of the Semliki Forest virus complex and is closely related to Ross River virus, O'nyong'nyong virus, and Semliki Forest virus. Because it is transmitted by arthropods, namely mosquitoes, it can also be referred to as an arbovirus (arthropod-borne virus).
In the United States, it is classified as a category B priority pathogen, and work requires biosafety level III precautions. Transmission Chikungunya is generally transmitted from mosquitoes to humans. Less common modes of transmission include vertical transmission, which is transmission from mother to child during pregnancy or at birth. Transmission via infected blood products and through organ donation is also theoretically possible during times of outbreak, though no cases have yet been documented. The incubation period ranges from one to twelve days, and is most typically three to seven days. Chikungunya is related to mosquitoes, their environments, and human behavior. The adaptation of mosquitoes to the changing climate of North Africa around 5,000 years ago made them seek out environments where humans stored water. Human habitation and the mosquitoes' environments were then very closely connected. During periods of epidemics humans are the reservoir of the virus. Because high amounts of virus are present in the blood in the beginning of acute infection, the virus can be spread from a viremic human to a mosquito, and back to a human. During other times, monkeys, birds and other vertebrates have served as reservoirs. Three genotypes of this virus have been described, each with a distinct genetic and antigenic character: West African, East/Central/South African, and Asian genotypes. The Asian lineage originated in 1952 and has subsequently split into two lineages – India (Indian Ocean Lineage) and South East Asian clades. This virus was first reported in the Americas in 2014. Phylogenetic investigations have shown that there are two strains in Brazil – the Asian and East/Central/South African types – and that the Asian strain arrived in the Caribbean (most likely from Oceania) in about March 2013.
The rate of molecular evolution was estimated to have a mean rate of 5 × 10−4 substitutions per site per year (95% higher probability density 2.9–7.9 × 10−4). Chikungunya is spread through bites from Aedes mosquitoes, and the species A. aegypti was identified as the most common vector, though the virus has recently been associated with many other species, including A. albopictus. Research by the Pasteur Institute in Paris has suggested Chikungunya virus strains in the 2005–2006 Réunion Island outbreak incurred a mutation that facilitated transmission by the Asian tiger mosquito (A. albopictus). Other species potentially able to transmit Chikungunya virus include Ae. furcifer-taylori, Ae. africanus, and Ae. luteocephalus. Mechanism Chikungunya virus is passed to humans when a bite from an infected mosquito breaks the skin and introduces the virus into the body. The pathogenesis of chikungunya infection in humans is still poorly understood, despite recent outbreaks. It appears that in vitro, Chikungunya virus is able to replicate in human epithelial and endothelial cells, primary fibroblasts, and monocyte-derived macrophages. Viral replication is highly cytopathic, but susceptible to type-I and -II interferon. In vivo, chikungunya virus appears to replicate in fibroblasts, skeletal muscle progenitor cells, and myofibers. The type-1 interferon response seems to play an important role in the host's response to chikungunya infection. Upon infection with chikungunya, the host's fibroblasts produce type-1 alpha and beta interferon (IFN-α and IFN-β). In mouse studies, deficiencies in IFN-1 in mice exposed to the virus cause increased morbidity and mortality. The chikungunya-specific upstream components of the type-1 interferon pathway involved in the host's response to chikungunya infection are still unknown. Nonetheless, mouse studies suggest that IPS-1 is an important factor, and that IRF3 and IRF7 are important in an age-dependent manner.
Mouse studies also suggest that chikungunya evades host defenses and counters the type-I interferon response by producing NS2, a nonstructural protein that degrades RBP1 and turns off the host cell's ability to transcribe DNA. NS2 interferes with the JAK-STAT signaling pathway and prevents STAT from becoming phosphorylated. In the acute phase of chikungunya, the virus is typically present in the areas where symptoms present, specifically skeletal muscles and joints. In the chronic phase, it is suggested that viral persistence (the inability of the body to entirely rid itself of the virus), lack of clearance of the antigen, or both, contribute to joint pain. The inflammation response during both the acute and chronic phase of the disease results in part from interactions between the virus and monocytes and macrophages. Chikungunya virus disease in humans is associated with elevated serum levels of specific cytokines and chemokines. High levels of specific cytokines have been linked to more severe acute disease: interleukin-6 (IL-6), IL-1β, RANTES, monocyte chemoattractant protein 1 (MCP-1), monokine induced by gamma interferon (MIG), and interferon gamma-induced protein 10 (IP-10). Cytokines may also contribute to chronic Chikungunya virus disease, as persistent joint pain has been associated with elevated levels of IL-6 and granulocyte-macrophage colony-stimulating factor (GM-CSF). In those with chronic symptoms, a mild elevation of C-reactive protein (CRP) has been observed, suggesting ongoing chronic inflammation. However, there is little evidence linking chronic Chikungunya virus disease and the development of autoimmunity. Viral replication The virus consists of four nonstructural proteins and three structural proteins. The structural proteins are the capsid and two envelope glycoproteins: E1 and E2, which form heterodimeric spikes on the virion surface. E2 binds to cellular receptors in order to enter the host cell through receptor-mediated endocytosis.
E1 contains a fusion peptide which, when exposed to the acidity of the endosome in eukaryotic cells, dissociates from E2 and initiates membrane fusion that allows the release of nucleocapsids into the host cytoplasm, promoting infection. The mature virion contains 240 heterodimeric spikes of E2/E1, which after release, bud on the surface of the infected cell, where they are released by exocytosis to infect other cells. Diagnosis Chikungunya is diagnosed on the basis of clinical, epidemiological, and laboratory criteria. Clinically, acute onset of high fever and severe joint pain would lead to suspicion of chikungunya. Epidemiological criteria consist of whether the individual has traveled to or spent time in an area in which chikungunya is present within the last twelve days (i.e., the potential incubation period). Laboratory criteria include a decreased lymphocyte count consistent with viremia. However, a definitive laboratory diagnosis can be accomplished through viral isolation, RT-PCR, or serological diagnosis. The differential diagnosis may include other mosquito-borne diseases, such as dengue or malaria, or other infections such as influenza. Chronic recurrent polyarthralgia occurs in at least 20% of chikungunya patients one year after infection, whereas such symptoms are uncommon in dengue. Virus isolation provides the most definitive diagnosis, but takes one to two weeks for completion and must be carried out in biosafety level III laboratories. The technique involves exposing specific cell lines to samples from whole blood and identifying Chikungunya virus-specific responses. RT-PCR using nested primer pairs is used to amplify several chikungunya-specific genes from whole blood, generating thousands to millions of copies of the genes in order to identify them. RT-PCR can also be used to quantify the viral load in the blood. Using RT-PCR, diagnostic results can be available in one to two days.
Serological diagnosis requires a larger amount of blood than the other methods, and uses an ELISA assay to measure chikungunya-specific IgM levels in the blood serum. One advantage offered by serological diagnosis is that serum IgM is detectable from 5 days to months after the onset of symptoms, but drawbacks are that results may require two to three days, and false positives can occur with infection due to other related viruses, such as o'nyong'nyong virus and Semliki Forest virus. Presently, there is no specific way to test for chronic signs and symptoms associated with Chikungunya fever, although nonspecific laboratory findings such as C-reactive protein and elevated cytokines can correlate with disease activity. Prevention Because no approved vaccine exists, the most effective means of prevention are protection against contact with the disease-carrying mosquitoes and controlling mosquito populations by limiting their habitat. Mosquito control focuses on eliminating the standing water where mosquitoes lay eggs and develop as larvae; if elimination of the standing water is not possible, insecticides or biological control agents can be added. Methods of protection against contact with mosquitoes include using insect repellents with substances such as DEET, icaridin, PMD (p-menthane-3,8-diol, a substance derived from the lemon eucalyptus tree), or ethyl butylacetylaminopropionate (IR3535). However, increasing insecticide resistance presents a challenge to chemical control methods. Wearing bite-proof long sleeves and trousers also offers protection, and garments can be treated with pyrethroids, a class of insecticides that often has repellent properties. Vaporized pyrethroids (for example in mosquito coils) are also insect repellents. As infected mosquitoes often feed and rest inside homes, securing screens on windows and doors will help to keep mosquitoes out of the house. In the case of the day-active A. aegypti and A.
albopictus, however, this will have only a limited effect, since many contacts between the mosquitoes and humans occur outdoors. Vaccine As of 2021, no approved vaccines are available. A phase II vaccine trial used a live, attenuated virus that produced viral resistance in 98% of those tested after 28 days; 85% still showed resistance after one year. However, 8% of people reported transient joint pain, and attenuation was found to be due to only two mutations in the E2 glycoprotein. Alternative vaccine strategies have been developed, and show efficacy in mouse models. In August 2014 researchers at the National Institute of Allergy and Infectious Diseases in the USA were testing an experimental vaccine which uses virus-like particles (VLPs) instead of attenuated virus. All 25 people who participated in this phase 1 trial developed strong immune responses. As of 2015, a phase 2 trial was planned, enrolling 400 adults aged 18 to 60 at six locations in the Caribbean. Even with a vaccine, mosquito population control and bite prevention will be necessary to control chikungunya disease. In 2021, two vaccine manufacturers, one in France, the other in the United States, reported successful completion of phase II clinical trials. Treatment Currently, no specific treatment for chikungunya is available. Supportive care is recommended, and symptomatic treatment of fever and joint swelling includes the use of nonsteroidal anti-inflammatory drugs such as naproxen, non-aspirin analgesics such as paracetamol (acetaminophen), and fluids. Aspirin is not recommended due to the increased risk of bleeding. Despite anti-inflammatory effects, corticosteroids are not recommended during the acute phase of disease, as they may cause immunosuppression and worsen infection. Passive immunotherapy has potential benefit in treatment of chikungunya.
Studies in animals using passive immunotherapy have been effective, and clinical studies using passive immunotherapy in those particularly vulnerable to severe infection are currently in progress. Passive immunotherapy involves administration of anti-CHIKV hyperimmune human intravenous antibodies (immunoglobulins) to those exposed to a high risk of chikungunya infection. No antiviral treatment for Chikungunya virus is currently available, though testing has shown several medications to be effective in vitro. Chronic arthritis In those who have more than two weeks of arthritis, ribavirin may be useful. The effect of chloroquine is not clear. It does not appear to help acute disease, but tentative evidence indicates it might help those with chronic arthritis. Steroids do not appear to be an effective treatment. NSAIDs and simple analgesics can be used to provide partial symptom relief in most cases. Methotrexate, a drug used in the treatment of rheumatoid arthritis, has been shown to have benefit in treating inflammatory polyarthritis resulting from chikungunya, though the drug mechanism for improving viral arthritis is unclear. Prognosis The mortality rate of chikungunya is slightly less than 1 in 1,000. Those over the age of 65, neonates, and those with underlying chronic medical problems are most likely to have severe complications. Neonates are vulnerable as it is possible to vertically transmit chikungunya from mother to infant during delivery, which results in high rates of morbidity, as infants lack fully developed immune systems. The likelihood of prolonged symptoms or chronic joint pain is increased with increased age and prior rheumatological disease. Epidemiology Historically, chikungunya has been present mostly in the developing world. The disease causes an estimated 3 million infections each year. Epidemics in the Indian Ocean, the Pacific Islands, and the Americas continue to change the distribution of the disease.
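The prognosis and epidemiology figures above (a mortality rate slightly below 1 in 1,000 and roughly 3 million infections per year) can be combined into a rough upper-bound estimate of annual deaths. This is an illustrative back-of-the-envelope sketch only; the true figure depends on how many infections are symptomatic and reported:

```python
# Back-of-the-envelope estimate combining two figures quoted above:
# ~3 million infections per year and a mortality rate slightly
# below 1 in 1,000. Illustrative arithmetic only.
annual_infections = 3_000_000
mortality_rate_upper = 1 / 1_000  # "slightly less than 1 in 1,000"

deaths_upper_bound = annual_infections * mortality_rate_upper
print(f"Rough upper bound: ~{deaths_upper_bound:,.0f} deaths per year")
```

With these inputs the bound works out to about 3,000 deaths per year, consistent with the order of magnitude implied by the text.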
In Africa, chikungunya is spread by a sylvatic cycle in which the virus largely cycles between other non-human primates, small mammals, and mosquitoes between human outbreaks. During outbreaks, due to the high concentration of virus in the blood of those in the acute phase of infection, the virus can circulate from humans to mosquitoes and back to humans. The transmission of the pathogen between humans and mosquitoes that exist in urban environments was established on multiple occasions from strains occurring on the eastern half of Africa in non-human primate hosts. This emergence and spread beyond Africa may have started as early as the 18th century. Currently, available data does not indicate whether the introduction of chikungunya into Asia occurred in the 19th century or more recently, but this epidemic Asian strain causes outbreaks in India and continues to circulate in Southeast Asia. In Africa, outbreaks were typically tied to heavy rainfall causing increased mosquito populations. In recent outbreaks in urban centers, the virus has spread by circulating between humans and mosquitoes. Global rates of chikungunya infection are variable, depending on outbreaks. When chikungunya was first identified in 1952, it had a low-level circulation in West Africa, with infection rates linked to rainfall. Beginning in the 1960s, periodic outbreaks were documented in Asia and Africa. However, since 2005, following several decades of relative inactivity, chikungunya has re-emerged and caused large outbreaks in Africa, Asia, and the Americas. In India, for instance, chikungunya re-appeared following 32 years of absence of viral activity. Outbreaks have occurred in Europe, the Caribbean, and South America, areas in which chikungunya was not previously transmitted. Local transmission has also occurred in the United States and Australia, countries in which the virus was previously unknown.
In 2005, an outbreak on the island of Réunion was the largest then documented, with an estimated 266,000 cases on an island with a population of approximately 770,000. In a 2006 outbreak, India reported 1.25 million suspected cases. Chikungunya was recently introduced to the Americas, and from 2013 to 2014 in the Americas, 1,118,763 suspected cases and 24,682 confirmed cases were reported by the PAHO. An analysis of the genetic code of Chikungunya virus suggests that the increased severity of the 2005–present outbreak may be due to a change in the genetic sequence which altered the E1 segment of the virus's coat protein, a variant called E1-A226V. This mutation potentially allows the virus to multiply more easily in mosquito cells. The change allows the virus to use the Asian tiger mosquito (an invasive species) as a vector in addition to the more strictly tropical main vector, Aedes aegypti. Enhanced transmission of Chikungunya virus by A. albopictus could mean an increased risk for outbreaks in other areas where the Asian tiger mosquito is present. A. albopictus is an invasive species which has spread through Europe, the Americas, the Caribbean, Africa and the Middle East. After the detection of Zika virus in Brazil in April 2015, the first ever in the Western Hemisphere, it is now thought some chikungunya and dengue cases could in fact be Zika virus cases or coinfections. History The word chikungunya is believed to have been derived from a description in the Makonde language, meaning "that which bends up", of the contorted posture of people affected with the severe joint pain and arthritic symptoms associated with this disease. The disease was first described by Marion Robinson and W.H.R.
Lumsden in 1955, following an outbreak in 1952 on the Makonde Plateau, along the border between Mozambique and Tanganyika (the mainland part of modern-day Tanzania). According to the initial 1955 report about the epidemiology of the disease, the term chikungunya is derived from the Makonde root verb kungunyala, meaning "to dry up or become contorted". In concurrent research, Robinson glossed the Makonde term more specifically as "that which bends up". Subsequent authors apparently overlooked the references to the Makonde language and assumed the term to have been derived from Swahili, the lingua franca of the region. The erroneous attribution to Swahili has been repeated in numerous print sources. Many erroneous spellings of the name of the disease are also in common use. Since its discovery in Tanganyika, Africa, in 1952, Chikungunya virus outbreaks have occurred occasionally in Africa, South Asia, and Southeast Asia, but recent outbreaks have spread the disease over a wider range. The first recorded outbreak of this disease may have been in 1779. This is in agreement with the molecular genetics evidence that suggests it evolved around the year 1700. Research Chikungunya is one of more than a dozen agents researched as a potential biological weapon. This disease is part of the group of neglected tropical diseases. See also Coalition for Epidemic Preparedness Innovations Epidemiology of chikungunya References Works cited Burt FJ, Chen W, Miner JJ, Lenschow DJ, Merits A, Schnettler E, Kohl A, Rudd PA, Taylor A, Herrero LJ, Zaid A, Ng LF, Mahalingam S (April 2017). "Chikungunya virus: an update on the biology and pathogenesis of this emerging pathogen". Lancet Infect Dis. 17 (4): e107–e117. doi:10.1016/S1473-3099(16)30385-1. PMID 28159534. Vairo F, Haider N, Kock R, Ntoumi F, Ippolito G, Zumla A (December 2019). "Chikungunya: Epidemiology, Pathogenesis, Clinical Features, Management, and Prevention" (PDF). Infect Dis Clin North Am. 33 (4): 1003–1025.
doi:10.1016/j.idc.2019.08.006. PMID 31668189. S2CID 241044156. External links Chikungunya fact sheet—from the World Health Organization (WHO) Chikungunya outbreaks—from the World Health Organization (WHO) Togaviridae—from the Virus Pathogen Database and Analysis Resource (ViPR) "Chikungunya". European Centre for Disease Prevention and Control. 23 January 2008. Archived from the original on 5 August 2009. Retrieved 20 May 2013. Chikungunya in Cuba Archived 18 July 2014 at the Wayback Machine Chikungunya: The key role of "innate immunity"
You act as a bridge between medical jargon and everyday language. Explain medical terms in a manner that is comprehensive yet approachable for non-experts.
Please help me grasp the concept behind the medical term 'Atenolol/chlorthalidone.'
Atenolol/chlorthalidone, also known as co-tenidone, is a combination medication used to treat high blood pressure. It is made up of atenolol, a beta-blocker, and chlortalidone, a diuretic. It is not recommended as an initial treatment but may be used in those who are taking atenolol and chlortalidone individually. It is taken by mouth. Common side effects include gastrointestinal upset and gout. Serious side effects may include liver problems, pancreatitis, and psychosis. Use is not recommended during pregnancy. Use during breastfeeding may harm the baby. Atenolol works by blocking β1-adrenergic receptors in the heart, thus decreasing the heart rate and workload. Chlorthalidone works by increasing the amount of sodium lost by the kidneys. The combination was approved for medical use in the United States in 1984. It is available as a generic medication. In 2017, it was the 295th most commonly prescribed medication in the United States, with more than one million prescriptions. References External links "Atenolol mixture with chlorthalidone". Drug Information Portal. U.S. National Library of Medicine.
You are a facilitator of medical knowledge. Provide thorough and accessible explanations of medical terms, catering to both specialists and non-specialists.
Could you offer a clear explanation of the term 'Cyanide poisoning' as used in the medical field?
Cyanide poisoning is poisoning that results from exposure to any of a number of forms of cyanide. Early symptoms include headache, dizziness, fast heart rate, shortness of breath, and vomiting. This phase may then be followed by seizures, slow heart rate, low blood pressure, loss of consciousness, and cardiac arrest. Onset of symptoms usually occurs within a few minutes. Some survivors have long-term neurological problems. Toxic cyanide-containing compounds include hydrogen cyanide gas and a number of cyanide salts. Poisoning is relatively common following breathing in smoke from a house fire. Other potential routes of exposure include workplaces involved in metal polishing, certain insecticides, the medication sodium nitroprusside, and certain seeds such as those of apples and apricots. Liquid forms of cyanide can be absorbed through the skin. Cyanide ions interfere with cellular respiration, resulting in the body's tissues being unable to use oxygen. Diagnosis is often difficult. It may be suspected in a person following a house fire who has a decreased level of consciousness, low blood pressure, or high lactic acid. Blood levels of cyanide can be measured but take time. Levels of 0.5–1 mg/L are mild, 1–2 mg/L are moderate, 2–3 mg/L are severe, and greater than 3 mg/L generally result in death. If exposure is suspected, the person should be removed from the source of exposure and decontaminated. Treatment involves supportive care and giving the person 100% oxygen. Hydroxocobalamin (vitamin B12a) appears to be useful as an antidote and is generally first-line. Sodium thiosulphate may also be given. Historically, cyanide has been used for mass suicide and by the Nazis for genocide. Signs and symptoms Acute exposure If hydrogen cyanide is inhaled it can cause a coma with seizures, apnea, and cardiac arrest, with death following in a matter of seconds.
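The blood-level bands quoted above can be sketched as a simple lookup. This is a minimal illustrative sketch, not a clinical tool; note also that the quoted bands share their endpoints, so the boundary handling below is an arbitrary convention:

```python
def cyanide_severity(level_mg_per_l: float) -> str:
    """Classify a blood cyanide level (mg/L) into the severity bands
    quoted above (0.5-1 mild, 1-2 moderate, 2-3 severe, >3 generally
    fatal). Illustrative only -- not for clinical use."""
    if level_mg_per_l > 3:
        return "generally fatal"
    if level_mg_per_l >= 2:
        return "severe"
    if level_mg_per_l >= 1:
        return "moderate"
    if level_mg_per_l >= 0.5:
        return "mild"
    return "below the quoted bands"

print(cyanide_severity(1.5))  # moderate
```

Because measuring blood cyanide takes time, such bands are mainly useful retrospectively; acute management relies on the clinical picture.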
At lower doses, loss of consciousness may be preceded by general weakness, dizziness, headaches, vertigo, confusion, and perceived difficulty in breathing. At the first stages of unconsciousness, breathing is often sufficient or even rapid, although the state of the person progresses towards a deep coma, sometimes accompanied by pulmonary edema, and finally cardiac arrest. A cherry red skin color that darkens may be present as the result of increased venous hemoglobin oxygen saturation. Despite the similar name, cyanide does not directly cause cyanosis. A fatal dose for humans can be as low as 1.5 mg/kg body weight. Other sources claim a lethal dose is 1–3 mg per kg body weight for vertebrates. Chronic exposure Exposure to lower levels of cyanide over a long period (e.g., after use of improperly processed cassava roots, which are a primary food source in tropical Africa) results in increased blood cyanide levels, which can result in weakness and a variety of symptoms, including permanent paralysis, nervous lesions, hypothyroidism, and miscarriages. Other effects include mild liver and kidney damage. Causes Cyanide poisoning can result from the ingestion of cyanide salts; imbibing pure liquid prussic acid; skin absorption of prussic acid; intravenous infusion of nitroprusside for hypertensive crisis; or the inhalation of hydrogen cyanide gas. The last typically occurs through one of three mechanisms: The gas is directly released from canisters (e.g. as part of a pesticide, insecticide, or Zyklon B). It is generated on site by reacting potassium cyanide or sodium cyanide with sulfuric acid (e.g. in a modern American gas chamber). Fumes arise during a building fire or any similar scenario involving the burning of polyurethane, vinyl or other polymer products that required nitriles in their production. As potential contributing factors, cyanide is present in: Tobacco smoke. Many seeds or kernels such as those of almonds, apricots, apples, oranges, and flaxseed.
Foods including cassava (also known as tapioca, yuca or manioc) and bamboo shoots. As a potential harm-reduction factor, Vitamin B12, in the form of hydroxocobalamin (also spelled hydroxycobalamin), might reduce the negative effects of chronic exposure, whereas a deficiency might worsen negative health effects following exposure to cyanide. Mechanism Cyanide is a potent cytochrome c oxidase (COX, a.k.a. Complex IV) inhibitor. As such, cyanide poisoning is a form of histotoxic hypoxia, because it interferes with oxidative phosphorylation. Specifically, cyanide binds to the heme a3-CuB binuclear center of COX (and thus is a non-competitive inhibitor of it). This prevents electrons passing through COX from being transferred to O2, which not only blocks the mitochondrial electron transport chain but also interferes with the pumping of a proton out of the mitochondrial matrix which would otherwise occur at this stage. Therefore, cyanide interferes not only with aerobic respiration but also with the ATP synthesis pathway it facilitates, owing to the close relationship between those two processes. One antidote for cyanide poisoning, nitrite (i.e. via amyl nitrite), works by converting ferrohemoglobin to ferrihemoglobin, which can then compete with COX for free cyanide (as the cyanide will bind to the iron in its heme groups instead). Ferrihemoglobin cannot carry oxygen, but the amount of ferrihemoglobin that can be formed without impairing oxygen transport is much greater than the amount of COX in the body. Cyanide is a broad-spectrum poison because the reaction it inhibits is essential to aerobic metabolism; COX is found in many forms of life.
However, susceptibility to cyanide is far from uniform across affected species; for instance, plants have an alternative electron transfer pathway available that passes electrons directly from ubiquinone to O2, which confers cyanide resistance by bypassing COX. Diagnosis Lactate is produced by anaerobic glycolysis when oxygen concentration becomes too low for the normal aerobic respiration pathway. Cyanide poisoning inhibits aerobic respiration and therefore increases anaerobic glycolysis, which causes a rise of lactate in the plasma. A lactate concentration above 10 mmol per liter is an indicator of cyanide poisoning, as defined by the presence of a blood cyanide concentration above 40 µmol per liter. Lactate levels greater than 6 mmol/L after reported or strongly suspected pure cyanide poisoning, such as cyanide-containing smoke exposure, suggest significant cyanide exposure. Methods of detection include colorimetric assays such as the Prussian blue test, the pyridine-barbiturate assay, also known as the "Conway diffusion method", and the taurine fluorescence-HPLC, but like all colorimetric assays these are prone to false positives. Lipid peroxidation resulting in "TBARS", an artifact of heart attack, produces dialdehydes that cross-react with the pyridine-barbiturate assay. Meanwhile, the taurine-fluorescence-HPLC assay used for cyanide detection is identical to the assay used to detect glutathione in spinal fluid. Cyanide and thiocyanate assays have been run with mass spectrometry (LC/MS/MS), which are considered specific tests. Since cyanide has a short half-life, the main metabolite, thiocyanate, is typically measured to determine exposure. Other methods of detection include the identification of plasma lactate. Treatment Decontamination Decontamination of people exposed to hydrogen cyanide gas only requires removal of the outer clothing and the washing of their hair. Those exposed to liquids or powders generally require full decontamination.
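The lactate cut-offs described in the Diagnosis section above can be sketched as a small helper. This is an illustrative sketch only, not a diagnostic tool; the thresholds are simply the 10 mmol/L and 6 mmol/L values quoted in the text:

```python
def lactate_flags(lactate_mmol_per_l: float, cyanide_suspected: bool) -> list[str]:
    """Return the diagnostic flags described above for a plasma lactate
    value (mmol/L). Illustrative only -- not a clinical tool."""
    flags = []
    # >10 mmol/L is described as an indicator of cyanide poisoning
    if lactate_mmol_per_l > 10:
        flags.append("indicator of cyanide poisoning")
    # >6 mmol/L suggests significant exposure when poisoning is
    # reported or strongly suspected (e.g. smoke inhalation)
    if cyanide_suspected and lactate_mmol_per_l > 6:
        flags.append("suggests significant cyanide exposure")
    return flags

print(lactate_flags(12.0, cyanide_suspected=True))
```

A value of 12 mmol/L with suspected exposure trips both flags; an isolated mildly elevated lactate without a suggestive history returns none, mirroring the conditional wording of the text.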
Antidote The International Programme on Chemical Safety issued a survey (IPCS/CEC Evaluation of Antidotes Series) that lists the following antidotal agents and their effects: oxygen, sodium thiosulfate, amyl nitrite, sodium nitrite, 4-dimethylaminophenol, hydroxocobalamin, and dicobalt edetate (Kelocyanor), as well as several others. Other commonly recommended antidotes are solutions A and B (a solution of ferrous sulfate in aqueous citric acid, and aqueous sodium carbonate, respectively) and amyl nitrite. The United States standard cyanide antidote kit first uses a small inhaled dose of amyl nitrite, followed by intravenous sodium nitrite, followed by intravenous sodium thiosulfate. Hydroxocobalamin was approved for use in the US in late 2006 and is available in Cyanokit antidote kits. Sulfanegen TEA, which could be delivered to the body through an intra-muscular (IM) injection, detoxifies cyanide and converts the cyanide into thiocyanate, a less toxic substance. Alternative methods of treating cyanide intoxication are used in other countries. The British Health and Safety Executive (HSE) has recommended against the use of solutions A and B because of their limited shelf life, potential to cause iron poisoning, and limited applicability (effective only in cases of cyanide ingestion, whereas the main modes of poisoning are inhalation and skin contact). The HSE has also questioned the usefulness of amyl nitrite due to storage/availability problems, risk of abuse, and lack of evidence of significant benefits. It also states that the availability of kelocyanor at the workplace may mislead doctors into treating a patient for cyanide poisoning when this is an erroneous diagnosis. The HSE no longer recommends a particular cyanide antidote. History Fires The República Cromañón nightclub fire broke out in Buenos Aires, Argentina on 30 December 2004, killing 194 people and leaving at least 1,492 injured.
Most of the victims died from inhaling poisonous gases and carbon monoxide. After the fire, the technical institution INTI found that the level of toxicity due to the materials and volume of the building was 225 ppm of cyanide in the air. A lethal dose for rats is between 150 ppm and 220 ppm, meaning the air in the building was highly toxic. On 5 December 2009, a fire in the night club Lame Horse (Khromaya Loshad) in the Russian city of Perm took the lives of 156 people. Fatalities consisted of 111 people at the site and 45 later in hospitals. One of the main causes of death was poisoning from cyanide and other toxic gases released by the burning of plastic and polyurethane foam used in the construction of the club's interiors. In terms of the number of deaths, this was the deadliest fire in post-Soviet Russia. On 27 January 2013, a fire at the Kiss nightclub in the city of Santa Maria, in the south of Brazil, caused the poisoning of hundreds of young people by cyanide released by the combustion of soundproofing foam made with polyurethane. By March 2013, 245 fatalities were confirmed. Gas chambers In early 1942, Zyklon B, which contains hydrogen cyanide, emerged as the preferred killing tool of Nazi Germany for use in extermination camps during the Holocaust. The chemical was used to murder roughly one million people in gas chambers installed in extermination camps at Auschwitz-Birkenau, Majdanek, and elsewhere. Most of the people who were murdered were Jews, and by far the majority of these murders took place at Auschwitz. Zyklon B was supplied to concentration camps at Mauthausen, Dachau, and Buchenwald by the distributor Heli, and to Auschwitz and Majdanek by Testa. Camps also occasionally bought Zyklon B directly from the manufacturers. Of the 729 tonnes of Zyklon B sold in Germany in 1942–44, 56 tonnes (about eight percent of domestic sales) were sold to concentration camps. Auschwitz received 23.8 tonnes, of which six tonnes were used for fumigation.
The remainder was used in the gas chambers or lost to spoilage (the product had a stated shelf life of only three months). Testa conducted fumigations for the Wehrmacht and supplied them with Zyklon B. They also offered courses to the SS in the safe handling and use of the material for fumigation purposes. In April 1941, the German agriculture and interior ministries designated the SS as an authorized applier of the chemical, and thus they were able to use it without any further training or governmental oversight. Hydrogen cyanide gas has been used for judicial execution in some states of the United States, where cyanide was generated by reaction between potassium cyanide (or sodium cyanide) dropped into a compartment containing sulfuric acid, directly below the chair in the gas chamber. Suicide Cyanide salts are sometimes used as fast-acting suicide devices. Cyanide salts react more rapidly when stomach acidity is high. On 26 January 1904, company promoter and swindler Whitaker Wright committed suicide by ingesting cyanide in a court anteroom immediately after being convicted of fraud. In February 1937, the Uruguayan short story writer Horacio Quiroga committed suicide by drinking cyanide in a hospital at Buenos Aires. In 1937, polymer chemist Wallace Carothers committed suicide by cyanide. In the 1943 Operation Gunnerside to destroy the Vemork Heavy Water Plant in World War II (an attempt to stop or slow German atomic bomb progress), the commandos were given cyanide tablets (cyanide enclosed in rubber) kept in the mouth and were instructed to bite into them in case of German capture. The tablets ensured death within three minutes. Cyanide, in the form of pure liquid prussic acid (a historical name for hydrogen cyanide), was the favored suicide agent of Nazi Germany. Erwin Rommel (1944), Adolf Hitler's wife Eva Braun (1945), and Nazi leaders Heinrich Himmler (1945), possibly Martin Bormann (1945), and Hermann Göring (1946) all committed suicide by ingesting it.
It is speculated that, in 1954, Alan Turing used an apple that had been injected with a solution of cyanide to commit suicide after being convicted of having a homosexual relationship, which was illegal at the time in the United Kingdom, and forced to undergo hormonal castration to avoid prison. An inquest determined that Turing's death from cyanide poisoning was a suicide, although this has been disputed. Members of the Sri Lankan LTTE (Liberation Tigers of Tamil Eelam, whose insurgency lasted from 1983 to 2009) used to wear cyanide vials around their necks with the intention of committing suicide if captured by the government forces. In Moscow on 22 June 1977, Aleksandr Dmitrievich Ogorodnik, a Soviet diplomat accused of spying on behalf of the Colombian Intelligence Agency and the US Central Intelligence Agency, was arrested. During the interrogations, Ogorodnik offered to write a full confession and asked for his pen. Inside the pen cap was a cleverly hidden cyanide pill, which, when bitten, caused Ogorodnik to die before he hit the floor, according to the Soviets. On 18 November 1978, at Jonestown, a total of 909 individuals died, many from apparent cyanide poisoning, in an event termed "revolutionary suicide" by Jones and some members on an audio tape of the event and in prior discussions. The poisonings in Jonestown followed the murder of five others by Temple members at Port Kaituma, including United States Congressman Leo Ryan, an act that Jones ordered. Four other Temple members committed murder-suicide in Georgetown at Jones's command. On 6 June 1985, serial killer Leonard Lake died in custody after having ingested cyanide pills he had sewn into his clothes. On 28 June 2012, Wall Street trader Michael Marin ingested a cyanide pill seconds after a guilty verdict was read in his arson trial in Phoenix, AZ; he died minutes after. On 22 June 2015, John B.
McLemore, a horologist and the central figure of the podcast S-Town, died after ingesting cyanide. On 29 November 2017, Slobodan Praljak died from drinking potassium cyanide, after being convicted of war crimes by the International Criminal Tribunal for the former Yugoslavia. Mining and industrial In 1993, an illegal spill resulted in the death of seven people in Avellaneda, Argentina. In their memory, the National Environmental Conscious Day (Día Nacional de la Conciencia Ambiental) was established. In 2000, a spill at Baia Mare, Romania, resulted in the worst environmental disaster in Europe since Chernobyl. In 2000, Allen Elias, CEO of Evergreen Resources, was convicted of knowing endangerment for his role in the cyanide poisoning of employee Scott Dominguez. This was one of the first successful criminal prosecutions of a corporate executive by the Environmental Protection Agency. Murder John Tawell, a murderer who in 1845 became the first person to be arrested as the result of telecommunications technology. Grigori Rasputin (1916; attempted, later killed by gunshot) The Goebbels children (1945) Stepan Bandera (1959) Jonestown, Guyana, was the site of a large mass murder–suicide, in which over 900 members of the Peoples Temple drank potassium cyanide–laced Flavor Aid in 1978. Chicago Tylenol murders (1982) Timothy Marc O'Bryan (1966–1974) died on October 31, 1974, after ingesting potassium cyanide placed into a giant Pixy Stix. His father, Ronald Clark O'Bryan, was convicted of Tim's murder plus four counts of attempted murder. O'Bryan put potassium cyanide into five giant Pixy Stix that he gave to his son and daughter along with three other children. Only Timothy ate the poisoned candy and died. Bruce Nickell (5 June 1986) Murdered by his wife, who poisoned a bottle of Excedrin.
Richard Kuklinski (1935–2006) Janet Overton (1942–1988) Her husband, Richard Overton, was convicted of poisoning her, but Janet's symptoms did not match those of classic cyanide poisoning, the timeline was inconsistent with cyanide poisoning, and the amount found was just a trace. The diagnostic method used was prone to false positives. Richard Overton died in prison in 2009. Urooj Khan (1966–2012), won the lottery and was found dead a few days later. A blood diagnostic reported a lethal level of cyanide in his blood, but the body did not display any classic symptoms of cyanide poisoning, and no link to cyanide could be found in Urooj's social circle. The diagnostic method used was the Conway diffusion method, prone to false positives with artifacts of heart attack and kidney failure. Autumn Marie Klein (20 April 2013), a prominent 41-year-old neuroscientist and physician, died from cyanide poisoning. Klein's husband, Robert J. Ferrante, also a prominent neuroscientist who used cyanide in his research, was convicted of murder and sentenced to life in prison for her death. Robert Ferrante is appealing his conviction. Mirna Salihin died in hospital on 6 January 2016, after drinking a Vietnamese iced coffee at a cafe in a shopping mall in Jakarta. Police reports claim that cyanide poisoning was the most likely cause of her death. Jolly Thomas of Kozhikode, Kerala, India, was arrested in 2019 for the murder of six family members. The murders took place over a 14-year period, and each victim had eaten a meal prepared by the killer. The murders were allegedly motivated by a desire to control the family finances and property. Mei Xiang Li of Brooklyn, NY, collapsed and died in April 2017, with cyanide later reported to be in her blood. However, Mei never exhibited symptoms of cyanide poisoning and no link to cyanide could be found in her life. Warfare or terrorism In 1988, between 3,200 and 5,000 people died in the Halabja massacre owing to unknown chemical nerve agents.
Hydrogen cyanide gas was strongly suspected. In 1995, a device was discovered in a restroom in the Kayabacho Tokyo subway station, consisting of bags of sodium cyanide and sulfuric acid with a remote controlled motor to rupture them, in what was believed to be an attempt by the Aum Shinrikyo cult to produce toxic amounts of hydrogen cyanide gas. In 2003, Al Qaeda reportedly planned to release cyanide gas into the New York City Subway system. The attack was supposedly aborted because there would not be enough casualties. Research Cobinamide is the final compound in the biosynthesis of cobalamin. It has greater affinity for cyanide than cobalamin itself, which suggests that it could be a better option for emergency treatment. See also Anaerobic glycolysis Lactic acidosis List of poisonings Konzo References Explanatory notes Citations Sources Longerich, Peter (2010). Holocaust: The Nazi Persecution and Murder of the Jews. Oxford; New York: Oxford University Press. ISBN 978-0-19-280436-5. Hayes, Peter (2004). From Cooperation to Complicity: Degussa in the Third Reich. Cambridge; New York; Melbourne: Cambridge University Press. ISBN 978-0-521-78227-2. Piper, Franciszek (1994). "Gas Chambers and Crematoria". In Gutman, Yisrael; Berenbaum, Michael (eds.). Anatomy of the Auschwitz Death Camp. Bloomington, Indiana: Indiana University Press. pp. 157–182. ISBN 978-0-253-32684-3.
You are an expert in medical terminology. Explain medical terms in a way that is both precise and easy to understand for non-medical audiences.
I'm trying to expand my medical knowledge. Can you elucidate the term 'Cervicitis'?
Cervicitis is inflammation of the uterine cervix. Cervicitis in women has many features in common with urethritis in men and many cases are caused by sexually transmitted infections. Non-infectious causes of cervicitis can include intrauterine devices, contraceptive diaphragms, and allergic reactions to spermicides or latex condoms. Cervicitis affects over half of all women during their adult life. Cervicitis may ascend and cause endometritis and pelvic inflammatory disease (PID). Cervicitis may be acute or chronic. Symptoms and signs Cervicitis may have no symptoms. If symptoms do manifest, they may include: Abnormal vaginal bleeding after intercourse or between periods Unusual gray, white, or yellow vaginal discharge Painful sexual intercourse Pain in the vagina Pressure or heaviness in the pelvis Frequent, painful urination Causes Cervicitis can be caused by any of a number of infections, of which the most common are chlamydia and gonorrhea, with chlamydia accounting for approximately 40% of cases. Other causes include Trichomonas vaginalis, herpes simplex virus, and Mycoplasma genitalium. While sexually transmitted infections (STIs) are the most common cause of cervicitis, there are other potential causes as well. These include vaginitis caused by bacterial vaginosis or Trichomonas vaginalis; a device inserted into the pelvic area (i.e. a cervical cap, IUD, pessary, etc.); an allergy to spermicides or latex in condoms; or exposure to a chemical, for example while douching. Inflammation can also be idiopathic, where no specific cause is found. While IUDs do not cause cervicitis, active cervicitis is a contraindication to placing an IUD. If a person with an IUD develops cervicitis, it usually does not need to be removed if the person wants to continue using it. There are also certain behaviors that can place individuals at a higher risk for contracting cervicitis.
High-risk sexual behavior, a history of STIs, many sexual partners, sex at an early age, and sexual partners who engage in high-risk sexual behavior or have had an STI can increase the likelihood of contracting cervicitis. Diagnosis To diagnose cervicitis, a clinician will perform a pelvic exam. This exam includes a speculum exam with visual inspection of the cervix for abnormal discharge, which is usually purulent, or bleeding from the cervix with little provocation. Swabs can be used to collect a sample of this discharge for inspection under a microscope and/or lab testing for gonorrhea, chlamydia, and Trichomonas vaginalis. A bimanual exam, in which the clinician palpates the cervix to check for associated pain, should be done to assess for pelvic inflammatory disease. Prevention The risk of contracting cervicitis from STIs can be reduced by using condoms during every sexual encounter. Condoms are effective against the spread of STIs like chlamydia and gonorrhea that cause cervicitis. Also, being in a long-term monogamous relationship with an uninfected partner can lower the risk of an STI. Ensuring that foreign objects like tampons are properly placed in the vagina and following instructions on how long to leave them inside, how often to change them, and/or how often to clean them can reduce the risk of cervicitis. In addition, avoiding potential irritants like douches and deodorant tampons can prevent cervicitis. Treatment Non-infectious causes of cervicitis are primarily treated by eliminating or limiting exposure to the irritant. Antibiotics, usually azithromycin or doxycycline, or antiviral medications are used to treat infectious causes.
Women at increased risk of sexually transmitted infections (i.e., less than 25 years of age and a new sexual partner, a sexual partner with other partners, or a sexual partner with a known sexually transmitted infection) should be treated presumptively for chlamydia and possibly gonorrhea, particularly if follow-up care cannot be ensured or diagnostic testing is not possible. For lower-risk women, deferring treatment until test results are available is an option. To reduce the risk of reinfection, women should abstain from sexual intercourse for seven days after treatment is started. Also, sexual partners (within the last sixty days) of anyone with infectious cervicitis should be referred for evaluation or treated through expedited partner therapy (EPT). EPT is the process by which a clinician treats the sexual partner of a patient diagnosed with a sexually transmitted infection without first meeting or examining the partner. Sexual partners should also avoid sexual intercourse until they and their partners are adequately treated. Untreated cervicitis is also associated with an increased susceptibility to HIV infection. Women with infectious cervicitis should be tested for other sexually transmitted infections, including HIV and syphilis. Cervicitis should be followed up. Women with a specific diagnosis of chlamydia, gonorrhea, or trichomonas should see a clinician three months after treatment for repeat testing because they are at higher risk of getting reinfected, regardless of whether their sex partners were treated. Treatment in pregnant women is the same as for those who are not pregnant. References == External links ==
You are a resource for medical understanding. Offer detailed explanations of medical terms, making complex concepts clear and comprehensible.
I'd like to learn more about the medical term 'Extravasation of urine.' Can you provide some details?
Extravasation of urine refers to the condition where an interruption of the urethra leads to a collection of urine in other cavities, such as the scrotum or the penis in males. It can be associated with a calculus. Mechanism An injury to the urethra leaving Buck's fascia intact results in a collection of urine (extravasation) limited to the penis, deep to Buck's fascia. However, if the injury to the bulb of the penis results in urethral injury accompanying a tear of Buck's fascia, then extravasated blood and urine would accumulate in the superficial perineal space, passing into the penis (outer to Buck's fascia) as well as the scrotum and lower anterior abdominal wall. Extravasation of urine involving a compromised Buck's fascia can be appreciated clinically by blood collecting in the superficial pouch, resulting in a butterfly-shaped region around the penis. Urinoma Long-term complications of renal trauma, ureteral obstruction, or kidney transplant can lead to the formation of a urinoma encapsulating extravasated urine. References == External links ==
You are a conduit for medical education. Your task is to offer detailed explanations of medical terms, ensuring they are understandable and precise.
The term 'Temporal lobe epilepsy' keeps coming up in medical discussions. What does it stand for?
Temporal lobe epilepsy (TLE) is a chronic disorder of the nervous system characterized by recurrent, unprovoked focal seizures that originate in the temporal lobe of the brain and last about one or two minutes. TLE is the most common form of epilepsy with focal seizures. A focal seizure in the temporal lobe may spread to other areas of the brain, in which case it becomes a focal to bilateral seizure. TLE is diagnosed by taking a medical history, blood tests, and brain imaging. It can have a number of causes such as head injury, stroke, brain infections, structural lesions in the brain, or brain tumors, or it can be of unknown onset. The first line of treatment is anticonvulsants. Surgery may be an option, especially when there is an observable abnormality in the brain. Another treatment option is electrical stimulation of the brain through an implanted device called the vagus nerve stimulator (VNS). Types Over forty types of epilepsy are recognized and these are divided into two main groups: focal seizures and generalized seizures. Focal seizures account for approximately sixty percent of all adult cases. Temporal lobe epilepsy (TLE) is the single most common form of focal seizure. The International League Against Epilepsy (ILAE) recognizes two main types of temporal lobe epilepsy: mesial temporal lobe epilepsy (MTLE), arising in the hippocampus, the parahippocampal gyrus and the amygdala, which are located in the inner (medial) aspect of the temporal lobe; and lateral temporal lobe epilepsy (LTLE), the rarer type, arising in the neocortex at the outer (lateral) surface of the temporal lobe. The seizures of LTLE are characterized by auditory or visual features. Autosomal dominant lateral temporal lobe epilepsy (ADLTLE) is a rare hereditary condition, often associated with mutations in the LGI1 gene. Signs and symptoms When a seizure begins in the temporal lobe, its effects depend on the precise location of its point of origin, its locus.
In 1981, the ILAE recognized three types of seizures occurring in temporal lobe epilepsy. The classification was based on EEG findings. However, as of 2017 the general classification of seizures has been revised. The newer classification uses three key features: where the seizures begin, the level of awareness during a seizure, and other features. Focal seizures Focal seizures in the temporal lobe involve small areas of the lobe such as the amygdala and hippocampus. The newer classification gives two types of focal onset seizures: focal aware and focal impaired awareness. Focal aware seizures Focal aware means that the level of consciousness is not altered during the seizure. In temporal lobe epilepsy, a focal seizure usually causes abnormal sensations only. Often, the patient cannot describe the sensations. These may be: Sensations such as déjà vu (a feeling of familiarity), jamais vu (a feeling of unfamiliarity) Amnesia of a single memory or set of memories A sudden sense of unprovoked fear and anxiety Nausea Auditory, visual, olfactory, gustatory, or tactile hallucinations; olfactory hallucinations often seem indescribable to patients beyond "pleasant" or "unpleasant" Visual distortions such as macropsia and micropsia Dissociation or derealisation Synesthesia (stimulation of one sense experienced in a second sense) Dysphoric or euphoric feelings, fear, anger, and other emotions Focal aware seizures are often called "auras" when they serve as a warning sign of a subsequent seizure. Regardless, an aura is actually a seizure itself, and such a focal seizure may or may not progress to a focal impaired awareness seizure. People who experience only focal aware seizures may not recognize what they are, nor seek medical care. Focal impaired awareness seizures Focal impaired awareness seizures are seizures which impair consciousness to some extent: they alter the person's ability to interact normally with their environment.
They usually begin with a focal aware seizure, then spread to a larger portion of the temporal lobe, resulting in impaired consciousness. They may include autonomic and psychic features present in focal aware seizures. Signs may include: Motionless staring Automatic movements of the hands or mouth Confusion and disorientation Altered ability to respond to others, unusual speech Transient aphasia (losing the ability to speak, read, or comprehend spoken words) These seizures tend to have a warning or aura before they occur, and when they occur they generally last only 1–2 minutes. It is not uncommon for an individual to be tired or confused for up to 15 minutes after a seizure has occurred, although postictal confusion can last for hours or even days. Though these seizures may not seem harmful because the individual does not convulse, they can be extremely harmful if the individual is left alone around dangerous objects. For example, if a person with complex partial seizures is driving alone, a seizure can cause them to run into a ditch or, worse, cause an accident involving multiple people. With this type, some people do not even realize they are having a seizure, and most of the time their memory of the period from just before to just after the seizure is lost. First aid is only required if there has been an injury or if this is the first time a person has had a seizure. Focal to bilateral seizures or generalized seizures Seizures which begin in the temporal lobe and then spread to involve both sides of the brain are termed focal to bilateral. Where both sides of the brain or the whole brain are involved from the onset, these seizures are known as generalized seizures and may be tonic clonic. The arms, trunk, and legs stiffen (the tonic phase), in either a flexed or extended position, and then jerk (the clonic phase). These were previously known as grand mal seizures. The term grand mal comes from French, meaning "major affliction".
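The seizure descriptors discussed above (site of onset, level of awareness, and spread to both hemispheres) can be caricatured as a three-question decision rule. The sketch below is purely illustrative: the argument names are assumptions made here, and the real 2017 ILAE classification is considerably richer than three booleans.

```python
# Toy encoding of the seizure descriptors discussed above: focal vs
# generalized onset, retained vs impaired awareness, and spread to both
# hemispheres. Illustrative only; not the full ILAE 2017 classification.

def classify_seizure(onset_focal: bool, awareness_retained: bool,
                     spreads_bilaterally: bool) -> str:
    """Return the descriptor used in the text for this combination of features."""
    if not onset_focal:
        return "generalized onset"
    if spreads_bilaterally:
        return "focal to bilateral"
    return "focal aware" if awareness_retained else "focal impaired awareness"

print(classify_seizure(True, True, False))   # focal aware (an "aura")
print(classify_seizure(True, False, False))  # focal impaired awareness
print(classify_seizure(True, False, True))   # focal to bilateral
```

Note the ordering: a seizure that starts focally and spreads bilaterally is labelled "focal to bilateral" regardless of awareness, matching the text's description of spread taking precedence in the name.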
Postictal period There is some period of recovery in which neurological function is altered after each of these seizure types. This is the postictal state. The degree and length of postictal impairment directly correlate with the severity of the seizure type. Focal aware seizures often last less than sixty seconds; focal impaired awareness seizures may last up to two minutes; and generalized tonic clonic seizures may last up to three minutes. The postictal state in seizures other than focal aware may last much longer than the seizure itself. Because a major function of the temporal lobe is short-term memory, a focal impaired awareness seizure or a focal to bilateral seizure can cause amnesia for the period of the seizure, meaning that the seizure may not be remembered. Complications Depression Individuals with temporal lobe epilepsy have a higher prevalence of depression than the general population. Although the psychosocial impacts of epilepsy may be causative, there are also links in the phenomenology and neurobiology of TLE and depression. Memory The temporal lobe and particularly the hippocampus play an important role in memory processing. Declarative memory (memories which can be consciously recalled) is formed in the area of the hippocampus called the dentate gyrus. Temporal lobe epilepsy is associated with memory disorders and loss of memory. Animal models and clinical studies show that memory loss correlates with temporal lobe neuronal loss in temporal lobe epilepsy. Verbal memory deficit correlates with pyramidal cell loss in TLE, more so with loss on the left, while neuronal loss on the right is more prominent in non-verbal (visuospatial) memory loss. Childhood onset After childhood onset, one third will "grow out" of TLE, finding a lasting remission up to an average of 20 years.
The finding of a lesion such as hippocampal sclerosis (a scar in the hippocampus), tumour, or dysplasia on magnetic resonance imaging (MRI) predicts the intractability of seizures. Personality The effect of temporal lobe epilepsy on personality is a historical observation dating to the 1800s. Personality and behavioural change in temporal lobe epilepsy is seen as a chronic condition when it persists for more than three months. Geschwind syndrome is a set of behavioural phenomena seen in some people with TLE. Documented by Norman Geschwind, signs include: hypergraphia (a compulsion to write or draw excessively), hyperreligiosity (intense religious or philosophical experiences or interests), hyposexuality (reduced sexual interest or drive), and circumstantiality (a non-linear thought pattern; talking at length about irrelevant and trivial details). The personality changes generally vary by hemisphere. The existence of a "temporal lobe epileptic personality" and of Geschwind syndrome has been disputed and research is inconclusive. Causes The causes of TLE include mesial temporal sclerosis, traumatic brain injury, brain infections such as encephalitis and meningitis, hypoxic brain injury, stroke, cerebral tumours, and genetic syndromes. Temporal lobe epilepsy is not the result of psychiatric illness or fragility of the personality. Febrile seizures Although the theory is controversial, there is a link between febrile seizures (seizures coinciding with episodes of fever in young children) and subsequent temporal lobe epilepsy, at least epidemiologically. Human herpes virus 6 In the mid-1980s, human herpesvirus 6 (HHV-6) was suggested as a possible causal link between febrile convulsions and mesial temporal lobe epilepsy. However, although the virus is found in temporal lobe tissue at surgery for TLE, it has not been recognised as a major factor in febrile seizures or TLE.
Reelin Dispersion of the granule cell layer in the hippocampal dentate gyrus is occasionally seen in temporal lobe epilepsy and has been linked to the downregulation of reelin, a protein that normally keeps the layer compact by containing neuronal migration. It is unknown whether changes in reelin expression play a role in epilepsy. Pathophysiology Neuronal loss In TLE, there is loss of neurons in regions CA1 and CA3 of the hippocampus. There is also damage to mossy cells and inhibitory interneurons in the hilar region of the hippocampus (region IV) and to the granule cells of the dentate gyrus. In animal models, neuronal loss occurs during seizures, but in humans, neuronal loss predates the first seizure and does not necessarily continue with seizure activity. The loss of the GABA-mediated inhibitory interneurons may increase the hyperexcitability of neurons of the hippocampus, leading to recurrent seizures. According to the "dormant basket cell" hypothesis, mossy cells normally excite basket cells which, in turn, inhibit granule cells. Loss of mossy cells lowers the threshold of action potentials of the granule cells. GABA reversal In certain patients with temporal lobe epilepsy, it has been found that the subiculum could generate epileptic activity. The GABA reversal potential is depolarising in a subpopulation of the pyramidal cells due to the lack of the KCC2 co-transporter. It has been shown that it is theoretically possible to generate seizures in neural networks due to down-regulation of KCC2, consistent with chloride measurements during the transition to seizure and KCC2 blockade experiments. Granule cell dispersion in the dentate gyrus Granule cell dispersion is a type of developmental migration and a pathological change found in the TLE brain, first described in 1990. The granule cells of the dentate gyrus are tightly packed, forming a uniform, laminated layer with no monosynaptic connections.
This structure provides a filter for the excitability of neurons. In TLE, granule cells are lost, the structure is no longer closely packed, and there are changes in the orientation of dendrites. These changes may or may not be epileptogenic. For instance, if the dendrites of granule cells reconnect, it may be in a way (through the laminar planes) that allows hyperexcitability. However, not all patients have granule cell dispersion. Aberrant mossy fiber sprouting Mossy fibers are the axons of granule cells. They project into the hilus of the dentate gyrus and stratum lucidum in the CA3 region, giving inputs to both excitatory and inhibitory neurons. In the TLE brain, where granule cells are damaged or lost, the axons, the mossy fibres, sprout in order to reconnect to other granule cell dendrites. This is an example of synaptic reorganization. This was noted in human tissue in 1974 and in animal models in 1985. In TLE, the sprouting mossy fibres are larger than in the normal brain and their connections may be aberrant. Mossy fibre sprouting continues from one week to two months after injury. Aberrant mossy fibre sprouting may create excitatory feedback circuits that lead to temporal lobe seizures. This is evident in intracellular recordings. Stimulation of aberrant mossy fibre areas increases the excitatory postsynaptic potential response. However, aberrant mossy fiber sprouting may inhibit excitatory transmission by synapsing with basket cells, which are inhibitory neurons, and by releasing GABA and neuropeptide Y, which are inhibitory neurotransmitters. Also, in animal models, granule cell hyper-excitability is recorded before aberrant mossy fibre sprouting has occurred. Diagnosis The diagnosis of temporal lobe epilepsy can include the following methods: magnetic resonance imaging (MRI), CT scans, positron emission tomography (PET), EEG, and magnetoencephalography.
Imaging CT scanning is useful in emergency situations, when the suspected cause of epilepsy is a condition such as intracerebral haemorrhage or brain abscess, or when MRI is not readily available or there are contraindications to MRI, such as the presence of a cardiac pacemaker or cochlear implant in the subject's body. CT can also detect some abnormal calcifications in the brain that are characteristic of diseases such as tuberous sclerosis and Sturge-Weber syndrome. However, CT is not sensitive or specific enough compared with MRI in detecting the common causes of epilepsy, such as small tumours, vascular malformations, abnormalities of cerebral cortex development, or abnormalities in the medial part of the temporal lobe. MRI is the imaging modality of choice when assessing those with epilepsy. In newly diagnosed epilepsy, MRI can detect a brain lesion in up to 12 to 14% of cases, whereas for those with chronic epilepsy, MRI can detect a brain lesion in 80% of cases. In cases where there is a definite clinical and EEG diagnosis of idiopathic generalized epilepsy or Rolandic epilepsy, an MRI scan is not needed. Differential diagnosis Other medical conditions with similar symptoms include panic attacks, psychosis spectrum disorders, tardive dyskinesia, and occipital lobe epilepsy. Treatments Anticonvulsants Many anticonvulsant oral medications are available for the management of temporal lobe seizures. Most anticonvulsants function by decreasing the excitation of neurons, for example, by blocking fast or slow sodium channels or by modulating calcium channels; or by enhancing the inhibition of neurons, for example by potentiating the effects of inhibitory neurotransmitters like GABA. In TLE, the most commonly used older medications are phenytoin, carbamazepine, primidone, valproate, and phenobarbital.
Newer drugs, such as gabapentin, topiramate, levetiracetam, lamotrigine, pregabalin, tiagabine, lacosamide, and zonisamide, promise similar effectiveness with possibly fewer side-effects. Felbamate and vigabatrin are newer still, but can have serious adverse effects, so they are not considered first-line treatments. Up to one third of patients with medial temporal lobe epilepsy will not have adequate seizure control with medication alone. For patients with medial TLE whose seizures remain uncontrolled after trials of several types of anticonvulsants (that is, the epilepsy is intractable), surgical excision of the affected temporal lobe may be considered. Surgical interventions Epilepsy surgery has been performed since the 1860s, and doctors have observed that it is highly effective in producing freedom from seizures. However, it was not until 2001 that a scientifically sound study was carried out to examine the effectiveness of temporal lobectomy. Temporal lobe surgery can be complicated by decreased cognitive function. However, after temporal lobectomy, memory function is supported by the opposite temporal lobe and by recruitment of the frontal lobe. Cognitive rehabilitation may also help. Other treatments Where surgery is not recommended, further management options include new (including experimental) anticonvulsants and vagus nerve stimulation. The ketogenic diet is also recommended for children and some adults. Other options include brain cortex responsive neural stimulators, deep brain stimulation, stereotactic radiosurgery such as the gamma knife, and laser ablation. Effects on society The first to record and catalog the abnormal symptoms and signs of TLE was Norman Geschwind. He found a constellation of symptoms that included hypergraphia, hyperreligiosity, collapse, and pedantism, now called Geschwind syndrome. Vilayanur S.
Ramachandran explored the neural basis of the hyperreligiosity seen in TLE using the galvanic skin response (GSR), which correlates with emotional arousal, to determine whether the hyperreligiosity seen in TLE was due to an overall heightened emotional state or was specific to religious stimuli. Ramachandran presented two subjects with neutral, sexually arousing, and religious words while measuring GSR, and was able to show that patients with TLE showed enhanced emotional responses to the religious words, diminished responses to the sexually charged words, and normal responses to the neutral words. This study was presented as an abstract at a neuroscience conference and referenced in Ramachandran's book, Phantoms in the Brain, but it has never been published in the peer-reviewed scientific press. A study in 2015 reported that intrinsic religiosity and religiosity outside of organized religion were higher in patients with epilepsy than in controls. Lower education level, abnormal background EEG activity, and hippocampal sclerosis have been found to be contributing factors for religiosity in TLE. TLE has been suggested as a materialistic explanation for the revelatory experiences of prominent religious figures such as Abraham, Moses, Jesus, Mohammed, Saint Paul, Joan of Arc, Saint Teresa of Ávila, and Joseph Smith. These experiences are described (in possibly unreliable accounts) as complex interactions with their visions, but possibly (and dependent on the reliability of historical accounts, often made by acolytes) lack the stereotypy, amnestic periods, and automatisms or generalized motor events which are characteristic of TLE. Psychiatric conditions with psychotic spectrum symptoms might be a more plausible physical explanation of these experiences.
It has been suggested that Pope Pius IX's doctrine of the Immaculate Conception was influenced by his forensically diagnosed partial epilepsy. In 2016, a case history found that a male temporal lobe epileptic patient experienced a vision of God following a temporal lobe seizure while undergoing EEG monitoring. The patient reported that God had sent him to the world to "bring redemption to the people of Israel". The purported link between TLE and religiosity has inspired work by Michael Persinger and other researchers in the field of neurotheology. Others have questioned the evidence for a link between temporal lobe epilepsy and religiosity. == References ==
You are an assistant in medical learning. Your role is to clarify medical terminology, ensuring the information is both accurate and accessible.
I'd like to learn more about the medical term 'Arsenic poisoning.' Can you provide some details?
Arsenic poisoning is a medical condition that occurs due to elevated levels of arsenic in the body. If arsenic poisoning occurs over a brief period of time, symptoms may include vomiting, abdominal pain, encephalopathy, and watery diarrhea that contains blood. Long-term exposure can result in thickening of the skin, darker skin, abdominal pain, diarrhea, heart disease, numbness, and cancer. The most common reason for long-term exposure is contaminated drinking water. Groundwater most often becomes contaminated naturally; however, contamination may also occur from mining or agriculture. Arsenic may also be found in the soil and air. Recommended levels in water are less than 10–50 µg/L (10–50 parts per billion). Other routes of exposure include toxic waste sites and traditional medicines. Most cases of poisoning are accidental. Arsenic acts by changing the functioning of around 200 enzymes. Diagnosis is by testing the urine, blood, or hair. Prevention is by using water that does not contain high levels of arsenic. This may be achieved by the use of special filters or by using rainwater. There is no good evidence to support specific treatments for long-term poisoning. For acute poisonings, treating dehydration is important. Dimercaptosuccinic acid or dimercaptopropane sulfonate may be used, while dimercaprol (BAL) is not recommended. Hemodialysis may also be used. Through drinking water, more than 200 million people globally are exposed to higher-than-safe levels of arsenic. The areas most affected are Bangladesh and West Bengal. Exposure is also more common in people of low income and minorities. Acute poisoning is uncommon. The toxicity of arsenic has been described as far back as 1500 BC in the Ebers papyrus. Signs and symptoms Symptoms of arsenic poisoning begin with headaches, confusion, severe diarrhea, and drowsiness. As the poisoning develops, convulsions and changes in fingernail pigmentation called leukonychia striata (Mees' lines, or Aldrich-Mees lines) may occur.
When the poisoning becomes acute, symptoms may include diarrhea, vomiting, vomiting blood, blood in the urine, cramping muscles, hair loss, stomach pain, and more convulsions. The organs of the body that are usually affected by arsenic poisoning are the lungs, skin, kidneys, and liver. The final result of arsenic poisoning is coma and death. Arsenic is related to heart disease (hypertension-related cardiovascular disease), cancer, stroke (cerebrovascular diseases), chronic lower respiratory diseases, and diabetes. Skin effects can include skin cancer in the long term, but various other skin lesions often precede skin cancer. Other effects may include darkening and thickening of the skin. Chronic exposure to arsenic is related to vitamin A deficiency, which is related to heart disease and night blindness. The acute minimal lethal dose of arsenic in adults is estimated to be 70 to 200 mg or 1 mg/kg/day. Cancer Arsenic increases the risk of cancer. Exposure is related to skin, lung, liver, and kidney cancer, among others. Its comutagenic effects may be explained by interference with base and nucleotide excision repair, eventually through interaction with zinc finger structures. Dimethylarsinic acid, DMA(V), caused DNA single-strand breaks resulting from inhibition of repair enzymes at levels of 5 to 100 mM in human epithelial type II cells. MMA(III) and DMA(III) were also shown to be directly genotoxic by effectuating scissions in supercoiled ΦX174 DNA. Increased arsenic exposure is associated with an increased frequency of chromosomal aberrations, micronuclei, and sister-chromatid exchanges. An explanation for chromosomal aberrations is the sensitivity of the protein tubulin and the mitotic spindle to arsenic. Histological observations confirm effects on cellular integrity, shape, and locomotion. DMA(III) is able to form reactive oxygen species by reaction with molecular oxygen.
Resulting metabolites are the dimethylarsenic radical and the dimethylarsenic peroxyl radical. Both DMA(III) and DMA(V) were shown to release iron from horse spleen as well as from human liver ferritin if ascorbic acid was administered simultaneously. Thus, formation of reactive oxygen species can be promoted. Moreover, arsenic could cause oxidative stress by depleting the cell's antioxidants, especially those containing thiol groups. The accumulation of reactive oxygen species like those cited above, along with hydroxyl radicals, superoxide radicals, and hydrogen peroxide, causes aberrant gene expression at low concentrations and lesions of lipids, proteins, and DNA at higher concentrations, which eventually lead to cellular death. In a rat animal model, urine levels of 8-hydroxy-2-deoxyguanosine (as a biomarker of DNA damage by reactive oxygen species) were measured after treatment with DMA(V). In comparison to control levels, they turned out to be significantly increased. This theory is further supported by a cross-sectional study which found elevated mean serum lipid peroxides in the As-exposed individuals, which correlated with blood levels of inorganic arsenic and methylated metabolites and inversely correlated with nonprotein sulfhydryl (NPSH) levels in whole blood. Another study found an association of As levels in whole blood with the level of reactive oxidants in plasma and an inverse relationship with plasma antioxidants. A finding of the latter study indicates that methylation might in fact be a detoxification pathway with regard to oxidative stress: the results showed that the lower the As methylation capacity, the lower the level of plasma antioxidant capacity. As reviewed by Kitchin (2001), the oxidative stress theory provides an explanation for the preferred tumor sites connected with arsenic exposure.
Considering that a high partial pressure of oxygen is present in the lungs and that DMA(III) is excreted in gaseous state via the lungs, this seems to be a plausible mechanism for their special vulnerability. The fact that DMA is produced by methylation in the liver, excreted via the kidneys, and later stored in the bladder accounts for the other tumor localizations. Regarding DNA methylation, some studies suggest interaction of As with methyltransferases, which leads to an inactivation of tumor suppressor genes through hypermethylation; others state that hypomethylation might occur due to a lack of SAM, resulting in aberrant gene activation. An experiment by Zhong et al. (2001) with arsenite-exposed human lung A549, kidney UOK123, UOK109, and UOK121 cells isolated eight different DNA fragments by methylation-sensitive arbitrarily primed polymerase chain reactions. It turned out that six of the fragments were hypermethylated and two were hypomethylated. Higher levels of DNA methyltransferase mRNA and enzyme activity were found. Kitchin (2001) proposed a model of altered growth factors which lead to cell proliferation and thus to carcinogenesis. From observations, it is known that chronic low-dose arsenic poisoning can lead to increased tolerance to its acute toxicity. MRP1-overexpressing lung tumor GLC4/Sb30 cells poorly accumulate arsenite and arsenate. This is mediated through MRP-1-dependent efflux. The efflux requires glutathione, but no arsenic-glutathione complex formation. Although many mechanisms have been proposed, no definite model can be given for the mechanisms of chronic arsenic poisoning. The prevailing events of toxicity and carcinogenicity might be quite tissue-specific. Current consensus on the mode of carcinogenesis is that it acts primarily as a tumor promoter. Its co-carcinogenicity has been demonstrated in several models.
However, the finding of several studies that chronically arsenic-exposed Andean populations (among those most extremely exposed to UV light) do not develop skin cancer with chronic arsenic exposure is puzzling. Causes Organic arsenic is less harmful than inorganic arsenic. Seafood is a common source of the less toxic organic arsenic in the form of arsenobetaine. The arsenic reported in 2012 in fruit juice and rice by Consumer Reports was primarily inorganic arsenic. Because of its high toxicity, arsenic is seldom used in the Western world, although in Asia it is still a popular pesticide. Arsenic is mainly encountered occupationally in the smelting of zinc and copper ores. Drinking water Arsenic is naturally found in groundwater and presents serious health threats when present in high amounts. Chronic arsenic poisoning results from drinking contaminated well water over a long period of time. Many aquifers contain high concentrations of arsenic salts. The World Health Organization (WHO) Guidelines for drinking water quality established in 1993 a provisional guideline value of 0.01 mg/L (10 parts per billion) for maximum contaminant levels of arsenic in drinking water. This recommendation was established based on the limit of detection of most laboratories' testing equipment at the time of publication of the WHO water quality guidelines. More recent findings show that consumption of water with levels as low as 0.00017 mg/L (0.17 parts per billion) over long periods of time can lead to arsenicosis. Based on a 1988 study in China, the US Environmental Protection Agency estimated that lifetime exposure to arsenic in drinking water at concentrations of 0.0017 mg/L (1.7 ppb), 0.00017 mg/L, and 0.000017 mg/L is associated with a lifetime skin cancer risk of 1 in 10,000, 1 in 100,000, and 1 in 1,000,000, respectively.
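The risk figures above scale linearly with concentration. The short sketch below (illustrative only; it assumes the linear no-threshold extrapolation implied by those figures, anchored at 1.7 ppb corresponding to a 1-in-10,000 lifetime risk) makes the arithmetic explicit:

```python
# Linear extrapolation of lifetime skin cancer risk from arsenic in
# drinking water, anchored at the EPA-cited point: 1.7 ppb -> 1 in 10,000.
# Illustrative only; real risk assessment is more involved.

RISK_PER_PPB = 1e-4 / 1.7  # lifetime risk per ppb under the linear model

def lifetime_skin_cancer_risk(concentration_ppb: float) -> float:
    """Lifetime skin cancer risk for a given water concentration in ppb."""
    return concentration_ppb * RISK_PER_PPB

for ppb in (1.7, 0.17, 0.017, 10.0):
    print(f"{ppb:>6} ppb -> lifetime risk {lifetime_skin_cancer_risk(ppb):.1e}")
```

Under this model, the 10 ppb guideline value works out to about 5.9 × 10⁻⁴, i.e. roughly 6 in 10,000.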
WHO asserts that a water level of 0.01 mg/L (10 ppb) poses a 6 in 10,000 chance of lifetime skin cancer risk and contends that this level of risk is acceptable. One of the worst incidents of arsenic poisoning via well water occurred in Bangladesh, which the World Health Organization called the "largest mass poisoning of a population in history" and which is recognized as a major public health concern. The contamination in the Ganga-Brahmaputra fluvial plains in India and the Padma-Meghna fluvial plains in Bangladesh demonstrated adverse impacts on human health. Mining techniques such as hydraulic fracturing may mobilize arsenic in groundwater and aquifers due to enhanced methane transport and resulting changes in redox conditions, and may inject fluid containing additional arsenic. Groundwater In the US, the U.S. Geological Survey estimates that the median groundwater concentration is 1 μg/L or less, although some groundwater aquifers, particularly in the western United States, can contain much higher levels. For example, median levels in Nevada were about 8 μg/L, but levels of naturally occurring arsenic as high as 1000 μg/L have been measured in drinking water in the United States. Geothermally active zones occur at hotspots where mantle-derived plumes ascend, such as in Hawaii and Yellowstone National Park, US. Arsenic is an incompatible element (it does not fit easily into the lattices of common rock-forming minerals). Concentrations of arsenic are high mainly in geothermal waters that leach continental rocks. Arsenic in hot geothermal fluids was shown to be derived mainly from leaching of host rocks at Yellowstone National Park, in Wyoming, US, rather than from magmas. In the western US, there are As (arsenic) inputs to groundwater and surface water from geothermal fluids in and near Yellowstone National Park and in other western mineralized areas.
Groundwater associated with volcanics in California contains As at concentrations ranging up to 48,000 μg/L, with As-bearing sulfide minerals as the main source. Geothermal waters on Dominica in the Lesser Antilles also contain concentrations of As >50 μg/L. In general, because arsenic is an incompatible element, it accumulates in differentiated magmas. Weathering of pegmatite veins in Connecticut, US, was thought to contribute As to groundwater. In Pennsylvania, As concentrations in water discharging from abandoned anthracite mines ranged from <0.03 to 15 μg/L, and from abandoned bituminous mines from 0.10 to 64 μg/L, with 10% of samples exceeding the United States Environmental Protection Agency MCL of 10 μg/L. In Wisconsin, As concentrations of water in sandstone and dolomite aquifers were as high as 100 μg/L. Oxidation of pyrite hosted by these formations was the likely source of the As. In the Piedmont of Pennsylvania and New Jersey, groundwater in Mesozoic age aquifers contains elevated levels of As: domestic well waters from Pennsylvania contained up to 65 μg/L, whereas in New Jersey the highest concentration measured recently was 215 μg/L. Food In the United States, Schoof et al. estimated an average adult intake of 3.2 μg/day, with a range of 1–20 μg/day. Estimates for children were similar. Food also contains many organic arsenic compounds. The key organic arsenic compounds that can be routinely found in food (depending on food type) include monomethylarsonic acid (MMAsV), dimethylarsinic acid (DMAsV), arsenobetaine, arsenocholine, arsenosugars, and arsenolipids. DMAsV or MMAsV can be found in various types of fin fish, crabs, and mollusks, but often at very low levels. Arsenobetaine is the major form of arsenic in marine animals, and, by all accounts, it is considered a compound that is nontoxic under conditions of human consumption.
Arsenocholine, which is mainly found in shrimp, is chemically similar to arsenobetaine and is considered to be "essentially nontoxic". Although arsenobetaine is little studied, available information indicates it is not mutagenic, immunotoxic, or embryotoxic. Arsenosugars and arsenolipids have recently been identified. Exposure to these compounds and the toxicological implications are currently being studied. Arsenosugars are detected mainly in seaweed but are also found to a lesser extent in marine mollusks. Studies addressing arsenosugar toxicity, however, have largely been limited to in vitro studies, which show that arsenosugars are significantly less toxic than both inorganic arsenic and trivalent methylated arsenic metabolites. It has been found that rice is particularly susceptible to accumulation of arsenic from soil. Rice grown in the United States has an average 260 ppb of arsenic, according to one study, but U.S. arsenic intake remains far below World Health Organization-recommended limits. China has set a standard for arsenic limits in food (150 ppb), as levels in rice exceed those in water. Arsenic is a ubiquitous element present in American drinking water. In the United States, levels of arsenic that are above natural levels, but still well below the danger levels set in federal safety standards, have been detected in commercially raised chickens. The source of the arsenic appears to be the feed additives roxarsone and nitarsone, which are used to control the parasitic infection coccidiosis as well as to increase weight and skin coloring of the poultry. High levels of inorganic arsenic were reportedly found in 83 California wines in 2015. Soil Exposure to arsenic in soil can occur through multiple pathways. Compared with the intake of naturally occurring arsenic from water and the diet, soil arsenic constitutes only a small fraction of intake.
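To see why drinking water dominates total intake at guideline-level concentrations, a rough back-of-the-envelope comparison can be coded. This is a sketch under stated assumptions: a drinking-water consumption of 2 L/day is assumed (not from the text), water is taken at the 10 µg/L WHO guideline value, and the dietary figure is the Schoof et al. average of 3.2 µg/day cited above; soil and air are omitted, since the text describes them as minor contributors.

```python
# Back-of-the-envelope daily arsenic intake comparison (illustrative only).
# Assumptions: 2 L/day of water at the 10 µg/L WHO guideline (assumed),
# plus the Schoof et al. average dietary intake of 3.2 µg/day.
sources_ug_per_day = {
    "water (2 L/day at 10 µg/L)": 2 * 10.0,
    "food (average US diet)": 3.2,
}

total = sum(sources_ug_per_day.values())
for source, dose in sources_ug_per_day.items():
    print(f"{source}: {dose:.1f} µg/day ({100 * dose / total:.0f}% of total)")
print(f"total: {total:.1f} µg/day")
```

Under these assumptions, water at the guideline limit would account for roughly 85% of daily intake, which is why contaminated groundwater is the dominant exposure route.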
Air The European Commission (2000) reports that levels of arsenic in air range from 0–1 ng/m3 in remote areas, 0.2–1.5 ng/m3 in rural areas, and 0.5–3 ng/m3 in urban areas, up to about 50 ng/m3 in the vicinity of industrial sites. Based on these data, the European Commission (2000) estimated that, in relation to food, cigarette smoking, water, and soil, air contributes less than 1% of total arsenic exposure. Pesticides The use of lead arsenate pesticides has been effectively eliminated for over 50 years. However, because of the pesticide's environmental persistence, it is estimated that millions of acres of land are still contaminated with lead arsenate residues. This presents a potentially significant public health concern in some areas of the United States (e.g., New Jersey, Washington, and Wisconsin), where large areas of land used historically as orchards have been converted into residential developments. Some modern uses of arsenic-based pesticides still exist. Chromated copper arsenate has been registered for use in the United States since the 1940s as a wood preservative, protecting wood from insects and microbial agents. In 2003, manufacturers of chromated copper arsenate instituted a voluntary recall of residential uses of wood treated with the chemical. The Environmental Protection Agency's 2008 final report stated that chromated copper arsenate is still approved for use in nonresidential applications, such as in marine facilities (pilings and structures), utility poles, and sand highway structures. Copper smelting Exposure studies in the copper smelting industry are much more extensive and have established definitive links between arsenic, a by-product of copper smelting, and lung cancer via inhalation. Dermal and neurological effects were also increased in some of these studies.
Although occupational controls became more stringent over time and workers were exposed to reduced arsenic concentrations, the arsenic exposures measured in these studies ranged from about 0.05 to 0.3 mg/m3 and are significantly higher than airborne environmental exposures to arsenic (which range from 0 to 0.000003 mg/m3). Pathophysiology Arsenic interferes with cellular longevity by allosteric inhibition of an essential metabolic enzyme, the pyruvate dehydrogenase complex, which catalyzes the oxidation of pyruvate to acetyl-CoA by NAD+. With the enzyme inhibited, the energy system of the cell is disrupted, resulting in cellular apoptosis. Biochemically, arsenic prevents use of thiamine, resulting in a clinical picture resembling thiamine deficiency. Poisoning with arsenic can raise lactate levels and lead to lactic acidosis. Low potassium levels in the cells increase the risk of experiencing a life-threatening heart rhythm problem from arsenic trioxide. Arsenic in cells clearly stimulates the production of hydrogen peroxide (H2O2). When the H2O2 reacts with certain metals such as iron or manganese, it produces a highly reactive hydroxyl radical. Inorganic arsenic trioxide found in groundwater particularly affects voltage-gated potassium channels, disrupting cellular electrolytic function and resulting in neurological disturbances, cardiovascular episodes such as prolonged QT interval, neutropenia, high blood pressure, central nervous system dysfunction, anemia, and death. Arsenic exposure plays a key role in the pathogenesis of vascular endothelial dysfunction, as it inactivates endothelial nitric oxide synthase, leading to a reduction in the generation and bioavailability of nitric oxide. In addition, chronic arsenic exposure induces high oxidative stress, which may affect the structure and function of the cardiovascular system. Further, arsenic exposure has been noted to induce atherosclerosis by increasing platelet aggregation and reducing fibrinolysis.
Moreover, arsenic exposure may cause arrhythmia by increasing the QT interval and accelerating cellular calcium overload. Chronic exposure to arsenic upregulates the expression of tumor necrosis factor-α, interleukin-1, vascular cell adhesion molecule, and vascular endothelial growth factor to induce cardiovascular pathogenesis. Arsenic has also been shown to induce cardiac hypertrophy by activating certain transcription factors involved in pathologically remodeling the heart. Tissue culture studies have shown that arsenic compounds block both IKr and IKs channels and, at the same time, activate IK-ATP channels. Arsenic compounds also disrupt ATP production through several mechanisms. At the level of the citric acid cycle, arsenic inhibits pyruvate dehydrogenase, and by competing with phosphate it uncouples oxidative phosphorylation, thus inhibiting energy-linked reduction of NAD+, mitochondrial respiration, and ATP synthesis. Hydrogen peroxide production is also increased, which might form reactive oxygen species and oxidative stress. These metabolic interferences lead to death from multi-system organ failure, probably from necrotic cell death, not apoptosis. A post mortem reveals brick-red colored mucosa, due to severe hemorrhage. Although arsenic causes toxicity, it can also play a protective role. Mechanism Arsenite inhibits not only the formation of acetyl-CoA but also the enzyme succinic dehydrogenase. Arsenate can replace phosphate in many reactions. It is able to form Glc-6-arsenate in vitro; therefore it has been argued that hexokinase could be inhibited. (Eventually this may be a mechanism leading to muscle weakness in chronic arsenic poisoning.) In the glyceraldehyde 3-phosphate dehydrogenase reaction, arsenate attacks the enzyme-bound thioester. The formed 1-arseno-3-phosphoglycerate is unstable and hydrolyzes spontaneously. Thus, ATP formation in glycolysis is inhibited while bypassing the phosphoglycerate kinase reaction.
(Moreover, the formation of 2,3-bisphosphoglycerate in erythrocytes might be affected, followed by a higher oxygen affinity of hemoglobin and subsequently enhanced cyanosis.) As shown by Gresser (1981), submitochondrial particles synthesize adenosine-5′-diphosphate-arsenate from ADP and arsenate in the presence of succinate. Thus, by a variety of mechanisms, arsenate leads to an impairment of cell respiration and subsequently diminished ATP formation. This is consistent with observed ATP depletion of exposed cells and histopathological findings of mitochondrial and cell swelling, glycogen depletion in liver cells, and fatty change in liver, heart, and kidney. Experiments demonstrated enhanced arterial thrombosis in a rat animal model, along with elevations of serotonin levels, thromboxane A2, and adhesion proteins in platelets, while human platelets showed similar responses. The effect on vascular endothelium may eventually be mediated by the arsenic-induced formation of nitric oxide. It was demonstrated that +3 As concentrations substantially lower than those required for inhibition of the lysosomal protease cathepsin L in the B cell line TA3 were sufficient to trigger apoptosis in the same B cell line; the latter could be a mechanism mediating immunosuppressive effects. Kinetics The two forms of inorganic arsenic, reduced (trivalent As(III)) and oxidized (pentavalent As(V)), can be absorbed and accumulated in tissues and body fluids. In the liver, the metabolism of arsenic involves enzymatic and non-enzymatic methylation; the most frequently excreted metabolite (≥ 90%) in the urine of mammals is dimethylarsinic acid, or cacodylic acid, DMA(V). Dimethylarsinic acid is also known as Agent Blue and was used as a herbicide in the American war in Vietnam. In humans, inorganic arsenic is reduced nonenzymatically from pentoxide to trioxide using glutathione, or the reduction is mediated by enzymes.
Reduction of arsenic pentoxide to arsenic trioxide increases its toxicity and bioavailability. Methylation occurs through methyltransferase enzymes; S-adenosylmethionine (SAM) may serve as the methyl donor. Various pathways are used, the principal route being dependent on the current environment of the cell. Resulting metabolites are monomethylarsonous acid, MMA(III), and dimethylarsinous acid, DMA(III). Methylation had been regarded as a detoxification process, but reduction from +5 As to +3 As may be considered a bioactivation instead. Another suggestion is that methylation might be a detoxification if "As[III] intermediates are not permitted to accumulate", because the pentavalent organoarsenics have a lower affinity for thiol groups than inorganic pentavalent arsenics. Gebel (2002) stated that methylation is a detoxification through accelerated excretion. With regard to carcinogenicity, it has been suggested that methylation should be regarded as a toxification. Arsenic, especially +3 As, binds to single sulfhydryl groups, but with higher affinity to vicinal ones, and thus reacts with a variety of proteins and inhibits their activity. It was also proposed that binding of arsenite at nonessential sites might contribute to detoxification. Arsenite inhibits members of the disulfide oxidoreductase family like glutathione reductase and thioredoxin reductase. The remaining unbound arsenic (≤ 10%) accumulates in cells, which over time may lead to skin, bladder, kidney, liver, lung, and prostate cancers. Other forms of arsenic toxicity in humans have been observed in blood, bone marrow, cardiac, central nervous system, gastrointestinal, gonadal, kidney, liver, pancreatic, and skin tissues. Heat shock response Another aspect is the similarity of arsenic effects to the heat shock response.
Short-term arsenic exposure has effects on signal transduction, inducing heat shock proteins with masses of 27, 60, 70, 72, 90, and 110 kDa as well as metallothionein, ubiquitin, mitogen-activated protein [MAP] kinases, extracellular regulated kinase [ERK], c-Jun terminal kinases [JNK], and p38. Via JNK and p38 it activates c-fos, c-jun, and egr-1, which are usually activated by growth factors and cytokines. The effects are largely dependent on the dosing regime and may even be reversed. As shown by some experiments reviewed by Del Razo (2001), reactive oxygen species induced by low levels of inorganic arsenic increase the transcription and the activity of activator protein 1 (AP-1) and nuclear factor-κB (NF-κB) (possibly enhanced by elevated MAPK levels), which results in c-fos/c-jun activation and over-secretion of pro-inflammatory and growth-promoting cytokines stimulating cell proliferation. Germolec et al. (1996) found increased cytokine expression and cell proliferation in skin biopsies from individuals chronically exposed to arsenic-contaminated drinking water. Increased AP-1 and NF-κB evidently also result in an up-regulation of mdm2 protein, which decreases p53 protein levels. Thus, taking into account p53's function, a lack of it could cause a faster accumulation of mutations contributing to carcinogenesis. However, high levels of inorganic arsenic inhibit NF-κB activation and cell proliferation. An experiment by Hu et al. (2002) demonstrated increased binding activity of AP-1 and NF-κB after acute (24 h) exposure to +3 sodium arsenite, whereas long-term exposure (10–12 weeks) yielded the opposite result. The authors conclude that the former may be interpreted as a defense response while the latter could lead to carcinogenesis. As the contradicting findings and connected mechanistic hypotheses indicate, there is a difference between the acute and chronic effects of arsenic on signal transduction which is not yet clearly understood.
Oxidative stress Studies have demonstrated that the oxidative stress generated by arsenic may disrupt the signal transduction pathways of the nuclear transcriptional factors PPARs, AP-1, and NF-κB, as well as the pro-inflammatory cytokines IL-8 and TNF-α. The interference of oxidative stress with signal transduction pathways may affect physiological processes associated with cell growth, metabolic syndrome X, glucose homeostasis, lipid metabolism, obesity, insulin resistance, inflammation, and type 2 diabetes. Recent scientific evidence has elucidated the physiological roles of the PPARs in the ω-hydroxylation of fatty acids and the inhibition of pro-inflammatory transcription factors (NF-κB and AP-1), pro-inflammatory cytokines (IL-1, -6, -8, -12, and TNF-α), cell adhesion molecules (ICAM-1 and VCAM-1), inducible nitric oxide synthase, proinflammatory nitric oxide (NO), and anti-apoptotic factors. Epidemiological studies have suggested a correlation between chronic consumption of drinking water contaminated with arsenic and the incidence of type 2 diabetes. The human liver after exposure to therapeutic drugs may exhibit hepatic non-cirrhotic portal hypertension, fibrosis, and cirrhosis. However, the literature provides insufficient scientific evidence to show cause and effect between arsenic and the onset of type 2 diabetes mellitus. Diagnosis Arsenic may be measured in blood or urine to monitor excessive environmental or occupational exposure, confirm a diagnosis of poisoning in hospitalized victims, or assist in the forensic investigation in a case of fatal overdosage. Some analytical techniques are capable of distinguishing organic from inorganic forms of the element. Organic arsenic compounds tend to be eliminated in the urine in unchanged form, while inorganic forms are largely converted to organic arsenic compounds in the body prior to urinary excretion. The current biological exposure index for U.S.
workers of 35 µg/L total urinary arsenic may easily be exceeded by a healthy person eating a seafood meal. Tests are available to diagnose poisoning by measuring arsenic in blood, urine, hair, and fingernails. The urine test is the most reliable test for arsenic exposure within the last few days. Urine testing needs to be done within 24–48 hours for an accurate analysis of an acute exposure. Tests on hair and fingernails can measure exposure to high levels of arsenic over the past 6–12 months. These tests can determine if one has been exposed to above-average levels of arsenic. They cannot predict, however, whether the arsenic levels in the body will affect health. Chronic arsenic exposure can remain in the body systems for a longer period of time than a short-term or more isolated exposure, and can be detected for a longer time frame after the introduction of the arsenic, which is important in trying to determine the source of the exposure. Hair is a potential bioindicator for arsenic exposure due to its ability to store trace elements from blood. Incorporated elements maintain their position during the growth of hair. Thus, for a temporal estimation of exposure, an assay of hair composition needs to be carried out on a single hair, which is not possible with older techniques requiring homogenization and dissolution of several strands of hair. This type of biomonitoring has been achieved with newer microanalytical techniques such as synchrotron-radiation-based X-ray fluorescence spectroscopy and micro-particle-induced X-ray emission. The highly focused and intense beams can probe small spots on biological samples, allowing micro-level analysis along with chemical speciation. In one study, this method was used to follow arsenic levels before, during, and after treatment with arsenious oxide in patients with acute promyelocytic leukemia. 
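The testing windows described above lend themselves to a small decision sketch. The function and cutoffs below are illustrative only, drawn directly from the timeframes quoted in the text (urine for the last few days, hair and nails for roughly 6–12 months); this is not a clinical protocol.

```python
def suggest_specimen(days_since_exposure: float) -> str:
    """Map exposure recency to the specimen types described in the text.

    Illustrative sketch, not clinical guidance:
    - Urine: most reliable for exposure within the last few days; an acute
      exposure should be sampled within 24-48 hours.
    - Hair/fingernails: reflect exposure over roughly the past 6-12 months.
    """
    if days_since_exposure <= 2:
        return "urine (sample within 24-48 h of an acute exposure)"
    elif days_since_exposure <= 365:
        return "hair or fingernails (integrate roughly 6-12 months)"
    return "likely outside the detection window of routine specimen tests"
```

Note that, as the text stresses, any such test reports exposure, not health outcome, and chronic exposure widens the detectable window.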
Treatment Chelation Dimercaprol and dimercaptosuccinic acid are chelating agents that sequester the arsenic away from blood proteins and are used in treating acute arsenic poisoning. The most important side effect is hypertension. Dimercaprol is considerably more toxic than succimer. Dimercaptosuccinic acid monoesters, e.g. MiADMSA, are promising antidotes for arsenic poisoning. Nutrition Supplemental potassium decreases the risk of experiencing a life-threatening heart rhythm problem from arsenic trioxide. History Beginning in about 3000 BC, arsenic was mined and added to copper in the alloying of bronze, but the adverse health effects of working with arsenic led to it being abandoned when a viable alternative, tin, was discovered. In addition to its presence as a poison, for centuries arsenic was used medicinally. It has been used for over 2,400 years as a part of traditional Chinese medicine. In the Western world, arsenic compounds, such as salvarsan, were used extensively to treat syphilis before penicillin was introduced. It was eventually replaced as a therapeutic agent by sulfa drugs and then by other antibiotics. Arsenic was also an ingredient in many tonics (or "patent medicines"). In addition, during the Elizabethan era, some women used a mixture of vinegar, chalk, and arsenic applied topically to whiten their skin. This use of arsenic was intended to prevent aging and creasing of the skin, but some arsenic was inevitably absorbed into the bloodstream. During the Victorian era (late 19th century) in the United States, newspapers advertised "arsenic complexion wafers" that promised to remove facial blemishes such as moles and pimples. Some pigments, most notably the popular Emerald Green (known also under several other names), were based on arsenic compounds. Overexposure to these pigments was a frequent cause of accidental poisoning of artists and craftsmen. 
Arsenic became a favored method of murder in the Middle Ages and Renaissance, allegedly particularly among the ruling classes of Italy. Because the symptoms are similar to those of cholera, which was common at the time, arsenic poisoning often went undetected. By the 19th century, it had acquired the nickname "inheritance powder," perhaps because impatient heirs were known or suspected to use it to ensure or accelerate their inheritances. It was also a common murder technique in the 19th century in domestic violence situations, such as the case of Rebecca Copin, who attempted to poison her husband by "putting arsenic in his coffee". In post-World War I Hungary, arsenic extracted by boiling fly paper was used in an estimated 300 murders by the Angel Makers of Nagyrév. In imperial China, arsenic trioxide and sulfides were used in murder, as well as for capital punishment of members of the royal family or aristocracy. Forensic studies have determined that the Guangxu Emperor (d. 1908) was murdered by arsenic, most likely ordered by the Empress Dowager Cixi or Generalissimo Yuan Shikai. Likewise, in ancient Korea, and particularly in the Joseon Dynasty, arsenic-sulfur compounds were used as a major ingredient of sayak (사약; 賜藥), a poison cocktail used in the capital punishment of high-profile political figures and members of the royal family. Due to the social and political prominence of the condemned, many of these events were well documented, often in the Annals of the Joseon Dynasty; they are sometimes portrayed in historical television miniseries because of their dramatic nature. Legislation In the U.S. in 1975, under the authority of the Safe Drinking Water Act, the U.S. 
Environmental Protection Agency determined the National Interim Primary Drinking Water Regulation level for arsenic (an inorganic contaminant – IOC) to be 0.05 mg/L (50 parts per billion – ppb). Over the years, many studies reported dose-dependent effects of arsenic in drinking water on skin cancer. In order to prevent new cases and deaths from cancerous and non-cancerous diseases, the Safe Drinking Water Act directed the Environmental Protection Agency to revise arsenic levels and specify the maximum contaminant level (MCL). MCLs are set as close to the health goals as possible, considering cost, benefits, and the ability of public water systems to detect and remove contaminants using suitable treatment technologies. In 2001, the Environmental Protection Agency adopted a lower MCL of 0.01 mg/L (10 ppb) for arsenic in drinking water that applies to both community water systems and non-transient non-community water systems. In some other countries, when developing national drinking water standards based on the guideline values, it is necessary to take account of a variety of geographical, socio-economic, dietary and other conditions affecting potential exposure. These factors lead to national standards that differ appreciably from the guideline values. That is the case in countries such as India and Bangladesh, where the permissible limit of arsenic in the absence of an alternative source of water is 0.05 mg/L. Challenges to implementation Arsenic removal technologies are traditional treatment processes which have been tailored to improve removal of arsenic from drinking water. Although some of the removal processes, such as precipitative processes, adsorption processes, ion exchange processes, and separation (membrane) processes, may be technically feasible, their cost may be prohibitive. For underdeveloped countries, the challenge is finding the means to fund such technologies. 
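The regulatory limits above are quoted in both mg/L and ppb; for dilute aqueous solutions the conversion is simply a factor of 1,000 (1 mg/L ≈ 1 ppm = 1,000 ppb). A minimal sketch of the arithmetic, with the 1975 and 2001 U.S. limits from the text as constants (the function names are illustrative, not part of any regulatory API):

```python
US_MCL_PPB = 10.0        # EPA arsenic MCL since 2001 (0.01 mg/L)
INTERIM_1975_PPB = 50.0  # 1975 interim standard (0.05 mg/L)

def mg_per_l_to_ppb(conc_mg_l: float) -> float:
    # For dilute aqueous solutions, 1 mg/L is about 1 part per million,
    # i.e. 1,000 parts per billion.
    return conc_mg_l * 1000.0

def exceeds_limit(conc_mg_l: float, limit_ppb: float = US_MCL_PPB) -> bool:
    # True if a water sample concentration exceeds the given limit.
    return mg_per_l_to_ppb(conc_mg_l) > limit_ppb
```

For example, a sample at 0.02 mg/L (20 ppb) would have passed the 1975 interim standard but exceeds the current 10 ppb MCL.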
The Environmental Protection Agency, for example, has estimated the total national annualized cost of treatment, monitoring, reporting, record keeping, and administration to enforce the MCL rule to be approximately $181 million. Most of the cost is due to the installation and operation of the treatment technologies needed to reduce arsenic in public water systems. Pregnancy Arsenic exposure through groundwater is highly concerning throughout the perinatal period. Pregnant women are a high-risk population because not only are the mothers at risk for adverse outcomes, but in-utero exposure also poses health risks to the infant. There is a dose-dependent relationship between maternal exposure to arsenic and infant mortality, meaning that infants born to women exposed to higher concentrations, or exposed for longer periods of time, have a higher mortality rate. Studies have shown that ingesting arsenic through groundwater during pregnancy poses dangers to the mother including, but not limited to, abdominal pain, vomiting, diarrhea, skin pigmentation changes, and cancer. Research has also demonstrated that arsenic exposure causes low birth weight, low birth size, infant mortality, and a variety of other outcomes in infants. Some of these effects, like lower birth weight and size, may be due to the effects of arsenic on maternal weight gain during pregnancy. See also 2007 Peruvian meteorite event – a meteorite impact believed to have caused arsenic poisoning Arsenic contamination of groundwater Mary Ann Cotton – serial arsenic poisoner Felicia Dorothea Kate Dover – arsenic poisoner James Marsh (chemist) – invented the Marsh test for detecting arsenic Toroku arsenic disease References Further reading Atlas (color) of Chronic Arsenic Poisoning (2010), Nobuyuki Hotta, Ichiro Kikuchi, Yasuko Kojo, Sakuragaoka Hospital, Kumamoto, ISBN 978-4-9905256-0-6. 
A 2011 article in the journal Social Medicine discusses community interventions to combat arsenic poisoning: Beyond medical treatment, arsenic poisoning in rural Bangladesh. D. J. Vaughan and D. A. Polya (2013): Arsenic – the great poisoner revisited. Elements 9, 315–316. PDF (update on the world situation in 2013) External links Arsenic poisoning at Curlie
You are a medical educator. Your role is to provide detailed and clear explanations for medical terms, suitable for both professionals and laypersons.
Can you break down the meaning of the medical term 'Daunorubicin' for me?
Daunorubicin, also known as daunomycin, is a chemotherapy medication used to treat cancer. Specifically, it is used for acute myeloid leukemia (AML), acute lymphoblastic leukemia (ALL), chronic myelogenous leukemia (CML), and Kaposi's sarcoma. It is administered by injection into a vein. A liposomal formulation known as liposomal daunorubicin also exists. Common side effects include hair loss, vomiting, bone marrow suppression, and inflammation of the inside of the mouth. Other severe side effects include heart disease and tissue death at the site of injection. Use in pregnancy may harm the fetus. Daunorubicin is in the anthracycline family of medications. It works in part by blocking the function of topoisomerase II. Daunorubicin was approved for medical use in the United States in 1979. It is on the World Health Organization's List of Essential Medicines. It was originally isolated from bacteria of the Streptomyces type. Medical uses It slows or stops the growth of cancer cells in the body. Treatment is usually performed together with other chemotherapy drugs (such as cytarabine), and its administration depends on the type of tumor and the degree of response. In addition to its major use in treating AML, daunorubicin is also used to treat neuroblastoma. Daunorubicin has been used with other chemotherapy agents to treat the blastic phase of chronic myelogenous leukemia. Daunorubicin is also used as the starting material for the semi-synthetic manufacturing of doxorubicin, epirubicin and idarubicin. Mechanism of action Similar to doxorubicin, daunorubicin interacts with DNA by intercalation and inhibition of macromolecular biosynthesis. This inhibits the progression of the enzyme topoisomerase II, which relaxes supercoils in DNA for transcription. Daunorubicin stabilizes the topoisomerase II complex after it has broken the DNA chain for replication, preventing the DNA double helix from being resealed and thereby stopping the process of replication. 
On binding to DNA, daunomycin intercalates, with its daunosamine residue directed toward the minor groove. It has the highest preference for two adjacent G/C base pairs flanked on the 5′ side by an A/T base pair. Crystallography shows that daunomycin induces a local unwinding angle of 8°, and other conformational disturbances of adjacent and second-neighbour base pairs. It can also induce histone eviction from chromatin upon intercalation. History In the 1950s, an Italian research company, Farmitalia Research Laboratories, began an organized effort to isolate anticancer compounds from soil-based microbes. A soil sample was isolated from the area surrounding the Castel del Monte, a 13th-century castle in Apulia. A new strain of Streptomyces peucetius which produced a red pigment was isolated, and an antibiotic produced from this bacterium was found to have good activity against murine tumors. Since a group of French researchers discovered the same compound at about the same time, the two teams named the compound daunorubicin, combining the name of the Dauni, a pre-Roman tribe that occupied the area of Italy where the compound was isolated, with the French word for ruby, rubis, describing the color. Clinical trials began in the 1960s, and the drug saw success in treating acute leukemia and lymphoma. However, by 1967, it was recognized that daunorubicin could produce fatal cardiac toxicity. In 2015–16, a team at Ohio State University "showed that, by carefully manipulating strands of viral DNA, an origami structure with complex folds can be created in just 10 minutes. Incredibly, these structures are only 100 nanometers across – that’s 1,000 times smaller than the width of a human hair. Small volumes of daunorubicin can be wrapped up in these minuscule pods, which can then be released into a leukemia cell-filled environment." Route of administration Daunorubicin should only be administered as a rapid intravenous infusion. 
It should not be administered intramuscularly or subcutaneously, since it may cause extensive tissue necrosis. It should also never be administered intrathecally (into the spinal canal), as this will cause extensive damage to the nervous system and may lead to death. Daunorubicin has been used intravitreally (inside the eye) for the purposes of preventing proliferative vitreoretinopathy, a common complication following retinal detachment surgery, but has not been found to be effective and is not used for any other ophthalmic purposes at this time. See also Doxorubicin Idarubicin References External links "Daunorubicin". Drug Information Portal. U.S. National Library of Medicine.
You are a facilitator of medical knowledge. Provide thorough and accessible explanations of medical terms, catering to both specialists and non-specialists.
I'm not familiar with the medical term 'Periventricular leukomalacia.' Could you provide some insights?
Periventricular leukomalacia (PVL) is a form of white-matter brain injury, characterized by the necrosis (most often coagulation necrosis) of white matter near the lateral ventricles. It can affect newborns and (less commonly) fetuses; premature infants are at the greatest risk of neonatal encephalopathy, which may lead to this condition. Affected individuals generally exhibit motor control problems or other developmental delays, and they often develop cerebral palsy or epilepsy later in life. The white matter in preterm-born children is particularly vulnerable during the third trimester of pregnancy, when white matter development takes place and the myelination process starts, around 30 weeks of gestational age. This pathology of the brain was described under various names ("encephalodystrophy", "ischemic necrosis", "periventricular infarction", "coagulation necrosis", "leukomalacia", "softening of the brain", "infarct of periventricular white matter", "necrosis of white matter", "diffuse symmetrical periventricular leukoencephalopathy"), most often by German scientists, but the term that achieved worldwide dissemination was periventricular leukomalacia, introduced in 1962 by B. A. Banker and J. C. Larroche. The term can be misleading, as there is no softening of the tissue in PVL. V. V. Vlasyuk and V. P. Tumanov in 1985 published the world's first monograph devoted to PVL. Vlasyuk (1981) first revealed the high incidence of optic radiation lesions and demonstrated that PVL is a persistent process in which new necrosis can join old foci, so that foci of PVL may be at different stages of development. In the process of morphogenesis, foci of PVL pass through three stages: 1) necrosis, 2) resorption, and 3) formation of gliosis scars or cysts. Cysts occur in large and confluent foci of PVL with mixed necrosis (colliquation in the center and a coagulation rim at the periphery). 
Around the foci there is generally a defined area of other lesions of the brain's white matter: death of pro-oligodendrocytes, proliferation of microgliocytes and astrocytes, swelling, bleeding, loss of capillaries, and others (the so-called "diffuse component" of PVL). However, diffuse lesions without necrosis are not PVL. Presentation It is often impossible to identify PVL based on the patient's physical or behavioral characteristics. The white matter in the periventricular regions is involved heavily in motor control, and so individuals with PVL often exhibit motor problems. However, since healthy newborns (especially premature infants) can perform very few specific motor tasks, early deficits are very difficult to identify. As the individual develops, the areas and extent of problems caused by PVL can begin to be identified; however, these problems are usually found after an initial diagnosis has been made. The extent of signs is strongly dependent on the extent of white matter damage: minor damage leads to only minor deficits or delays, while significant white matter damage can cause severe problems with motor coordination or organ function. Some of the most frequent signs include delayed motor development, vision deficits, apneas, low heart rates, and seizures. Delayed motor development Delayed motor development of infants affected by PVL has been demonstrated in multiple studies. One of the earliest markers of developmental delays can be seen in the leg movements of affected infants, as early as one month of age. Those with white matter injury often exhibit "tight coupling" of leg joints (all extending or all flexing) much longer than other infants (premature and full-term). Additionally, infants with PVL may not be able to assume the same positions for sleeping, playing, and feeding as premature or full-term children of the same age. These developmental delays can continue throughout infancy, childhood, and adulthood. 
Vision deficits Premature infants often exhibit visual impairment and motor deficits in eye control immediately after birth. However, the correction of these deficits occurs "in a predictable pattern" in healthy premature infants, and infants have vision comparable to full-term infants by 36 to 40 weeks after conception. Infants with PVL often exhibit decreased abilities to maintain a steady gaze on a fixed object and create coordinated eye movements. Additionally, children with PVL often exhibit nystagmus, strabismus, and refractive error. Seizures Occurrence of seizures is often reported in children with PVL. In an Israel-based study of infants born between 1995 and 2002, seizures occurred in 102 of 541, or 18.7%, of PVL patients. Seizures are typically seen in more severe cases of PVL, affecting patients with greater amounts of lesions and those born at lower gestational ages and birth weights. Causes Predisposing factors Those generally considered to be at greatest risk for PVL are premature, very low birth-weight infants. It is estimated that approximately 3-4% of infants who weigh less than 1,500 g (3.3 lb) have PVL, and 4-10% of those born prior to 33 weeks of gestation (but who survive more than three days postpartum) have the disorder. Gestational CMV infection also produces PVL in neonates. Injury pathway Two major factors appear to be involved in the development of PVL: (1) decreased blood or oxygen flow to the periventricular region (the white matter near the cerebral ventricles) and (2) damage to glial cells, the cells that support neurons throughout the nervous system. These factors are especially likely to interact in premature infants, resulting in a sequence of events that leads to the development of white matter lesions. The initial hypoxia (decreased oxygen flow) or ischemia (decreased blood flow) can occur for a number of reasons. 
Fetal blood vessels are thin-walled structures, and it is likely that the vessels providing nutrients to the periventricular region cannot maintain a sufficient blood flow during episodes of decreased oxygenation during development. Additionally, hypotension resulting from fetal distress or cesarean section births can lead to decreased blood and oxygen flow to the developing brain. These hypoxic-ischemic incidents can cause damage to the blood–brain barrier (BBB), a system of endothelial cells and glial cells that regulates the flow of nutrients to the brain. A damaged BBB can contribute to even greater levels of hypoxia. Alternatively, damage to the BBB can occur due to maternal infection during fetal development, fetal infections, or infection of the newly delivered infant. Because their cardiovascular and immune systems are not fully developed, premature infants are especially at risk for these initial insults. Damage caused to the BBB by hypoxic-ischemic injury or infection sets off a sequence of responses called the inflammatory response. Immediately after an injury, the nervous system generates "pro-inflammatory" cytokines, which are molecules used to coordinate a response to the insult. These cytokines are toxic to the developing brain, and their activity in an effort to respond to specific areas of damaged tissue is believed to cause "bystander damage" to nearby areas that were not affected by the original insult. Further damage is believed to be caused by free radicals, compounds produced during ischemic episodes. The processes affecting neurons also cause damage to glial cells, leaving nearby neurons with little or no support system. It is thought that other factors might lead to PVL, and researchers are studying other potential pathways. 
A 2007 article by Miller et al. provides evidence that white-matter injury is not a condition limited to premature infants: full-term infants with congenital heart diseases also exhibit a "strikingly high incidence of white-matter injury." In a study described by Miller, of 41 full-term newborns with congenital heart disease, 13 infants (32%) exhibited white matter injury. Diagnosis As previously noted, there are often few signs of white matter injury in newborns. Occasionally, physicians can make the initial observations of extreme stiffness or a poor ability to suckle. The preliminary diagnosis of PVL is often made using imaging technologies. In most hospitals, premature infants are examined with ultrasound soon after birth to check for brain damage. Severe white matter injury can be seen with a head ultrasound; however, the low sensitivity of this technology allows some white matter damage to be missed. Magnetic resonance imaging (MRI) is much more effective at identifying PVL, but it is unusual for preterm infants to receive an MRI unless they have had a particularly difficult course of development (including repeated or severe infection, or known hypoxic events during or immediately after birth). No agencies or regulatory bodies have established protocols or guidelines for screening of at-risk populations, so each hospital or doctor generally makes decisions regarding which patients should be screened with a more sensitive MRI instead of the basic head ultrasound. PVL is overdiagnosed by neuroimaging studies, while other white matter lesions of the brain are underestimated. It is important to differentiate PVL from the following major white matter lesions in the cerebral hemispheres: edematous hemorrhagic leukoencephalopathy (OGL), telencephalic gliosis (TG), diffuse leukomalacia (DFL), subcortical leukomalacia (SL), periventricular hemorrhagic infarction (PHI), intracerebral hemorrhage (ICH), multicystic encephalomalacia (ME), and subependymal pseudocysts. 
Diffuse white matter lesions of the cerebral hemispheres, accompanied by softening and spreading to the central and subcortical areas, are more likely DFL, PHI or ME. Prevention Preventing or delaying premature birth is considered the most important step in decreasing the risk of PVL. Common methods for preventing a premature birth include self-care techniques (dietary and lifestyle decisions), bed rest, and prescribed anti-contraction medications. Avoiding premature birth allows the fetus to develop further, strengthening the systems affected during the development of PVL. An emphasis on prenatal health and regular medical examinations of the mother can also notably decrease the risk of PVL. Prompt diagnosis and treatment of maternal infection during gestation reduces the likelihood of large inflammatory responses. Additionally, treatment of infection with steroids (especially in weeks 24–34 of gestation) has been indicated in decreasing the risk of PVL. It has also been suggested that avoiding maternal cocaine usage and any maternal-fetal blood flow alterations can decrease the risk of PVL. Episodes of hypotension or decreased blood flow to the infant can cause white matter damage. Treatment Current treatments Currently, there are no treatments prescribed for PVL. All treatments administered are in response to secondary pathologies that develop as a consequence of the PVL. Because white matter injury in the periventricular region can result in a variety of deficits, neurologists must closely monitor infants diagnosed with PVL in order to determine the severity and extent of their conditions. Patients are typically treated with an individualized treatment. It is crucial for doctors to observe and maintain organ function: visceral organ failure can potentially occur in untreated patients. Additionally, motor deficits and increased muscle tone are often treated with individualized physical and occupational therapy treatments. 
Treatment challenges The fetal and neonatal brain is a rapidly changing, developing structure. Because neural structures are still developing and connections are still being formed at birth, many medications that are successful for treatment and protection in the adult central nervous system (CNS) are ineffective in infants. Moreover, some adult treatments have actually been shown to be toxic to developing brains. Future treatments Although no treatments have been approved for use in human PVL patients, a significant amount of research is occurring in developing treatments for protection of the nervous system. Researchers have begun to examine the potential of synthetic neuroprotection to minimize the amount of lesioning in patients exposed to ischemic conditions. Prognosis The prognosis of patients with PVL is dependent on the severity and extent of white matter damage. Some children exhibit relatively minor deficits, while others have significant deficits and disabilities. Minor tissue damage Minor white matter damage usually is exhibited through slight developmental delays and deficits in posture, vision systems, and motor skills. Many patients exhibit spastic diplegia, a condition characterized by increased muscle tone and spasticity in the lower body. The gait of PVL patients with spastic diplegia exhibits an unusual pattern of flexing during walking. Progression Those patients with severe white matter injury typically exhibit more extensive signs of brain damage. Infants with severe PVL suffer from extremely high levels of muscle tone and frequent seizures. Children and adults may be quadriplegic, exhibiting a loss of function or paralysis of all four limbs. Cerebral palsy Many infants with PVL eventually develop cerebral palsy. The percentage of individuals with PVL who develop cerebral palsy is generally reported with significant variability from study to study, with estimates ranging from 20% to more than 60%. 
One of the reasons for this discrepancy is the large variability in severity of cerebral palsy. This range corresponds to the severity of PVL, which can also be quite variable. More white matter damage leads to more severe cerebral palsy; different subtypes are identified and diagnosed by a neurologist. Despite the varying grades of PVL and cerebral palsy, affected infants typically begin to exhibit signs of cerebral palsy in a predictable manner. Typically, some abnormal neurological signs (such as those previously mentioned) are visible by the third trimester of pregnancy (28 to 40 weeks after conception), and definitive signs of cerebral palsy are visible by six to nine months of age. Epilepsy Another common but severe outcome of PVL patients is the development of epilepsy. The link between the two is not entirely clear; however, it appears that both genetic and early environmental factors are involved. One study estimated that 47% of children with PVL also have epilepsy, with 78% of those patients having a form of epilepsy not easily managed by medication. Many of these affected patients exhibit some seizures, as well as spastic diplegia or more severe forms of cerebral palsy, before a diagnosis of epilepsy is made. Frequency Unfortunately, there are very few population-based studies on the frequency of PVL. As previously described, the highest frequency of PVL is seen in premature, very low birth weight infants. These infants are typically seen in the NICU in a hospital, with approximately 4-20% of patients in the NICU being affected by PVL. In large unselected autopsy series, PVL has been detected most frequently in male children with birth weights of 1500-2500 g who died at 6–8 days of life. Diffuse brain damage with softening (diffuse leukomalacia, DFL) is found more frequently in children weighing less than 1500 g. However, PVL is distinct from DFL. 
Research Animal research Animal models are frequently used to develop improved treatments for, and a more complete understanding of, PVL. A rat model that has white matter lesions and experiences seizures has been developed, as have other rodent models used in the study of PVL. These animal models can be used to examine the potential efficacy of new medications in the prevention and treatment of PVL. Clinical research Current clinical research ranges from studies aimed at understanding the progression and pathology of PVL to developing protocols for the prevention of PVL development. Many studies examine the trends in outcomes of individuals with PVL: a recent study by Hamrick et al. considered the role of cystic periventricular leukomalacia (a particularly severe form of PVL, involving the development of cysts) in the developmental outcome of the infant. Other ongoing clinical studies are aimed at the prevention and treatment of PVL: clinical trials testing neuroprotectants, prevention of premature births, and examining potential medications for the attenuation of white matter damage are all currently supported by NIH funding. References == External links ==
You act as a mediator between medical professionals and the public. Provide comprehensive explanations of medical terms, ensuring they are both precise and easily understood.
Please help me grasp the concept behind the medical term 'Umbilical hernia.'
An umbilical hernia is a health condition where the abdominal wall behind the navel is damaged. It may cause the navel to bulge outwards—the bulge consisting of abdominal fat from the greater omentum or occasionally parts of the small intestine. The bulge can often be pressed back through the hole in the abdominal wall, and may "pop out" when coughing or otherwise acting to increase intra-abdominal pressure. Treatment is surgical, and surgery may be performed for cosmetic as well as health-related reasons. Signs and symptoms A hernia is present at the site of the umbilicus (commonly called a navel or belly button) in newborns; although sometimes quite large, these hernias tend to resolve without any treatment by around the age of 2–3 years. Obstruction and strangulation of the hernia is rare because the underlying defect in the abdominal wall is larger than in an inguinal hernia of the newborn. The size of the base of the herniated tissue is inversely correlated with the risk of strangulation (i.e., a narrow base is more likely to strangulate). Babies are prone to this malformation because of the process during fetal development by which the abdominal organs form outside the abdominal cavity, later returning into it through an opening which will become the umbilicus. Hernias may be asymptomatic and present only as a bulge of the umbilicus. Symptoms may develop when the contracting abdominal wall causes pressure on the hernia contents. This results in abdominal pain or discomfort. These symptoms may be worsened by the patient lifting or straining. Causes The causes of umbilical hernia are congenital and acquired malformations; an apparent third cause, the paraumbilical hernia, is really a different type of hernia. Congenital Congenital umbilical hernia is a congenital malformation of the navel (umbilicus). Among adults, it is three times more common in women than in men; among children, the ratio is roughly equal. 
It is also found to be more common in children of African descent. Acquired An acquired umbilical hernia directly results from increased intra-abdominal pressure caused by obesity, heavy lifting, a long history of coughing, or multiple pregnancies. Another type of acquired umbilical hernia is the incisional hernia, which develops in a scar following abdominal surgery, e.g. after insertion of laparoscopy trocars through the umbilicus. Paraumbilical Importantly, an umbilical hernia must be distinguished from a paraumbilical hernia, which occurs in adults and involves a defect in the midline near the umbilicus, and from omphalocele. Diagnosis Navels with the umbilical tip protruding past the umbilical skin ("outies") are often mistaken for umbilical hernias, which are a completely different shape. Treatment for cosmetic purposes is not necessary, unless there are health concerns such as pain, discomfort or incarceration of the hernia content. Incarceration refers to the inability to reduce the hernia back into the abdominal cavity. Prolonged incarceration can lead to tissue ischemia (strangulation) and shock when untreated. Umbilical hernias are common. In a study involving Africans, 92% of children, 49% of adults, and 90% of pregnant women had protrusions. However, a much smaller number actually had hernias: only 23% of children, 8% of adults, and 15% of pregnant women. When the orifice is small (< 1 or 2 cm), 90% close within 3 years (some sources state 85% of all umbilical hernias, regardless of size), and if these hernias are asymptomatic, reducible, and don't enlarge, no surgery is needed (in other cases it must be considered). Treatment Children In some communities mothers routinely push the small bulge back in and tape a coin over the palpable hernia hole until closure occurs. This practice is not medically recommended as there is a small risk of trapping a loop of bowel under part of the coin, resulting in a small area of ischemic bowel.
This "fix" does not help, and germs may accumulate under the tape, causing infection. The use of bandages or other articles to continuously reduce the hernia is not evidence-based. An umbilical hernia can be fixed in two different ways. The surgeon can opt to stitch the walls of the abdomen, or he/she can place mesh over the opening and stitch it to the abdominal walls. The latter provides a stronger hold and is commonly used for larger defects in the abdominal wall. Most surgeons will not repair the hernia until 5–6 years after the baby is born. Most umbilical hernias in infants and children close spontaneously and rarely have complications of gastrointestinal-content incarcerations. How far the projection of the swelling extends from the surface of the abdomen (the belly) varies from child to child. In some, it may be just a small protrusion; in others it may be a large rounded swelling that bulges out when the baby cries. It may hardly be visible when the child is quiet or sleeping. Normally, the abdominal muscles converge and fuse at the umbilicus during the formation stage; however, in some cases there remains a gap where the muscles do not close, and through this gap the inner intestines come up and bulge under the skin, giving rise to an umbilical hernia. The bulge and its contents can easily be pushed back and reduced into the abdominal cavity. In contrast to an inguinal hernia, the complication incidence is very low, and in addition, the gap in the muscles usually closes with time and the hernia disappears on its own. The treatment of this condition is essentially conservative: observation, allowing the child to grow up and seeing if it disappears. Operation and closure of the defect is required only if the hernia persists after the age of 3 years or if the child has an episode of complication during the period of observation, like irreducibility, intestinal obstruction, abdominal distension with vomiting, or red shiny painful skin over the swelling.
Surgery is always done under anesthesia. The defect in the muscles is defined and the edges of the muscles are brought together with sutures to close the defect. In general, the child needs to stay in the hospital for 1 day and the healing is complete within 8 days. At times, there may be a fleshy red swelling seen in the hollow of the umbilicus that persists after the cord has fallen off. It may bleed on touch, or may stain the clothes that come in contact with it. This needs to be shown to a pediatric surgeon. It is most likely to be an umbilical polyp, and the therapy is to tie it at the base with a stitch so that it falls off and there is no bleeding. Alternatively, it may be an umbilical granuloma, which responds well to local application of dry salt or silver nitrate but may take a few weeks to heal and dry. Adults Many hernias never cause any problems and do not require any treatment at all. However, because the risk of complications is higher with age and the hernia is unlikely to resolve without treatment, surgery is usually recommended. Usually the hernia contains bowel, abdominal fat or omentum: tissue that would normally reside inside the abdominal cavity were it not for the hernia. In some cases, the content gets trapped in the hernia sac, outside the abdominal wall. The blood flow to this trapped tissue may be compromised, or the content even strangulated in some cases. Depending on the severity and duration of blood flow compromise, it can cause pain and discomfort. Usually the situation resolves itself when the protrusion of content is returned to the abdominal cavity. Sometimes this needs to be done by a doctor at the ICU. The hernia content becoming trapped, combined with severe pain, inability to perform bowel movements or pass gas, swelling, fever, nausea and/or discoloration over the area, could be signs of a prolonged compromise in blood flow of the hernia content.
If so, emergency surgery is often required, since prolonged compromise in blood flow otherwise threatens organ integrity. If hernias are symptomatic and disturb daily activity, or have had episodes of threatening incarceration, preventive surgical treatment can be considered. The surgery is performed under anaesthesia; the surgeon identifies the edges of the defect and brings them together permanently using either suture or mesh. Small umbilical hernias are often successfully repaired with suture, while larger hernias may require a suitable mesh, although some surgeons advocate mesh treatment for most hernias. The most common complications for both techniques are superficial wound infections and recurrence of the hernia, and some people experience pain at the surgical site. See also Fetal development Umbilicoplasty Paraumbilical hernia Omphalocoele References External links Overview at Cincinnati Children's Hospital Medical Center Overview at Penn State
You act as a mediator between medical professionals and the public. Provide comprehensive explanations of medical terms, ensuring they are both precise and easily understood.
I'm encountering the term 'Nevus of Ito' in medical literature. What's its definition?
Nevus of Ito (also known as "Nevus fuscoceruleus acromiodeltoideus") is a skin condition with similar features to the Nevus of Ota, but occurring in a different distribution. See also Skin lesion References == External links ==
You are a medical lexicon. Explain medical terminology with depth and clarity, making sure the information is both accurate and easy to grasp.
I'm seeking clarification on the medical term 'Stiff skin syndrome.' Could you explain it?
Stiff skin syndrome (also known as "Congenital fascial dystrophy") is a cutaneous condition characterized by 'rock hard' induration, thickening of the skin and subcutaneous tissues, limited joint mobility, and mild hypertrichosis in infancy or early childhood. Immunologic abnormalities or vascular hyperactivity are not present in patients. Not much is known about its cause or treatment, and further investigation is required, as it has only been reported 41 times throughout history. Genetics This condition is associated with mutations in the Fibrillin 1 (FBN1) gene. Diagnosis Differential diagnosis Other conditions associated with mutations in this gene include acromicric dysplasia, Marfan syndrome and its variant Marfanoid–progeroid–lipodystrophy syndrome, autosomal dominant Weill-Marchesani syndrome, isolated ectopia lentis, MASS phenotype, and Shprintzen-Goldberg syndrome. See also Scleroderma Self-healing papular mucinosis List of cutaneous conditions References == External links ==
You act as a medical communicator. Your goal is to translate medical terms into comprehensive, yet straightforward language for all readers.
The term 'Amoebiasis' keeps coming up in medical discussions. What does it stand for?
Amoebiasis, or amoebic dysentery, is an infection of the intestines caused by the parasitic amoeba Entamoeba histolytica. Amoebiasis can be present with no, mild, or severe symptoms. Symptoms may include lethargy, loss of weight, colonic ulcerations, abdominal pain, diarrhea, or bloody diarrhea. Complications can include inflammation and ulceration of the colon with tissue death or perforation, which may result in peritonitis. Anemia may develop due to prolonged gastric bleeding. Cysts of Entamoeba can survive for up to a month in soil or for up to 45 minutes under fingernails. Invasion of the intestinal lining results in bloody diarrhea. If the parasite reaches the bloodstream it can spread through the body, most frequently ending up in the liver where it can cause amoebic liver abscesses. Liver abscesses can occur without previous diarrhea. Diagnosis is typically made by stool examination using microscopy, but it can be difficult to distinguish E. histolytica from other harmless Entamoeba species. An increased white blood cell count may be present in severe cases. The most accurate test is finding specific antibodies in the blood, but it may remain positive following treatment. Bacterial colitis can result in similar symptoms. Prevention of amoebiasis is by improved sanitation, including separating food and water from faeces. There is no vaccine. There are two treatment options depending on the location of the infection. Amoebiasis in tissues is treated with either metronidazole, tinidazole, nitazoxanide, dehydroemetine or chloroquine, while luminal infection is treated with diloxanide furoate or iodoquinoline. Effective treatment against all stages of the disease may require a combination of medications. Infections without symptoms may be treated with just one antibiotic, and infections with symptoms are treated with two antibiotics. Amoebiasis is present all over the world, though most cases occur in the developing world.
About 480 million people are currently infected, with about 40 million new cases per year with significant symptoms. This results in the death of between 40,000 and 100,000 people a year. The first case of amoebiasis was documented in 1875, and in 1891 the disease was described in detail, resulting in the terms amoebic dysentery and amoebic liver abscess. Further evidence from the Philippines in 1913 found that upon swallowing cysts of E. histolytica volunteers developed the disease. Signs and symptoms Most infected people, about 90%, are asymptomatic, but this disease has the potential to become serious. It is estimated that about 40,000 to 100,000 people worldwide die annually due to amoebiasis. Infections can sometimes last for years if there is no treatment. Symptoms take from a few days to a few weeks to develop and manifest themselves, but usually it is about two to four weeks. Symptoms can range from mild diarrhea to dysentery with blood, coupled with intense abdominal pains. Extra-intestinal complications might also arise as a result of invasive infection, which includes colitis and liver, lung, or brain abscesses. The blood comes from bleeding lesions created by the amoebae invading the lining of the colon. In about 10% of invasive cases the amoebae enter the bloodstream and may travel to other organs in the body. Most commonly this means the liver, as this is where blood from the intestine reaches first, but they can end up almost anywhere in the body. Onset time is highly variable and the average asymptomatic infection persists for over a year. It is theorized that the absence of symptoms or their intensity may vary with such factors as strain of amoeba, immune response of the host, and perhaps associated bacteria and viruses. In asymptomatic infections, the amoeba lives by eating and digesting bacteria and food particles in the gut, a part of the gastrointestinal tract.
It does not usually come in contact with the intestine itself due to the protective layer of mucus that lines the gut. Disease occurs when the amoeba comes in contact with the cells lining the intestine. It then secretes the same substances it uses to digest bacteria, which include enzymes that destroy cell membranes and proteins. This process can lead to penetration and digestion of human tissues, resulting first in flask-shaped ulcerations in the intestine. Entamoeba histolytica ingests the destroyed cells by phagocytosis and is often seen with red blood cells inside (a process known as erythrophagocytosis) when viewed in stool samples. Especially in Latin America, a granulomatous mass (known as an amoeboma) may form in the wall of the ascending colon or rectum due to long-lasting immunological cellular response, and is sometimes confused with cancer. The ingestion of one viable cyst may cause an infection. Steroid therapy can occasionally provoke severe amoebic colitis in people with any E. histolytica infection. This bears high mortality: on average more than 50% with severe colitis die. Cause Amoebiasis is an infection caused by the amoeba Entamoeba histolytica. Transmission Amoebiasis is usually transmitted by the fecal-oral route, but it can also be transmitted indirectly through contact with dirty hands or objects as well as by anal-oral contact. Infection is spread through ingestion of the cyst form of the parasite, a semi-dormant and hardy structure found in feces. Any non-encysted amoebae, or trophozoites, die quickly after leaving the body but may also be present in stool: these are rarely the source of new infections.
Since amoebiasis is transmitted through contaminated food and water, it is often endemic in regions of the world with limited modern sanitation systems, including México, Central America, western South America, South Asia, and western and southern Africa. Amoebic dysentery is one form of travelers' diarrhea, although most travelers' diarrhea is bacterial or viral in origin. Pathogenesis Amoebiasis results from tissue destruction induced by the E. histolytica parasite. E. histolytica causes tissue damage by three main events: direct host cell killing, inflammation, and parasite invasion. Diagnosis With colonoscopy it is possible to detect small ulcers of between 3–5 mm, but diagnosis may be difficult as the mucous membrane between these areas can look either healthy or inflamed. Trophozoites may be identified at the ulcer edge or within the tissue, using immunohistochemical staining with specific anti-E. histolytica antibodies. Asymptomatic human infections are usually diagnosed by finding cysts shed in the stool. Various flotation or sedimentation procedures have been developed to recover the cysts from fecal matter, and stains help to visualize the isolated cysts for microscopic examination. Since cysts are not shed constantly, a minimum of three stools are examined. In symptomatic infections, the motile form (the trophozoite) is often seen in fresh feces. Serological tests exist, and most infected individuals (with symptoms or not) test positive for the presence of antibodies. The levels of antibody are much higher in individuals with liver abscesses. Serology only becomes positive about two weeks after infection. More recent developments include a kit that detects the presence of amoeba proteins in the feces, and another that detects amoeba DNA in feces. These tests are not in widespread use due to their expense. Microscopy is still by far the most widespread method of diagnosis around the world.
However, it is not as sensitive or accurate in diagnosis as the other tests available. It is important to distinguish the E. histolytica cyst from the cysts of nonpathogenic intestinal protozoa, such as Entamoeba coli, by its appearance. E. histolytica cysts have a maximum of four nuclei, while the commensal Entamoeba coli cyst has up to 8 nuclei. Additionally, in E. histolytica, the endosome is centrally located in the nucleus, while it is usually off-center in Entamoeba coli. Finally, chromatoidal bodies in E. histolytica cysts are rounded, while they are jagged in Entamoeba coli. However, other species, Entamoeba dispar and E. moshkovskii, are also commensals and cannot be distinguished from E. histolytica under the microscope. As E. dispar is much more common than E. histolytica in most parts of the world, this means that there is a lot of incorrect diagnosis of E. histolytica infection taking place. The WHO recommends that infections diagnosed by microscopy alone should not be treated if they are asymptomatic and there is no other reason to suspect that the infection is actually E. histolytica. Detection of cysts or trophozoites in stools under the microscope may require examination of several samples over several days to determine if they are present, because cysts are shed intermittently and may not show up in every sample. Typically, the organism can no longer be found in the feces once the disease goes extra-intestinal. Serological tests are useful in detecting infection by E. histolytica if the organism goes extra-intestinal and in excluding the organism from the diagnosis of other disorders. An Ova & Parasite (O&P) test or an E. histolytica fecal antigen assay is the proper assay for intestinal infections. Since antibodies may persist for years after clinical cure, a positive serological result may not necessarily indicate an active infection. A negative serological result, however, can be equally important in excluding suspected tissue invasion by E.
histolytica. Stool antigen detection tests have helped to overcome some of the limitations of stool microscopy. Antigen detection tests are easy to use, but they have variable sensitivity and specificity, especially in low-endemic areas. Polymerase chain reaction (PCR) is considered the gold standard for diagnosis but remains underutilized. Prevention To help prevent the spread of amoebiasis around the home: Wash hands thoroughly with soap and hot running water for at least 10 seconds after using the toilet or changing a baby's diaper, and before handling food. Clean bathrooms and toilets often; pay particular attention to toilet seats and taps. Avoid sharing towels or face washers. To help prevent infection: Avoid raw vegetables when in endemic areas, as they may have been fertilized using human feces. Boil water or treat with iodine tablets. Avoid eating street foods, especially in public places where others are sharing sauces in one container. Good sanitary practice, as well as responsible sewage disposal or treatment, is necessary for the prevention of E. histolytica infection on an endemic level. E. histolytica cysts are usually resistant to chlorination; therefore sedimentation and filtration of water supplies are necessary to reduce the incidence of infection. E. histolytica cysts may be recovered from contaminated food by methods similar to those used for recovering Giardia lamblia cysts from feces. Filtration is probably the most practical method for recovery from drinking water and liquid foods. E. histolytica cysts must be distinguished from cysts of other parasitic (but nonpathogenic) protozoa and from cysts of free-living protozoa, as discussed above. Recovery procedures are not very accurate; cysts are easily lost or damaged beyond recognition, which leads to many falsely negative results in recovery tests. Treatment E. histolytica infections occur in both the intestine and (in people with symptoms) in tissue of the intestine and/or liver.
Those with symptoms require treatment with two medications, an amoebicidal tissue-active agent and a luminal cysticidal agent. Individuals that are asymptomatic only need a luminal cysticidal agent. Prognosis In the majority of cases, amoebas remain in the gastrointestinal tract of the hosts. Severe ulceration of the gastrointestinal mucosal surfaces occurs in less than 16% of cases. In fewer cases, the parasite invades the soft tissues, most commonly the liver. Only rarely are masses formed (amoebomas) that lead to intestinal obstruction; these are sometimes mistaken for carcinoma of the caecum or an appendicular mass. Other local complications include bloody diarrhea and pericolic and pericaecal abscess. Complications of hepatic amoebiasis include subdiaphragmatic abscess, perforation of the diaphragm to the pericardium and pleural cavity, perforation into the abdominal cavity (amoebic peritonitis) and perforation of the skin (amoebiasis cutis). Pulmonary amoebiasis can occur from liver lesions by spread through the blood or by perforation of the pleural cavity and lung. It can cause lung abscess, pulmono-pleural fistula, empyema of the lung and broncho-pleural fistula. It can also reach the brain through blood vessels and cause amoebic brain abscess and amoebic meningoencephalitis. Cutaneous amoebiasis can also occur in skin around sites of colostomy wounds, the perianal region, the region overlying a visceral lesion and at the site of drainage of a liver abscess. Urogenital tract amoebiasis derived from intestinal lesions can cause amoebic vulvovaginitis (May's disease), rectovesical fistula and rectovaginal fistula. Entamoeba histolytica infection is associated with malnutrition and stunting of growth in children. Epidemiology Amoebiasis caused about 55,000 deaths worldwide in 2010, down from 68,000 in 1990. In older textbooks it is often stated that 10% of the world's population is infected with Entamoeba histolytica. Nevertheless, this means that there are up to 50 million true E.
histolytica infections and approximately seventy thousand die each year, mostly from liver abscesses or other complications. Although usually considered a tropical parasite, the first case reported (in 1875) was actually in St Petersburg in Russia, near the Arctic Circle. Infection is more common in warmer areas, but this is because of both poorer hygiene and the parasitic cysts surviving longer in warm moist conditions. History Amoebiasis was first described by Fedor A. Lösch in 1875, in northern Russia. The most dramatic incident in the US was the Chicago World's Fair outbreak in 1933, caused by contaminated drinking water. There were more than a thousand cases, with 98 deaths. It has been known since 1897 that at least one non-disease-causing species of Entamoeba existed (Entamoeba coli), but it was first formally recognized by the WHO in 1997 that E. histolytica was two species, despite this having first been proposed in 1925. In addition to the now-recognized E. dispar, evidence shows there are at least two other species of Entamoeba that look the same in humans: E. moshkovskii and Entamoeba bangladeshi. The reason these species haven't been differentiated until recently is because of the reliance on appearance. Joel Connolly of the Chicago Bureau of Sanitary Engineering brought the outbreak to an end when he found that defective plumbing permitted sewage to contaminate drinking water. In 1998 there was an outbreak of amoebiasis in the Republic of Georgia. Between 26 May and 3 September 1998, 177 cases were reported, including 71 cases of intestinal amoebiasis and 106 probable cases of liver abscess. The Nicobarese people have attested to the medicinal properties found in Glochidion calocarpum, a plant common to India, saying that its bark and seed are most effective in curing abdominal disorders associated with amoebiasis. Society and culture An outbreak of amoebic dysentery occurs in Diana Gabaldon's novel A Breath of Snow and Ashes.
References External links Amoebiasis - Centers for Disease Control and Prevention
You function as a medical informant. Please provide in-depth yet accessible descriptions of medical terms, suitable for a broad audience.
The term 'Hydrosalpinx' keeps coming up in medical discussions. What does it stand for?
A hydrosalpinx is a condition that occurs when a Fallopian tube is blocked and fills with serous or clear fluid near the ovary (distal to the uterus). The blocked tube may become substantially distended, giving the tube a characteristic sausage-like or retort-like shape. The condition is often bilateral, and the affected tubes may reach several centimeters in diameter. The blocked tubes cause infertility. A Fallopian tube filled with blood is a hematosalpinx, and with pus a pyosalpinx. Hydrosalpinx is a composite of the Greek words ὕδωρ (hydōr – "water") and σάλπιγξ (sálpinx – "trumpet"); its plural is hydrosalpinges. Signs and symptoms Symptoms can vary. Some patients have lower, often recurring, abdominal or pelvic pain, while others may be asymptomatic. As tubal function is impeded, infertility is a common symptom. Patients who are not trying to get pregnant and have no pain may go undetected. Endometriosis, ruptured appendicitis, and abdominal surgery sometimes are associated with the problem. As a reaction to injury, the body rushes inflammatory cells into the area, and inflammation and later healing result in loss of the fimbria and closure of the tube. These infections usually affect both Fallopian tubes, and although a hydrosalpinx can be one-sided, the other tube on the opposite side is often abnormal. By the time it is detected, the tubal fluid usually is sterile and does not contain an active infection. Cause The major cause of distal tubal occlusion is pelvic inflammatory disease, usually as a consequence of an ascending infection by chlamydia or gonorrhea. However, not all pelvic infections will cause distal tubal occlusion. Tubal tuberculosis is an uncommon cause of hydrosalpinx formation. While the cilia of the inner lining (endosalpinx) of the Fallopian tube beat towards the uterus, tubal fluid is normally discharged via the fimbriated end into the peritoneal cavity, from where it is cleared.
If the fimbriated end of the tube becomes agglutinated, the resulting obstruction does not allow the tubal fluid to pass; it accumulates and reverts its flow downstream, into the uterus, or production is curtailed by damage to the endosalpinx. This tube then is unable to participate in the reproductive process: sperm cannot pass, the egg is not picked up, and fertilization does not take place. Other causes of distal tubal occlusion include adhesion formation from surgery, endometriosis, and cancer of the tube, ovary or other surrounding organs. A hematosalpinx is most commonly associated with an ectopic pregnancy. A pyosalpinx is typically seen in a more acute stage of pelvic inflammatory disease and may be part of a tubo-ovarian abscess. Tubal phimosis refers to a situation where the tubal end is partially occluded; in this case fertility is impeded, and the risk of an ectopic pregnancy is increased. Diagnosis Hydrosalpinx may be diagnosed using ultrasonography, as the fluid-filled, elongated and distended tubes display their typical echolucent pattern. However, a small hydrosalpinx may be missed by sonography. During an infertility work-up a hysterosalpingogram, an X-ray procedure that uses a contrast agent to image the Fallopian tubes, shows the retort-like shape of the distended tubes and the absence of spillage of the dye into the peritoneum. If, however, there is a tubal occlusion at the utero-tubal junction, a hydrosalpinx may go undetected. When a hydrosalpinx is detected by a hysterosalpingogram, it is prudent to administer antibiotics to reduce the risk of reactivation of an inflammatory process. When laparoscopy is performed, the surgeon may note the distended tubes, identify the occlusion, and may also find associated adhesions affecting the pelvic organs. Laparoscopy not only allows for the diagnosis of hydrosalpinx, but also presents a platform for intervention (see management).
Prevention As pelvic inflammatory disease is the major cause of hydrosalpinx formation, steps to reduce sexually transmitted disease will reduce the incidence of hydrosalpinx. Also, as hydrosalpinx is a sequel to a pelvic infection, adequate and early antibiotic treatment of a pelvic infection is called for. Management For most of the past century patients with tubal infertility due to hydrosalpinx underwent tubal corrective surgery to open up the distally occluded end of the tubes (salpingostomy) and remove adhesions (adhesiolysis). Unfortunately, pregnancy rates tended to be low, as the infection process often had permanently damaged the tubes, and in many cases hydrosalpinges and adhesions formed again. Further, ectopic pregnancy is a typical complication. Surgical interventions can be done by laparotomy or laparoscopy. Non-infertile patients who suffer from severe chronic pain due to hydrosalpinx formation that is not relieved by pain management may consider surgical removal of the affected tubes (salpingectomy) or even a hysterectomy with removal of the tubes, possibly ovaries. IVF Among the main causes of female infertility, tubal factors account for 25–35%. In particular, hydrosalpinx is found in 10–30% of couples with infertility; this condition may impair fertility and IVF outcomes. With the advent of IVF, which bypasses the need for tubal function, a more successful treatment approach has become available for women who want to conceive. IVF has now become the major treatment for women with hydrosalpinx to achieve a pregnancy. Several studies have shown that IVF patients with untreated hydrosalpinx have lower conception rates than controls, and it has been speculated that the tubal fluid that enters the endometrial cavity alters the local environment or affects the embryo in a detrimental way. Indeed, the presence of hydrosalpinx prior to IVF treatments negatively affects pregnancy rates and increases the risk of spontaneous miscarriage.
Even if the exact mechanism remains unclear, these effects could be ascribed to a combination of mechanical effects of hydrosalpinx fluid, toxic effects on the embryo and altered endometrial receptivity. Thus, many specialists advocate that prior to an IVF attempt, the hydrosalpinx should be removed. However, the benefits seem to be higher when the hydrosalpinx is bilateral, visible on ultrasound, or both. Salpingectomy removes the chronically infected hydrosalpinx, decreasing the risk of infection after oocyte retrieval and increasing the accessibility of the ovary; however, it is a surgical procedure and could also affect the ovarian blood flow. History Regnier de Graaf may have been the first to understand basic tubal function, describe hydrosalpinx, and link the development of hydrosalpinx with female infertility. The usually infectious cause of the process was well known to physicians by the end of the nineteenth century. With the introduction of hysterosalpingography (1914) and tubal insufflation (1920) its non-surgical diagnosis became possible. Surgery was gradually displaced by IVF as the main treatment for tubal infertility after the birth of Louise Brown in 1978. References External links fact sheet, ASRM Medpix pictures of hydrosalpinx
You are a facilitator of medical knowledge. Provide thorough and accessible explanations of medical terms, catering to both specialists and non-specialists.
What does the medical term 'Tonsil carcinoma' encompass?
Carcinoma of the tonsil is a type of squamous cell carcinoma. The tonsil is the most common site of squamous cell carcinoma in the oropharynx. It comprises 23.1% of all malignancies of the oropharynx. The tumors frequently present at advanced stages, and around 70% of patients present with metastasis to the cervical lymph nodes. The most reported complaints include sore throat, otalgia or dysphagia. Some patients may complain of feeling the presence of a lump in the throat. Approximately 20% of patients present with a node in the neck as the only symptom. Main risk factors for developing carcinoma of the tonsil include tobacco smoking and regular intake of high amounts of alcohol. It has also been linked to human papilloma virus (HPV type HPV16). Other risk factors include poor maintenance of oral hygiene, a genetic predisposition leading to inclination towards development of throat cancer, immunocompromised states (such as post solid-organ transplant), chronic exposure to agents such as asbestos and perchloroethylene in certain occupations, radiation therapy and dietary factors. Signs and symptoms The early lesions are usually asymptomatic. Patients presenting with an advanced stage of the disease comprise around 66–77% of cases. The most important signs include a lump in the neck when palpated and weight loss. People may also present with fatigue as a symptom. The primary tumor does not have readily discernible signs or symptoms, as it grows within the tonsillar capsule. It is difficult to notice anything suspicious on examination of the tonsil other than slight enlargement or the development of firmness around the area. The carcinoma may occur in one or more sites deep within the tonsillar crypts. It may be accompanied by the enlargement of the tonsil. The affected tonsil grows into the oropharyngeal space, making it noticeable by the patient in the form of a neck mass, mostly in the jugulodigastric region.
As the tonsils contain a rich network of lymphatics, the carcinoma may metastasize to the neck lymph nodes, many of which are cystic. Extension of the tumor to the skull or mediastinum can occur. Additional symptoms include a painful throat, dysphagia, otalgia (due to cranial nerve involvement), foreign body sensation, bleeding, fixation of the tongue (infiltration of deep muscles) and trismus (if the pterygoid muscle is involved in the parapharyngeal space). On the other hand, the tumor may also present as a deep red or white fungating wound growing outwards, breaking the skin surface with a central ulceration. This wound-like ulcer fails to heal (non-healing), leading to bleeding, throat pain and other associated symptoms. During biopsy, the lesion may show three signs: gritty texture, firmness and cystification, owing to keratinization, fibrosis and necrosis respectively. Cervical lymphadenopathy may be present. Cause Smoking and alcohol abuse are the major risk factors. Viral causes have recently been taken under consideration as risk factors. Viruses such as Epstein–Barr virus (EBV) (mainly involved in causing nasopharyngeal carcinoma) and human papillomavirus are included in this category. Chewing of betel nut (Areca catechu) quid has been directly associated with oral cancers. It is also listed in the poisonous plant database of the U.S. Food and Drug Administration. An unbalanced diet, deficient in fruits and vegetables, has been shown to increase the risk of cancer. Pathophysiology Metastasis to regional lymph nodes is common, as the tonsil has a rich supply of lymphatics, giving tumor cells a route to metastasize to other lymph nodes (commonly those of the neck) and cause lymphadenopathy. The cervical lymphadenopathy can be ipsilateral (70% or more of patients) or bilateral (30% or fewer). 
Carcinoma of the tonsil usually spreads through cervical lymph node levels II, III, IV, V, and the retropharyngeal lymph nodes. The fourth edition of the WHO's classification of head and neck tumors subdivides squamous cell carcinoma of the tonsil into two types: HPV-positive and HPV-negative. HPV-positive tumors arise from the deep lymphoid tissue of the tonsillar crypts and are non-keratinizing. On the other hand, HPV-negative tumors develop from the tonsillar surface epithelium and hence show keratinizing dysplasia. Routes of metastasis Metastasis is common in tonsillar carcinoma. It largely depends on the stage of the cancer and the route through which the cancer cells metastasize. The cancer cells may spread to adjacent structures, to lymphatics or to distant locations in the body, producing secondary tumors. Local The tumor may spread locally to the soft palate and pillars, base of tongue, pharyngeal wall and hypopharynx. It may invade the pterygoid muscles and mandible, resulting in pain and trismus. The parapharyngeal space may also be invaded. Lymphatic 50% of patients have initial cervical node involvement at the time of presentation. The jugulodigastric nodes are the first to be involved. Distant metastasis The occurrence of distant metastasis varies extensively, ranging from 4% to 31% in clinical studies. Factors influencing the incidence of distant metastasis are: Location of the primary tumor. Initial staging. Histological differentiation. Loco-regional control of the primary tumor. The records of 471 male patients with tonsillar carcinoma seen at the Veterans Administration Medical Center, Hines, Illinois, have been reviewed to establish the incidence and site of distant metastasis. All the patients were histologically diagnosed and proven cases of tonsillar carcinoma. 72 (15%) of the 471 patients and 33 (29%) of 155 autopsied patients were reported to have distant metastasis. 
Squamous cell carcinoma was the most commonly reported cell type (88%); cases with lymphoepithelioma had the highest incidence of distant metastasis. The most common anatomical sites of distant metastasis include the lung, liver and bones. Thorough investigation of these organs is highly recommended before treatment as well as during follow-up. Diagnosis The first step in diagnosing tonsil carcinoma is to obtain an accurate history from the patient. The physician will also examine the patient for any indicative physical signs. A few tests may then be conducted, depending on the progress of the disease or at the physician's discretion. The tests include: fine needle aspiration, blood tests, MRI, X-rays and PET scan. Staging The staging of a tumor mass is based on TNM staging. The T stage is based on the tumor mass. The N stage is based on the extent of spread of cancer to the lymph nodes. Finally, the M stage indicates whether or not the cancer has spread beyond the head and neck. T staging The T stage is decided on the basis of physical examination and imaging of the tumor. N staging This stage is decided through assessment of the lymph nodes. M staging Based on examination of the entire body. Finally, the overall stage is decided by combining the above results according to the TNM staging chart. Treatment The treatment for tonsil carcinoma includes the following methods: Radiotherapy Early radio-sensitive tumors are treated by radiotherapy along with irradiation of the cervical nodes. The radiation uses high-energy X-rays, electron beams, or radioactive isotopes to destroy cancer cells. Chemotherapy Induction chemotherapy is used to shrink the tonsil tumor. It is given prior to other treatments, hence the term induction. After the therapy is completed, the patient is asked to rest and is evaluated over a period of time. 
Then the patient is given chemoradiation therapy (a combination of chemotherapy and radiation) to completely destroy the tumor cells. Surgery If radiation and chemotherapy are unable to destroy the tumor, surgical intervention is considered. Excision of the tonsil can be done for early superficial lesions. Large lesions, and those which invade bone, require wide surgical excision with hemimandibulectomy and neck dissection (Commando operation). Combination therapy Surgery may be combined with pre- or postoperative radiation. Chemotherapy may be given as an adjunct to surgery or radiation. Prognosis Prognosis is determined by various factors such as stage, human papillomavirus (HPV) status, lymphatic infiltration of cancer cells, spread of cancer cells beyond the lymph node capsule, margins of the tumor and the extent of metastasis. Many factors are unique to each individual patient and may affect the chances of success of the treatment. Factors determining the prognosis of tonsillar carcinoma are as follows: HPV status Tonsillar carcinoma can be either HPV-related or HPV-unrelated. Cases which are HPV-positive have been shown to have a better prognosis than those with HPV-negative oropharyngeal cancer. Stage The stage at which the cancer presents affects the type of definitive treatment, chance of cure, recurrence of cancer and survival rate of the patient. Generally the patient presents very late due to the lack of definitive symptoms in the early stages of the disease. Nearly three-fourths of patients present in Stage III or later. The stages of oropharyngeal cancer are as follows: • Stage 0 (carcinoma in situ): This stage indicates a good prognosis, as most patients with stage 0 survive for a long period without requiring intensive treatment. However, the patient must cease smoking, as it can increase the risk of developing a new cancer. • Stage I and II: Most patients presenting at this stage receive successful treatment, showing a good prognosis. 
The modes of treatment for this stage include chemotherapy, surgery, radiation therapy or chemoradiation. The main treatment at this stage is radiation, targeting the tumor and the cervical lymph nodes. Surgical removal of the tumor and lymphadenectomy of the cervical (neck) lymph nodes can also be taken up as the main treatment method instead of radiation. Any cancer cells remaining after surgery are treated with chemoradiation. • Stage III and IVA: In this stage the cancer cells metastasize into the local tissues and cervical lymph nodes. The treatment used in these cases is chemoradiation. Any cancer cells remaining after chemoradiation are surgically removed. Lymphadenectomy may also be done after treatment with chemoradiation if the cancer cells have infiltrated the cervical lymph nodes. Another approach is, first, surgical removal of the tumor as well as the cervical lymph nodes, followed by chemoradiation or radiation to decrease the chances of recurrence. • Stage IVB: In this stage the cancer has already undergone distant metastasis, hence showing poor prognosis. The treatment includes chemotherapy, cetuximab or both. Radiation may be used to help relieve symptoms arising from the cancer and also to prevent further development of complications. Lymphatic infiltration Nearly half of the patients with anterior pillar lesions, and three-fourths of the patients with tonsillar fossa lesions, have nodal metastasis at the time of presentation. Metastasis of cancer cells to cervical lymph nodes diminishes the chance of cure, especially if there is evidence of spread beyond the lymph node capsule. Some data indicate that spread of cancer cells outside the lymph node capsule carries a worse prognosis for HPV-unrelated oropharyngeal cancer than for HPV-related oropharyngeal cancer. Tumor extension Extension of the tumor to the base of the tongue reduces the chances of cure drastically. 
It also increases the chances of recurrence after treatment. Metastasis Spread of cancer cells to local structures like tissues, vessels, large nerves and lymphatics worsens a patient's prognosis. A study compared the survival rate in HPV-related oropharyngeal carcinoma with that in HPV-unrelated oropharyngeal carcinoma. It revealed that, for Stage III and Stage IV oropharyngeal carcinoma, there was a discrepancy in survival after three years depending on the HPV status of the patient: survival was 82% in HPV-positive and 57% in HPV-negative cancers. References == External links ==
You are a resource for medical understanding. Offer detailed explanations of medical terms, making complex concepts clear and comprehensible.
Could you provide a brief overview of 'Sofosbuvir' in a medical context?
Sofosbuvir, sold under the brand name Sovaldi among others, is a medication used to treat hepatitis C. It is taken by mouth. Common side effects include fatigue, headache, nausea, and trouble sleeping. Side effects are generally more common in interferon-containing regimens. Sofosbuvir may reactivate hepatitis B in those who have been previously infected. In combination with ledipasvir, daclatasvir or simeprevir, it is not recommended with amiodarone due to the risk of an abnormally slow heartbeat. Sofosbuvir is in the nucleotide analog family of medications and works by blocking the hepatitis C NS5B protein. Sofosbuvir was discovered in 2007 and approved for medical use in the United States in 2013. It is on the World Health Organization's List of Essential Medicines. Medical uses Initial HCV treatment In 2016, the American Association for the Study of Liver Diseases and the Infectious Diseases Society of America jointly published a recommendation for the management of hepatitis C. In this recommendation, sofosbuvir used in combination with other drugs is part of all first-line treatments for HCV genotypes 1, 2, 3, 4, 5, and 6, and is also part of some second-line treatments. Sofosbuvir in combination with velpatasvir is recommended for all genotypes, with a cure rate greater than 90%, and close to 100% in most cases. The duration of treatment is typically 12 weeks. Sofosbuvir is also used with other medications and longer treatment durations, depending on specific circumstances, genotype and cost-effectiveness considerations. For example, for the treatment of genotype 1, 4, 5, and 6 hepatitis C infections, sofosbuvir can be used in combination with the viral NS5A inhibitor ledipasvir. In genotype 2 and 3 HCV infections, sofosbuvir can be used in combination with daclatasvir. For the treatment of cases with cirrhosis or liver transplant patients, weight-based ribavirin is sometimes added. 
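As an illustrative summary only (not clinical guidance), the genotype-to-combination pairings named in this section can be sketched as a simple lookup. The table below encodes only the regimens the text above mentions; real regimen choice also depends on cirrhosis status, prior treatment, and current guidelines:

```python
# Illustrative lookup of the sofosbuvir combinations named in the text above.
# NOT clinical guidance: durations, ribavirin use, and second-line options
# depend on patient factors this sketch deliberately omits.
FIRST_LINE = {
    "1": ["sofosbuvir + velpatasvir", "sofosbuvir + ledipasvir"],
    "2": ["sofosbuvir + velpatasvir", "sofosbuvir + daclatasvir"],
    "3": ["sofosbuvir + velpatasvir", "sofosbuvir + daclatasvir"],
    "4": ["sofosbuvir + velpatasvir", "sofosbuvir + ledipasvir"],
    "5": ["sofosbuvir + velpatasvir", "sofosbuvir + ledipasvir"],
    "6": ["sofosbuvir + velpatasvir", "sofosbuvir + ledipasvir"],
}

def options(genotype: str) -> list:
    """Return the combinations this section names for a given HCV genotype."""
    return FIRST_LINE.get(genotype, [])

print(options("3"))  # ['sofosbuvir + velpatasvir', 'sofosbuvir + daclatasvir']
```

Note that sofosbuvir/velpatasvir appears for every genotype, reflecting the pangenotypic recommendation described above.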
Peginterferon with or without sofosbuvir is not recommended for initial HCV treatment. Compared to previous treatments, sofosbuvir-based regimens provide a higher cure rate, fewer side effects, and a two- to four-fold reduction in therapy duration. Sofosbuvir allows most people to be treated successfully without the use of peginterferon, an injectable drug with severe side effects that is a key component of older drug combinations for the treatment of hepatitis C virus. Prior failed treatment For people who have experienced treatment failure with some form of combination therapy for hepatitis C infection, one of the next possible steps would be retreatment with sofosbuvir and either ledipasvir or daclatasvir, with or without weight-based ribavirin. The genotype and the particular combination therapy the person was on when the initial treatment failed are also taken into consideration when deciding which combination to use next. The duration of retreatment can range from 12 weeks to 24 weeks depending on several factors, including which medications are used for the retreatment, whether the person has liver cirrhosis or not, and whether the liver damage is classified as compensated or decompensated cirrhosis. Pregnancy and breastfeeding No adequate human data are available to establish whether or not sofosbuvir poses a risk to pregnancy outcomes. However, ribavirin, a medication that is often given together with sofosbuvir to treat hepatitis C, is assigned Pregnancy Category X (contraindicated in pregnancy) by the FDA. Pregnant women with hepatitis C who take ribavirin have shown some cases of birth defects and death in the fetus. It is recommended that sofosbuvir/ribavirin combinations be avoided in pregnant women and their male sexual partners in order to reduce harmful fetal defects caused by ribavirin. 
Women who could potentially become pregnant should undergo a pregnancy test 2 months prior to starting the sofosbuvir/ribavirin/peginterferon combination treatment, monthly throughout the duration of the treatment, and for six months post-treatment to reduce the risk of fetal harm in case of accidental pregnancy. It is unknown whether sofosbuvir and ribavirin pass into breast milk; therefore, it is recommended that the mother not breastfeed during treatment with sofosbuvir alone or in combination with ribavirin. Contraindications There are no specific contraindications for sofosbuvir when used alone. However, when used in combination with ribavirin, peginterferon alfa/ribavirin, or other agents, the contraindications applicable to those agents apply. Side effects Sofosbuvir used alone and in combination with other drugs, such as ribavirin with or without a peginterferon, has a good safety profile. Common side effects are fatigue, headache, nausea, rash, irritability, dizziness, back pain, and anemia. Most side effects are more common in interferon-containing regimens than in interferon-free regimens. For example, fatigue and headache are reduced by nearly half, influenza-like symptoms are reduced to 3–6% compared to 16–18%, and neutropenia is almost absent in interferon-free treatment. Sofosbuvir may reactivate hepatitis B in those who have been previously infected. The European Medicines Agency (EMA) has recommended screening all people for hepatitis B before starting sofosbuvir for hepatitis C in order to minimize the risk of hepatitis B reactivation. Interactions Sofosbuvir (in combination with ledipasvir, daclatasvir or simeprevir) should not be used with amiodarone due to the risk of abnormally slow heartbeats. Sofosbuvir is a substrate of P-glycoprotein, a transporter protein that pumps drugs and other substances from intestinal epithelium cells back into the gut. Therefore, inducers of intestinal P-glycoprotein, such as rifampicin and St. 
John's wort, could reduce the absorption of sofosbuvir. In addition, coadministration of sofosbuvir with anticonvulsants (carbamazepine, phenytoin, phenobarbital, oxcarbazepine), antimycobacterials (rifampin, rifabutin, rifapentine), or the HIV protease inhibitor tipranavir with ritonavir is expected to decrease sofosbuvir concentration; thus, coadministration is not recommended. The interactions between sofosbuvir and a number of other drugs, such as ciclosporin, darunavir/ritonavir, efavirenz, emtricitabine, methadone, raltegravir, rilpivirine, tacrolimus, and tenofovir disoproxil, were evaluated in clinical trials, and no dose adjustment is needed for any of these drugs. Pharmacology Mechanism of action Sofosbuvir inhibits the hepatitis C NS5B protein. Sofosbuvir appears to have a high barrier to the development of resistance. Sofosbuvir is a prodrug of the ProTide type, whereby the active phosphorylated nucleotide is granted cell permeability and oral bioavailability. It is metabolized to the active antiviral agent GS-461203 (2′-deoxy-2′-α-fluoro-β-C-methyluridine-5′-triphosphate). GS-461203 serves as a defective substrate for the NS5B protein, which is the viral RNA polymerase, and thus acts as an inhibitor of viral RNA synthesis. Although sofosbuvir has a 3′ hydroxyl group to act as a nucleophile for an incoming NTP, a similar nucleotide analogue, 2′-deoxy-2′-α-fluoro-β-C-methylcytidine, is proposed to act as a chain terminator because the 2′ methyl group of the nucleotide analogue causes a steric clash with an incoming NTP. Sofosbuvir would act in a similar way. Pharmacokinetics Sofosbuvir is only administered orally. The peak concentration after oral administration occurs 0.5–2 hours post-dose, regardless of initial dose. Peak plasma concentration of the main circulating metabolite GS-331077 occurs 2–4 hours post-dose. 
GS-331077 is the pharmacologically inactive nucleoside. Plasma protein binding of sofosbuvir is 61–65%, while GS-331077 has minimal binding. Sofosbuvir is activated in the liver to the triphosphate GS-461203 by hydrolysis of the carboxylate ester by either of the enzymes cathepsin A or carboxylesterase 1, followed by cleaving of the phosphoramidate by the enzyme histidine triad nucleotide-binding protein 1 (HINT1), and subsequent repeated phosphorylation. Dephosphorylation creates the inactive metabolite GS-331077. The half-life of sofosbuvir is 0.4 hours, and the half-life of GS-331077 is 27 hours. Following a single 400 mg oral dose of sofosbuvir, 80% is recovered in urine, 14% in feces, and 2.5% in expired air. However, of the urine recovery, 78% was the metabolite (GS-331077) and 3.5% was sofosbuvir. Chemistry Prior to the discovery of sofosbuvir, a variety of nucleoside analogs had been examined as anti-hepatitis C treatments, but these exhibited relatively low potency. This low potency arose in part because the enzymatic addition of the first of the three phosphate groups of the triphosphate is slow. The design of sofosbuvir, based on the ProTide approach, avoids this slow step by building the first phosphate group into the structure of the drug during synthesis. Additional groups are attached to the phosphorus to temporarily mask the two negative charges of the phosphate group, thereby facilitating entry of the drug into the infected cell. The NS5B protein is an RNA-dependent RNA polymerase critical for the viral reproduction cycle. History Sofosbuvir was discovered in 2007 by Michael Sofia, a scientist at Pharmasset, and the drug was first tested in people in 2010. In 2011, Gilead Sciences bought Pharmasset for about $11 billion. Gilead submitted the New Drug Application for sofosbuvir in combination with ribavirin in April 2013, and in October 2013 it received the FDA's Breakthrough Therapy Designation. 
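The half-lives quoted in the pharmacokinetics discussion above (0.4 hours for sofosbuvir, 27 hours for its main circulating metabolite) imply very different clearance timescales. A minimal sketch of the implied exponential decay, assuming simple first-order (one-compartment) elimination, which is a standard simplification and not a claim from the source:

```python
# Fraction of a dose remaining after t hours, assuming simple first-order
# (exponential) elimination: fraction = 0.5 ** (t / half_life).
def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    return 0.5 ** (t_hours / half_life_hours)

# Using the half-lives quoted above: sofosbuvir 0.4 h, metabolite 27 h.
print(fraction_remaining(2.0, 0.4))    # five half-lives: ~3% of parent drug left
print(fraction_remaining(24.0, 27.0))  # metabolite still above half after a day
```

This illustrates why the inactive metabolite, rather than the parent drug, dominates plasma measurements hours after dosing.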
In December 2013, the FDA approved sofosbuvir in combination with ribavirin for oral dual therapy of HCV genotypes 2 and 3, and for triple therapy with injected pegylated interferon (pegIFN) and RBV for treatment-naive people with HCV genotypes 1 and 4. Two months before, the FDA had approved another drug, simeprevir, as a hepatitis C treatment. In 2014, the fixed-dose combination drug sofosbuvir/ledipasvir, the latter a viral NS5A inhibitor, was approved; it had also been granted breakthrough status. Prior to the availability of sofosbuvir, hepatitis C treatments involved 6 to 12 months of treatment with an interferon-based regimen. This regimen provided cure rates of 70% or less and was associated with severe side effects, including anemia, depression, severe rash, nausea, diarrhea, and fatigue. As sofosbuvir clinical development progressed, physicians began to "warehouse" people in anticipation of its availability. Sofosbuvir's U.S. launch was the fastest of any new drug in history. Society and culture Sofosbuvir is on the World Health Organization's List of Essential Medicines. Economics Following its approval by the FDA in 2013, the price of sofosbuvir as quoted in various media sources in 2014 ranged from $84,000 to $168,000 depending on the course of treatment in the U.S., and £35,000 in the United Kingdom for a 12-week regimen, causing considerable controversy. Sofosbuvir was more affordable in Japan and South Korea at approximately $300 and $5,900 respectively for a 12-week treatment, with each government covering 99% and 70% of the cost respectively. In 2014, Gilead announced it would work with generic manufacturers in 91 developing countries to produce and sell sofosbuvir, and that it would sell a name-brand version of the product in India for approximately $300 per course of treatment; it had signed agreements with generic manufacturers by September 2015. 
United States Since its launch, the price of sofosbuvir declined as more competitors entered the direct-acting antiviral (DAA) market. In 2020, the price for a course of sofosbuvir was $64,693 in the United States. In 2014, the list price of a 12-week combination treatment with a sofosbuvir-based regimen ranged from US$84,000 to $94,000. In April 2014, U.S. House Democrats Henry Waxman, Frank Pallone Jr., and Diana DeGette wrote to Gilead Sciences Inc. questioning the $84,000 price for sofosbuvir. They specifically asked Gilead CEO John Martin to "explain how the drug was priced, what discounts are being made available to low-income patients and government health programs, and the potential impact to public health by insurers blocking or delaying access to the medicine because of its cost." Sofosbuvir is cited as an example of how specialty drugs present both benefits and challenges: "Sofosbuvir also is an excellent example of both the benefit and the challenge of specialty medications. On one hand, this agent offers up to a 95% response rate as part of an interferon-free treatment regimen for hepatitis C. Generally speaking, it is more effective and better tolerated than alternative treatments. Unfortunately, the current per pill cost—$1,000—results in an $84,000 treatment course, creating barriers to therapy for many. Patients, providers, and payors alike have expressed outrage, and the debate has even drawn the attention of the US Congress. Despite these concerns, sofosbuvir rapidly has become a top seller in the United States..." In February 2015, Gilead announced that, due in part to negotiated discounts with pharmacy benefit managers and legally mandated discounts to government payers, the average discount to list price in 2014 was 22%. The company estimated that the average discount in 2015 would be 46%. 
According to the California Technology Assessment Forum, a panel of academic pharmacoeconomic experts, representatives of managed care organizations, and advocates for people with hepatitis, a 46% discount would bring the average price of treatment to about $40,000, at which price sofosbuvir-based treatment regimens represent a "high value" for people and healthcare systems. Because of sofosbuvir's high price in the United States, by 2017 some states, such as Louisiana, were withholding the medicine from Medicaid patients with hepatitis until their livers were severely damaged. This puts "patients at increased risk of medical complications" and contributes to the "transmission of the hepatitis C virus". In an article published in May 2016 in Health Affairs, the authors proposed the invocation of the federal "government patent use" law, which would enable the government to procure "important patent-protected" drugs at lower prices while compensating "the patent-holding companies reasonable royalties ... for research and development." By July 2017, Louisiana's health secretary Rebekah Gee, who described Louisiana as America's "public-health-crisis cradle", was investigating the use of "government patent use" as a strategy. Japan and South Korea Unlike other comparable Western developed countries, sofosbuvir is far more affordable in Japan and South Korea, at approximately $300 and $2,165 cost to patients respectively for a 12-week treatment, as each government covers 99% and 70% of the original cost respectively. Germany In Germany, negotiations between Gilead and health insurers led to a price of €41,000 for 12 weeks of treatment. This is the same price previously negotiated with the national healthcare system in France, except that additional discounts and rebates apply in France depending on the volume of sales and the number of treatment failures. Switzerland In Switzerland, the price is fixed by the government every three years. 
The price in 2016 was CHF 16,102.50 for 24 pills of 400 mg. United Kingdom In 2020, the originator price per course of sofosbuvir was £35,443. In 2013, the price in the United Kingdom was expected to be £35,000 for a 12-week course. NHS England established 22 Operational Delivery Networks to roll out delivery, which was approved by the National Institute for Health and Care Excellence in 2015, and proposed to fund 10,000 courses of treatment in 2016–17. Each network was given a "run rate" of how many people it was allowed to treat, and this was the NHS's single biggest new treatment investment in 2016. Croatia As of 2015, sofosbuvir is included on the list of essential medications in Croatia and its cost is fully covered by the Croatian Health Insurance Fund. As a result of negotiations with the manufacturer, only therapies with a successful outcome would be paid for by the Fund, with the rest being covered by the manufacturer. India In July 2014, Gilead Sciences filed a patent for sofosbuvir in India. If the office of the controller general of patents had granted it, Gilead would have obtained exclusive rights to produce and sell sofosbuvir in the country. However, in January 2015, the Indian Patent Office rejected Gilead's application. Gilead's lawyers moved the Delhi High Court against this decision, and the rejection was overturned on appeal in February 2015. In the meantime, Gilead granted Indian companies voluntary licenses (VLs), which allowed them to make and sell the drug in a selected few countries at a discounted price. This agreement also granted 7% of the royalties to Gilead. However, the list of countries open to Indian firms under this agreement excluded countries home to 73 million people with hepatitis C. Developing world In 2014, Gilead announced it would seek generic licensing agreements with manufacturers to produce sofosbuvir in 91 developing countries, which contained 54% of the world's HCV-infected population. 
Gilead also said it would sell a name-brand version of the product in India for $300 per course of treatment, approximately double a third-party estimate of the minimum achievable cost of manufacture. It had signed licenses with generic manufacturers by September 2015. The leader of one Indian activist group called this move inadequate, but nine companies launched products, which "unleashed a fierce marketing war", according to India's The Economic Times. In Egypt, which had the world's highest incidence of hepatitis C, Gilead offered sofosbuvir at the discounted price of $900 to the Egyptian government. The government in turn made it free to patients. Later, Gilead licensed a generic version to be made available in Egypt. The Access to Medicine Index ranked Gilead first among the world's 20 largest pharmaceutical companies in the Pricing, Manufacturing and Distribution category in both 2013 and 2014, citing Gilead's "leading performance in equitable pricing." In contrast, Jennifer Cohn of Doctors Without Borders and the organization Doctors of the World criticized the price of sofosbuvir as reflecting "corporate greed" and ignoring the needs of people in developing countries. In Algeria, as of 2011 about 70,000 people were infected with hepatitis C. As of August 2015, Gilead had licensed its partners in India to sell sofosbuvir in Algeria. It had been criticized for not making the drug available in middle-income countries, including Algeria, prior to that. Controversies The price has generated considerable controversy. In 2017, the range of costs per treatment varied from about $84,000 to about $50. Patent challenges In February 2015, it was reported that Doctors of the World had submitted an objection to Gilead's patent at the European Patent Office, claiming that the structure of sofosbuvir is based on already known molecules. 
In particular, Doctors of the World argued that the ProTide technology powering sofosbuvir was previously invented by Chris McGuigan's team at Cardiff University in the UK, and that the Gilead drug is therefore not inventive. The group filed challenges in other developing countries as well. These challenges were unsuccessful, and the court recognized the inventive step in prodrug design and its application to the specific example of sofosbuvir. Medical tourism Due to the high cost of sofosbuvir in the U.S., as of 2016 increasing numbers of Americans with hepatitis C were traveling to India to purchase the drug. Similarly, increasing numbers of Chinese were also traveling to India to purchase sofosbuvir, which had not yet been approved for sale in China by the country's State Food and Drug Administration (SFDA). Research Combinations of sofosbuvir with NS5A inhibitors, such as daclatasvir, ledipasvir or velpatasvir, have shown sustained virological response rates of up to 100% in people infected with HCV. Most studies indicate that the efficacy rate is between 94% and 97%, much higher than previous treatment options. That treatments could be conducted at very low cost was demonstrated by Hill and coworkers, who presented data on 1,160 patients who used generic versions of sofosbuvir, ledipasvir, plus daclatasvir from suppliers in India, Egypt, China and other countries and reported over 90% success at a cost of about $50 per therapy. Sofosbuvir has also been tested against other viruses such as the Zika virus and severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). See also AT-527—a similar drug developed for the treatment of SARS-CoV-2 Tenofovir alafenamide—a nucleotide reverse-transcriptase inhibitor that uses similar phosphoramidate prodrug technology Remdesivir—a nucleotide analogue RNA polymerase inhibitor originally intended to treat hepatitis C that uses similar phosphoramidate prodrug technology and displays very similar PK. 
References Further reading Dean L (2017). "Sofosbuvir Therapy and IFNL4 Genotype". In Pratt VM, McLeod HL, Rubinstein WS, et al. (eds.). Medical Genetics Summaries. National Center for Biotechnology Information (NCBI). PMID 28520377. Bookshelf ID: NBK409960. External links "Sofosbuvir". Drug Information Portal. U.S. National Library of Medicine.
You act as a mediator between medical professionals and the public. Provide comprehensive explanations of medical terms, ensuring they are both precise and easily understood.
I'm seeking clarification on the medical term 'Yaws.' Could you explain it?
Yaws is a tropical infection of the skin, bones, and joints caused by the spirochete bacterium Treponema pallidum pertenue. The disease begins with a round, hard swelling of the skin, 2 to 5 cm (0.79 to 1.97 in) in diameter. The center may break open and form an ulcer. This initial skin lesion typically heals after 3–6 months. After weeks to years, joints and bones may become painful, fatigue may develop, and new skin lesions may appear. The skin of the palms of the hands and the soles of the feet may become thick and break open. The bones (especially those of the nose) may become misshapen. After 5 years or more, large areas of skin may die, leaving scars. Yaws is spread by direct contact with the fluid from a lesion of an infected person. The contact is usually of a nonsexual nature. The disease is most common among children, who spread it by playing together. Other related treponemal diseases are bejel (T. pallidum endemicum), pinta (T. carateum), and syphilis (T. p. pallidum). Yaws is often diagnosed by the appearance of the lesions. Blood antibody tests may be useful, but cannot separate previous from current infections. Polymerase chain reaction is the most accurate method of diagnosis. No vaccine has yet been found. Prevention is, in part, done by curing those who have the disease, thereby decreasing the risk of transmission. Where the disease is common, treating the entire community is effective. Improving cleanliness and sanitation also decreases spread. Treatment is typically with antibiotics, including azithromycin by mouth or benzathine penicillin by injection. Without treatment, physical deformities occur in 10% of cases. Yaws is common in at least 13 tropical countries as of 2012. Almost 85% of infections occurred in three countries: Ghana, Papua New Guinea, and the Solomon Islands. The disease only infects humans. Efforts in the 1950s and 1960s by the World Health Organization decreased the number of cases by 95%. 
Since then, cases have increased, prompting renewed efforts to eradicate the disease globally by 2020. In 1995, the number of people infected was estimated at more than 500,000. In 2016, the number of reported cases was 59,000. Although one of the first descriptions of the disease was made in 1679 by Willem Piso, archaeological evidence suggests that yaws may have been present among human ancestors as far back as 1.6 million years ago. Signs and symptoms Yaws is classified as primary, secondary, and tertiary; this is useful, but people often have a mix of stages. Within 9–90 days (but usually about 21 days) of infection, a painless but distinctive "mother yaw" nodule appears. Initially reddened and inflamed, it may become a papilloma, which can then become an ulcer, possibly with a yellow crust. Mother yaws are most commonly found on the legs and ankles, and are rarely found on the genitals (unlike syphilis). The mother yaw enlarges and becomes warty in appearance. Nearby "daughter yaws" may also appear simultaneously. This primary stage resolves completely, with scarring, within 3–6 months. The scar is often pigmented. The secondary stage occurs months to two years later (but usually 1–2 months later), and may thus begin when the mother yaw has not yet healed. It happens when the bacterium spreads in the blood and lymph. It begins as multiple, pinhead-like papules; these initial lesions grow and change in appearance and may last weeks before healing, with or without scarring. Secondary yaws typically shows widespread skin lesions that vary in appearance, including "crab yaws" (areas of skin of abnormal colour) on the palms of the hands and soles of the feet (named for the crab-like gait they cause people with painful soles to assume). These may show desquamation. These secondary lesions frequently ulcerate and are then highly infectious, but heal after 6 months or more. Secondary yaws affects the skin and bones. 
The most common bone-related problem is periostitis, an inflammation around the bone, which often occurs in the bones of the fingers and the long bones of the lower arms and legs, causing swollen fingers and limbs. It causes pain at night and thickening of the affected bones. About 75% of infected children surveyed in Papua New Guinea reported joint pain. Swollen lymph nodes, fever, and malaise are also common. After primary and secondary yaws (and possibly, in some cases, without these phases), a latent infection develops. Within five years (rarely, within ten years) it can relapse and become active again, causing further secondary lesions, which may infect others. These relapse lesions are most commonly found around the armpits, mouth, and anus. An estimated 10% of people with yaws were formerly thought to develop tertiary disease symptoms, but more recently, tertiary yaws has been less frequently reported. Tertiary yaws can include gummatous nodules. It most commonly affects the skin. The skin of the palms and soles may thicken (hyperkeratosis). Nodules ulcerating near joints can cause tissue death. Periostitis can be much more severe. The shinbones may become bowed (saber shin) from chronic periostitis. Yaws may or may not have cardiovascular or neurological effects; definitive evidence is lacking. Rhinopharyngitis mutilans Rhinopharyngitis mutilans, also known as gangosa, is a destructive ulcerative condition that usually originates about the soft palate and spreads into the hard palate, nasopharynx, and nose, resulting in mutilating cicatrices, and outward to the face, eroding intervening bone, cartilage, and soft tissues. It occurs in late stages of yaws, usually 5 to 10 years after first symptoms of infection. This is now rare. Very rarely, yaws may cause bone spurs in the upper jaw near the nose (gondou); gondou was rare even when yaws was a common disease. 
Cause The disease is transmitted by skin-to-skin contact with an infective lesion, with the bacterium entering through a pre-existing cut, bite, or scratch. Early (primary and secondary) yaws lesions have a higher bacterial load and thus are more infectious. Both papillomas and ulcers are infectious. Infectivity is thought to last 12–18 months after infection, longer if a relapse occurs. Early yaws lesions are often itchy, and more lesions may form along lines that are scratched. Yaws may be evolving to produce less conspicuous lesions. Yaws is most common among children, who spread it by playing together. It is not thought to be transmitted from mother to child in the womb. Yaws is not a venereal disease. T. pallidum pertenue has been identified in nonhuman primates (baboons, chimpanzees, and gorillas), and experimental inoculation of human beings with a simian isolate causes yaws-like disease. However, no evidence exists of cross-transmission between human beings and primates; more research is needed to discount the possibility of a yaws animal reservoir in nonhuman primates. Diagnosis Most often the diagnosis is made clinically. Dark field microscopy of samples taken from early lesions (particularly ulcerative lesions) may show the responsible bacteria; the spirochaetes are only 0.3 µm wide by 6–20 µm long, so light-field microscopy does not suffice. A microscopic examination of a biopsy of a yaw may show skin with clear epidermal hyperplasia (a type of skin thickening) and papillomatosis (a type of surface irregularity), often with focal spongiosis (an accumulation of fluid in a specific part of the epidermis). Immune system cells, neutrophils and plasma cells, accumulate in the skin, in densities that may cause microabscesses. Warthin–Starry or Levaditi silver stains selectively stain T. pallidum, and direct and indirect immunofluorescence and immunoperoxidase tests can detect polyclonal antibodies to T. pallidum. 
Histology often shows some spatial features which distinguish yaws from syphilis (syphilis is more likely to be found in the dermis, not the epidermis, and shows more endothelial cell proliferation and vascular obliteration). Blood-serum (serological) tests are increasingly done at the point of care. They include a growing range of treponemal and nontreponemal assays. Treponemal tests are more specific, and are positive for anyone who has ever been infected with yaws; they include the Treponema pallidum particle agglutination assay. Nontreponemal assays can be used to indicate the progress of an infection and a cure, and positive results weaken and may become negative after recovery, especially after a case treated early. They include the venereal disease research laboratory (VDRL; requires microscopy) and rapid plasma reagin (RPR; naked-eye result) tests, both of which flocculate patient-derived antibodies with antigens. Serological tests cannot distinguish yaws from the closely related syphilis; no test distinguishing yaws from syphilis is widely available. The two genomes differ by about 0.2%. PCR and DNA sequencing can distinguish the two. There are also no common blood tests which distinguish among the four treponematoses: syphilis (Treponema pallidum pallidum), yaws (Treponema pallidum pertenue), bejel (Treponema pallidum endemicum), and pinta (Treponema carateum). Haemophilus ducreyi infections can cause skin conditions that mimic primary yaws. People with Haemophilus ducreyi lesions may or may not also have latent yaws, and thus may or may not test positive on serological tests. This was discovered in the mid-2010s. It seems that a recently diverged strain of Haemophilus ducreyi has evolved from being a sexually transmitted infection to being a skin ulcer pathogen that looks like yaws. Yaws has been reported in nonendemic countries. 
Treatment Treatment is normally by a single intramuscular injection of long-acting benzathine benzylpenicillin, or less commonly by a course of other antibiotics, such as azithromycin or tetracycline tablets. Penicillin has been the front-line treatment since at least the 1960s, and there is no solid evidence of the evolution of penicillin resistance in yaws. Benzathine benzylpenicillin requires a cold chain and staff who can inject it, and there is a small risk of anaphylaxis. It was also not reliably available during the 2010s; there have been supply shortages. In the 2010s, a single oral dose of azithromycin was shown to be as effective as intramuscular penicillin. Unlike with penicillin, there is strong evidence that yaws is evolving antibiotic resistance to azithromycin; there are two known mutations in the bacterium, each of which can cause resistance and make the treatment ineffective. This has threatened eradication efforts. Within 8–10 hours of penicillin treatment, bacteria can no longer be found in lesion biopsies. Primary and secondary lesions usually heal in 2–4 weeks; bone pain may improve within two days. If treated early enough, bone deformities may reverse and heal. Primary- and secondary-stage lesions may heal completely, but the destructive changes of tertiary yaws are largely irreversible. If lesions do not heal, or RPR test results do not improve, this may indicate treatment failure or re-infection; the treatment is typically repeated. WHO guidelines say that any presumed treatment failure at 4 weeks requires macrolide resistance testing. Epidemiology Because T. pallidum pertenue is temperature- and humidity-dependent, yaws is found in humid tropical forest regions in South America, Africa, Asia, and Oceania. About three quarters of people affected are children under 15 years of age, with the greatest incidence in children 6–10 years old. 
Therefore, children are the main reservoir of infection. It is more common in remote areas, where access to treatment is poorer. It is associated with poverty and poor sanitation facilities and personal hygiene. Worldwide, almost 85% of yaws cases are in Ghana, Papua New Guinea, and the Solomon Islands. Rates in sub-Saharan Africa are low, but tend to be concentrated in specific populations. As of 2015, it is estimated that about 89 million people live in yaws-endemic areas, but data are poor, and this is likely an over-estimate. In the early 1900s, yaws was very common; in sub-Saharan Africa, it was more frequently treated than malaria, sometimes making up more than half of treatments. Mass treatment campaigns in the 1950s reduced the worldwide prevalence from 50 to 150 million to fewer than 2.5 million; however, during the 1970s there were outbreaks in South-East Asia, and there have been continued sporadic cases in South America. As of 2011, it was unclear how many people worldwide were currently infected. From 2008 to 2012, 13 countries reported over 300,000 new cases to the WHO. There was no system for certifying local elimination of yaws, and it is not known whether the lack of reports from some countries is because they stopped having yaws cases or because they stopped reporting them. In over three-quarters of countries with a history of yaws, it is estimated that, without an active surveillance programme, there is less than a 1-in-2 chance that the country will successfully report yaws cases if they occur. These countries are thought to need international assistance to mount effective surveillance. Generally, yaws is not a notifiable disease. History Examination of remains of Homo erectus from Kenya that are about 1.6 million years old has revealed signs typical of yaws. The genetic analysis of the yaws causative bacteria—Treponema pallidum pertenue—has led to the conclusion that yaws is the most ancient of the four known Treponema diseases. 
All other Treponema pallidum subspecies probably evolved from Treponema pallidum pertenue. Yaws is believed to have originated in tropical areas of Africa and spread to other tropical areas of the world via immigration and the slave trade. The latter is likely the way it was introduced to Europe from Africa in the 15th century. The first unambiguous description of yaws was made by the Dutch physician Willem Piso. Yaws was clearly described in 1679 among African slaves by Thomas Sydenham in his epistle on venereal diseases, although he thought that it was the same disease as syphilis. The causative agent of yaws was discovered in 1905 by Aldo Castellani in ulcers of patients from Ceylon. The current English name is believed to be of Carib origin, from "yaya", meaning sore. Towards the end of the Second World War, yaws became widespread in the north of Malaya under Japanese occupation. After the country was liberated, the population was treated for yaws by injections of arsenic, of which there was a great shortage, so only those with stage 1 disease were treated. Eradication A series of WHO yaws control efforts, which began shortly after the creation of the WHO in 1948, succeeded in eradicating the disease locally from many countries, but did not last long enough to eradicate it globally. The Global Control of Treponematoses (TCP) programme by the WHO and UNICEF was launched in 1952 and continued until 1964. A 1953 questionnaire-based estimate was that there were 50–150 million yaws cases in 90 countries. The programme reduced the global prevalence of yaws and the other endemic treponematoses, bejel and pinta, between 1952 and 1964 from about 50 million cases to about 2.5 million (a 95% reduction). 
However, "premature integration of yaws and other endemic treponematoses activities into weak primary health-care systems, and the dismantling of the vertical eradication programmes after 1964, led to the failure to finish with the remaining 5% of cases" and led to a resurgence of yaws in the 1970s, with the largest number of cases found in the Western Africa region. Following the cessation of this program, resources, attention, and commitment for yaws gradually disappeared, and yaws remained at a low prevalence in parts of Asia, Africa, and the Americas with sporadic outbreaks. With few cases, mainly affecting poor, remote communities with little access to treatment, yaws became poorly known, yaws knowledge and skills died out even among health professionals, and yaws eradication was not seen as a high priority. Although a single, widely available injection of long-acting penicillin or another beta-lactam antibiotic cures the disease, and the disease is highly localised, many eradication campaigns ended in complacency and neglect; even in areas where transmission was successfully interrupted, re-introduction from infected areas occurred. Yaws eradication remained a priority in south-east Asia. In 1995, the WHO estimated 460,000 worldwide cases. In the Philippines, yaws stopped being listed as a notifiable disease in 1973; as of 2020, it is still present in the country. India implemented a successful yaws eradication campaign that resulted in the 2016 certification by the WHO that India was free of yaws. In 1996 there were 3,571 yaws cases in India; in 1997, after a serious elimination effort began, the number of cases fell to 735. By 2003 the number of cases was 46. The last clinical case in India was reported in 2003 and the last latent case in 2006; certification by the WHO was achieved in 2016. In 2012, the WHO officially targeted yaws for eradication by 2020 following the development of orally administered azithromycin as a treatment, but missed that target. 
The Morges approach (named after Morges, Switzerland, where a meeting on it was held) involved mass treatment with azithromycin. This was safe, but ran into problems with antibiotic resistance and did not fully interrupt transmission. The discovery that the oral antibiotic azithromycin can be used instead of the previous standard, injected penicillin, was tested on Lihir Island from 2013 to 2014; a single oral dose of the macrolide antibiotic reduced disease prevalence from 2.4% to 0.3% at 12 months. The WHO now recommends both treatment courses (oral azithromycin and injected penicillin), with oral azithromycin being the preferred treatment. As of 2020, there were 15 countries known to be endemic for yaws, with the recent discovery of endemic transmission in Liberia and the Philippines. In 2020, 82,564 cases of yaws were reported to the WHO and 153 cases were confirmed. The majority of cases are reported from Papua New Guinea; in the 2010–2013 period, over 80% of all cases came from three countries: Papua New Guinea, the Solomon Islands, and Ghana. A WHO meeting report in 2018 estimated the total cost of elimination to be US$175 million (excluding Indonesia). In the South-East Asian Regional Office of the WHO, regional eradication efforts are focused on the remaining endemic countries in this region (Indonesia and East Timor) after India was declared free of yaws in 2016. Although yaws is highly localized and eradication may be feasible, humans may not be the only reservoir of infection. References External links "Treponema pallidum subsp. pertenue". NCBI Taxonomy Browser.
You act as a bridge between medical jargon and everyday language. Explain medical terms in a manner that is comprehensive yet approachable for non-experts.
I'm trying to expand my medical knowledge. Can you elucidate the term 'Brain tumor'?
A brain tumor occurs when abnormal cells form within the brain. There are two main types of tumors: malignant tumors and benign (non-cancerous) tumors. These can be further classified as primary tumors, which start within the brain, and secondary tumors, which most commonly have spread from tumors located outside the brain, known as brain metastasis tumors. All types of brain tumors may produce symptoms that vary depending on the size of the tumor and the part of the brain that is involved. Where symptoms exist, they may include headaches, seizures, problems with vision, vomiting, and mental changes. Other symptoms may include difficulty walking, speaking, or feeling sensations, or unconsciousness. The cause of most brain tumors is unknown. Uncommon risk factors include exposure to vinyl chloride, Epstein–Barr virus, ionizing radiation, and inherited syndromes such as neurofibromatosis, tuberous sclerosis, and von Hippel–Lindau disease. Studies on mobile phone exposure have not shown a clear risk. The most common types of primary tumors in adults are meningiomas (usually benign) and astrocytomas such as glioblastomas. In children, the most common type is a malignant medulloblastoma. Diagnosis is usually by medical examination along with computed tomography (CT) or magnetic resonance imaging (MRI). The result is then often confirmed by a biopsy. Based on the findings, the tumors are divided into different grades of severity. Treatment may include some combination of surgery, radiation therapy, and chemotherapy. Since the brain is the body's only irreplaceable organ, surgery carries a risk of the tumor returning. If seizures occur, anticonvulsant medication may be needed. Dexamethasone and furosemide are medications that may be used to decrease swelling around the tumor. Some tumors grow gradually, requiring only monitoring and possibly needing no further intervention. Treatments that use a person's immune system are being studied. 
Outcomes for malignant tumors vary considerably depending on the type of tumor and how far it has spread at diagnosis. Although benign tumors only grow in one area, they may still be life-threatening depending on their size and location. Malignant glioblastomas usually have very poor outcomes, while benign meningiomas usually have good outcomes. The average five-year survival rate for all (malignant) brain cancers in the United States is 33%. Secondary, or metastatic, brain tumors are about four times as common as primary brain tumors, with about half of metastases coming from lung cancer. Primary brain tumors occur in around 250,000 people a year globally, and make up less than 2% of cancers. In children younger than 15, brain tumors are second only to acute lymphoblastic leukemia as the most common form of cancer. In New South Wales, Australia, in 2005, the average lifetime economic cost of a case of brain cancer was AU$1.9 million, the greatest of any type of cancer. Signs and symptoms The signs and symptoms of brain tumors are broad. People may experience symptoms regardless of whether the tumor is benign (not cancerous) or cancerous. Primary and secondary brain tumors present with similar symptoms, depending on the location, size, and rate of growth of the tumor. For example, larger tumors in the frontal lobe can cause changes in the ability to think. However, a smaller tumor in an area such as Wernicke's area (a small area responsible for language comprehension) can result in a greater loss of function. Headaches Headaches as a result of raised intracranial pressure can be an early symptom of brain cancer. However, isolated headache without other symptoms is rare, and other symptoms, including visual abnormalities, may occur before headaches become common. Certain warning signs for headache exist which make the headache more likely to be associated with brain cancer. 
These are, as defined by the American Academy of Neurology: "abnormal neurological examination, headache worsened by Valsalva maneuver, headache causing awakening from sleep, new headache in the older population, progressively worsening headache, atypical headache features, or patients who do not fulfill the strict definition of migraine". Other associated signs are headaches that are worse in the morning or that subside after vomiting. Location-specific symptoms The brain is divided into lobes, and each lobe or area has its own function. A tumor in any of these lobes may affect the area's performance. The symptoms experienced are often linked to the location of the tumor, but each person may experience something different. Frontal lobe: Tumors may contribute to poor reasoning, inappropriate social behavior, personality changes, poor planning, lower inhibition, and decreased production of speech (Broca's area). Temporal lobe: Tumors in this lobe may contribute to poor memory, loss of hearing, and difficulty in language comprehension (Wernicke's area is located in this lobe). Parietal lobe: Tumors here may result in poor interpretation of languages, difficulty with speaking, writing, drawing, naming, and recognizing, and poor spatial and visual perception. Occipital lobe: Damage to this lobe may result in poor vision or loss of vision. Cerebellum: Tumors in this area may cause poor balance, muscle movement, and posture. Brain stem: Tumors on the brainstem can cause seizures, endocrine problems, respiratory changes, visual changes, headaches, and partial paralysis. Behavior changes A person's personality may be altered due to the tumor damaging lobes of the brain. Since the frontal, temporal, and parietal lobes control inhibition, emotions, mood, judgement, reasoning, and behavior, a tumor in those regions can cause inappropriate social behavior, temper tantrums, laughing at things which merit no laughter, and even psychological symptoms such as depression and anxiety. 
More research is needed into the effectiveness and safety of medication for depression in people with brain tumors. Personality changes can have damaging effects such as unemployment, unstable relationships, and a lack of control. Cause Epidemiological studies are required to determine risk factors. Aside from exposure to vinyl chloride or ionizing radiation, there are no known environmental factors associated with brain tumors. Mutations and deletions of tumor suppressor genes, such as P53, are thought to be the cause of some forms of brain tumor. Inherited conditions, such as Von Hippel–Lindau disease, tuberous sclerosis, multiple endocrine neoplasia, and neurofibromatosis type 2, carry a high risk for the development of brain tumors. People with celiac disease have a slightly increased risk of developing brain tumors. Smoking has been suggested to increase the risk, but the evidence remains unclear. Although studies have not shown any link between cell phone or mobile phone radiation and the occurrence of brain tumors, the World Health Organization has classified mobile phone radiation on the IARC scale into Group 2B – possibly carcinogenic. The claim that cell phone usage may cause brain cancer is likely based on epidemiological studies which observed a slight increase in glioma risk among heavy users of wireless and cordless phones. When those studies were conducted, GSM (2G) phones were in use. Modern, third-generation (3G) phones emit, on average, about 1% of the energy emitted by those GSM (2G) phones, and therefore the finding of an association between cell phone usage and increased risk of brain cancer is not based upon current phone usage. Pathophysiology Meninges Human brains are surrounded by a system of connective tissue membranes called meninges that separate the brain from the skull. This three-layered covering is composed of (from the outside in) the dura mater, arachnoid mater, and pia mater. 
The arachnoid and pia are physically connected and thus often considered as a single layer, the leptomeninges. Between the arachnoid mater and the pia mater is the subarachnoid space, which contains cerebrospinal fluid (CSF). This fluid circulates in the narrow spaces between cells and through the cavities in the brain called ventricles, to support and protect the brain tissue. Blood vessels enter the central nervous system through the perivascular space above the pia mater. The cells in the blood vessel walls are joined tightly, forming the blood–brain barrier, which protects the brain from toxins that might enter through the blood. Tumors of the meninges are meningiomas and are often benign. Though not technically a tumor of brain tissue, they are often considered brain tumors since they protrude into the space where the brain is, causing symptoms. Since they are usually slow-growing tumors, meningiomas can be quite large by the time symptoms appear. Brain matter The brains of humans and other vertebrates are composed of very soft tissue and have a gelatin-like texture. Living brain tissue has a pink tint on the outside (gray matter) and is nearly completely white on the inside (white matter), with subtle variations in color. The three largest divisions of the brain are: Cerebral cortex Brainstem Cerebellum These areas are composed of two broad classes of cells: neurons and glia. These two types are equally numerous in the brain as a whole, although glial cells outnumber neurons roughly 4 to 1 in the cerebral cortex. Glia come in several types, which perform a number of critical functions, including structural support, metabolic support, insulation, and guidance of development. 
Primary tumors of the glial cells are called gliomas and often are malignant by the time they are diagnosed. The thalamus and hypothalamus are major divisions of the diencephalon, with the pituitary gland and pineal gland attached at the bottom; tumors of the pituitary and pineal gland are often benign. The brainstem lies between the large cerebral cortex and the spinal cord. It is divided into the midbrain, pons, and medulla oblongata. Spinal cord The spinal cord is considered a part of the central nervous system. It is made up of the same cells as the brain: neurons and glial cells. Diagnosis Although there is no specific or singular symptom or sign, the presence of a combination of symptoms and the lack of corresponding indications of other causes can be an indicator for investigation towards the possibility of a brain tumor. Brain tumors have similar characteristics and obstacles, when it comes to diagnosis and therapy, as tumors located elsewhere in the body. However, they create specific issues that follow closely from the properties of the organ they are in. The diagnosis will often start by taking a medical history noting medical antecedents and current symptoms. Clinical and laboratory investigations will serve to exclude infections as the cause of the symptoms. Examinations in this stage may include the eyes, otolaryngological (or ENT), and electrophysiological exams. The use of electroencephalography (EEG) often plays a role in the diagnosis of brain tumors. Brain tumors, when compared to tumors in other areas of the body, pose a challenge for diagnosis. Commonly, radioactive tracers are taken up in large volumes in tumors due to the high activity of tumor cells, allowing for radioactive imaging of the tumor. However, most of the brain is separated from the blood by the blood–brain barrier (BBB), a membrane that exerts strict control over which substances are allowed to pass into the brain. 
Therefore, many tracers that may easily reach tumors in other areas of the body would be unable to reach brain tumors until the BBB was disrupted by the tumor. Disruption of the BBB is well imaged via MRI or CT scan, and is therefore regarded as the main diagnostic indicator for malignant gliomas, meningiomas, and brain metastases. Swelling or obstruction of the passage of cerebrospinal fluid (CSF) from the brain may cause (early) signs of increased intracranial pressure, which translates clinically into headaches, vomiting, or an altered state of consciousness, and in children changes to the diameter of the skull and bulging of the fontanelles. More complex symptoms, such as endocrine dysfunctions, should alert doctors to the possibility of a brain tumor. A bilateral temporal visual field defect (due to compression of the optic chiasm) or dilation of the pupil, and the occurrence of focal neurologic symptoms, whether slowly evolving or of sudden onset, such as cognitive and behavioral impairment (including impaired judgment, memory loss, lack of recognition, spatial orientation disorders), personality or emotional changes, hemiparesis, hypoesthesia, aphasia, ataxia, visual field impairment, impaired sense of smell, impaired hearing, facial paralysis, double vision, or more severe symptoms such as tremors, paralysis on one side of the body (hemiplegia), or (epileptic) seizures in a patient with a negative history of epilepsy, should raise the possibility of a brain tumor. Imaging Medical imaging plays a central role in the diagnosis of brain tumors. Early imaging methods – invasive and sometimes dangerous – such as pneumoencephalography and cerebral angiography have been abandoned in favor of non-invasive, high-resolution techniques, especially magnetic resonance imaging (MRI) and computed tomography (CT) scans, though MRI is typically the reference standard used. 
Neoplasms will often show as differently colored masses (also referred to as processes) in CT or MRI results. Benign brain tumors often show up as hypodense (darker than brain tissue) mass lesions on CT scans. On MRI, they appear either hypointense or isointense (the same intensity as brain tissue) on T1-weighted scans, or hyperintense (brighter than brain tissue) on T2-weighted MRI, although the appearance is variable. Contrast agent uptake, sometimes in characteristic patterns, can be demonstrated on either CT or MRI scans in most malignant primary and metastatic brain tumors. Pressure areas where the brain tissue has been compressed by a tumor also appear hyperintense on T2-weighted scans and might indicate the presence of a diffuse neoplasm due to an unclear outline. Swelling around the tumor, known as peritumoral edema, can also show a similar result. This is because these tumors disrupt the normal functioning of the BBB and lead to an increase in its permeability. More recently, advancements have been made to increase the utility of MRI in providing physiological data that can help to inform diagnosis and prognosis. Perfusion-weighted imaging (PWI) and diffusion-weighted imaging (DWI) are two MRI techniques that reviews have shown to be useful in classifying tumors by grade, which was not previously viable using only structural imaging. However, these techniques cannot alone distinguish high-grade from low-grade gliomas, and thus the definitive diagnosis of a brain tumor should only be confirmed by histological examination of tumor tissue samples obtained either by means of brain biopsy or open surgery. The histological examination is essential for determining the appropriate treatment and the correct prognosis. 
This examination, performed by a pathologist, typically has three stages: intraoperative examination of fresh tissue, preliminary microscopic examination of prepared tissues, and follow-up examination of prepared tissues after immunohistochemical staining or genetic analysis. Pathology Tumors have characteristics that allow determination of their malignancy and of how they will evolve, and identifying these characteristics allows the medical team to determine the management plan. Anaplasia or dedifferentiation: loss of differentiation of cells and of their orientation to one another and to blood vessels, a characteristic of anaplastic tumor tissue. Anaplastic cells have lost total control of their normal functions and many have deteriorated cell structures. Anaplastic cells often have abnormally high nuclear-to-cytoplasmic ratios, and many are multinucleated. Additionally, the nucleus of anaplastic cells is usually unnaturally shaped or oversized. Cells can become anaplastic in two ways: neoplastic tumor cells can dedifferentiate and become anaplastic (the dedifferentiation causes the cells to lose all of their normal structure and function), or cancer stem cells can increase their capacity to multiply (i.e., uncontrollable growth due to failure of differentiation). Atypia: an indication of abnormality of a cell (which may be indicative of malignancy). The significance of the abnormality is highly dependent on context. Neoplasia: the (uncontrolled) division of cells. In itself, neoplasia is not problematic, but its consequences are: the uncontrolled division of cells means that the mass of a neoplasm increases in size, and in a confined space such as the intracranial cavity this quickly becomes problematic, because the mass invades the space of the brain and pushes it aside, leading to compression of the brain tissue, increased intracranial pressure, and destruction of brain parenchyma. 
Increased intracranial pressure (ICP) may be attributable to the direct mass effect of the tumor, increased blood volume, or increased cerebrospinal fluid (CSF) volume, which may, in turn, have secondary symptoms. Necrosis: the (premature) death of cells, caused by external factors such as infection, toxins, or trauma. Necrotic cells send the wrong chemical signals, which prevents phagocytes from disposing of the dead cells, leading to a buildup of dead tissue, cell debris, and toxins at or near the site of the necrotic cells. Arterial and venous hypoxia, or the deprivation of adequate oxygen supply to certain areas of the brain, occurs when a tumor makes use of nearby blood vessels for its supply of blood and the neoplasm enters into competition for nutrients with the surrounding brain tissue. More generally, a neoplasm may cause release of metabolic end products (e.g., free radicals, altered electrolytes, neurotransmitters), and release and recruitment of cellular mediators (e.g., cytokines) that disrupt normal parenchymal function. Classification Tumors can be benign or malignant, can occur in different parts of the brain, and may be classified as primary or secondary. A primary tumor is one that has started in the brain, as opposed to a metastatic tumor, which has spread to the brain from another area of the body. The incidence of metastatic tumors is approximately four times that of primary tumors. Tumors may or may not be symptomatic: some tumors are discovered because the patient has symptoms, while others show up incidentally on an imaging scan or at autopsy. Grading of tumors of the central nervous system commonly occurs on a 4-point scale (I-IV) created by the World Health Organization in 1993. Grade I tumors are the least severe and are commonly associated with long-term survival, with severity and prognosis worsening as the grade increases. Low-grade tumors are often benign, while higher grades are aggressively malignant and/or metastatic. 
Other grading scales do exist, many based upon the same criteria as the WHO scale and graded from I-IV. Primary The most common primary brain tumors are: Gliomas (50.4%), Meningiomas (20.8%), Pituitary adenomas (15%), and Nerve sheath tumors (10%). These common tumors can also be organized according to tissue of origin. Secondary Secondary tumors of the brain are metastatic and have invaded the brain from cancers originating in other organs. This means that a cancerous neoplasm has developed in another organ elsewhere in the body and that cancer cells have leaked from that primary tumor and entered the lymphatic system and blood vessels. They then circulate through the bloodstream and are deposited in the brain. There, these cells continue growing and dividing, becoming another invasive neoplasm of the primary cancer's tissue. Secondary tumors of the brain are very common in the terminal phases of patients with an incurable metastasized cancer; the most common types of cancer that bring about secondary tumors of the brain are lung cancer, breast cancer, malignant melanoma, kidney cancer, and colon cancer (in decreasing order of frequency). Secondary brain tumors are more common than primary ones; in the United States, there are about 170,000 new cases every year, and they are the most common type of tumor in the intracranial cavity. The skull bone structure can also be subject to a neoplasm that by its very nature reduces the volume of the intracranial cavity and can damage the brain. By behavior Brain tumors or intracranial neoplasms can be cancerous (malignant) or non-cancerous (benign). However, the definitions of malignant and benign neoplasms differ from those commonly used for other types of cancerous or non-cancerous neoplasms in the body. In cancers elsewhere in the body, three malignant properties differentiate benign tumors from malignant forms of cancer: benign tumors are self-limited and do not invade or metastasize. 
Characteristics of malignant tumors include: uncontrolled mitosis (growth by division beyond the normal limits); anaplasia: the cells in the neoplasm have an obviously different form (in size and shape). Anaplastic cells display marked pleomorphism. The cell nuclei are characteristically extremely hyperchromatic (darkly stained) and enlarged; the nucleus might reach the same size as the cytoplasm of the cell (the nuclear-to-cytoplasmic ratio may approach 1:1, instead of the normal 1:4 or 1:6). Giant cells – considerably larger than their neighbors – may form and possess either one enormous nucleus or several nuclei (syncytia). Anaplastic nuclei are variable and bizarre in size and shape; invasion or infiltration (medical literature uses these terms as synonymous equivalents; for clarity, they are treated here as meaning slightly different things): Invasion or invasiveness is the spatial expansion of the tumor through uncontrolled mitosis, in the sense that the neoplasm invades the space occupied by adjacent tissue, thereby pushing the other tissue aside and eventually compressing it. Such tumors often appear clearly outlined on imaging. Infiltration is the behavior of the tumor either to grow (microscopic) tentacles that push into the surrounding tissue (often making the outline of the tumor undefined or diffuse) or to have tumor cells "seeded" into the tissue beyond the circumference of the tumorous mass; this does not mean that an infiltrative tumor does not take up space or does not compress the surrounding tissue as it grows, but an infiltrating neoplasm makes it difficult to say where the tumor ends and the healthy tissue starts. 
metastasis (spread to other locations in the body via lymph or blood). Of the above malignant characteristics, some elements do not apply to primary neoplasms of the brain: Primary brain tumors rarely metastasize to other organs; some forms of primary brain tumors can metastasize but will not spread outside the intracranial cavity or the central spinal canal. Due to the BBB, cancerous cells of a primary neoplasm cannot enter the bloodstream and be carried to another location in the body. (Occasional isolated case reports suggest spread of certain brain tumors outside the central nervous system, e.g. bone metastasis of glioblastoma multiforme.) Primary brain tumors generally are invasive (i.e. they will expand spatially and intrude into the space occupied by other brain tissue and compress those brain tissues); however, some of the more malignant primary brain tumors will also infiltrate the surrounding tissue. By genetics In 2016, the WHO restructured its classification of some categories of gliomas to include distinct genetic mutations that have been useful in differentiating tumor types, prognoses, and treatment responses. Genetic mutations are typically detected via immunohistochemistry, a technique that visualizes the presence or absence of a targeted protein via staining. 
Mutations in the IDH1 and IDH2 genes are commonly found in low-grade gliomas. Loss of both IDH genes combined with loss of chromosome arms 1p and 19q indicates the tumor is an oligodendroglioma. Loss of TP53 and ATRX characterizes astrocytomas. The genes EGFR, TERT, and PTEN are commonly altered in gliomas and are useful in differentiating tumor grade and biology. Specific types Anaplastic astrocytoma, Anaplastic oligodendroglioma, Astrocytoma, Central neurocytoma, Choroid plexus carcinoma, Choroid plexus papilloma, Choroid plexus tumor, Colloid cyst, Dysembryoplastic neuroepithelial tumour, Ependymal tumor, Fibrillary astrocytoma, Giant-cell glioblastoma, Glioblastoma multiforme, Gliomatosis cerebri, Gliosarcoma, Hemangiopericytoma, Medulloblastoma, Medulloepithelioma, Meningeal carcinomatosis, Neuroblastoma, Neurocytoma, Oligoastrocytoma, Oligodendroglioma, Optic nerve sheath meningioma, Pediatric ependymoma, Pilocytic astrocytoma, Pinealoblastoma, Pineocytoma, Pleomorphic anaplastic neuroblastoma, Pleomorphic xanthoastrocytoma, Primary central nervous system lymphoma, Sphenoid wing meningioma, Subependymal giant cell astrocytoma, Subependymoma, Trilateral retinoblastoma. Treatment A medical team generally assesses the treatment options and presents them to the person affected and their family. Various types of treatment are available depending on tumor type and location, and they may be combined to give the best chance of survival: Surgery: complete or partial resection of the tumor, with the objective of removing as many tumor cells as possible. Radiotherapy: the most commonly used treatment for brain tumors; the tumor is irradiated with beta rays, X-rays, or gamma rays. Chemotherapy: a treatment option for cancer; however, it is not always used to treat brain tumors, as the blood-brain barrier can prevent some drugs from reaching the cancerous cells. 
A variety of experimental therapies are available through clinical trials. Survival rates in primary brain tumors depend on the type of tumor, age, functional status of the patient, the extent of surgical removal, and other factors specific to each case. Standard care for anaplastic oligodendrogliomas and anaplastic oligoastrocytomas is surgery followed by radiotherapy. One study found a survival benefit for the addition of chemotherapy to radiotherapy after surgery, compared with radiotherapy alone. Surgery The primary and most desired course of action described in the medical literature is surgical removal (resection) via craniotomy. Minimally invasive techniques are becoming the dominant trend in neurosurgical oncology. The main objective of surgery is to remove as many tumor cells as possible, with complete removal being the best outcome and cytoreduction ("debulking") of the tumor otherwise. A Gross Total Resection (GTR) occurs when all visible signs of the tumor are removed, and subsequent scans show no apparent tumor. In some cases access to the tumor is impossible, which impedes or prohibits surgery. Many meningiomas, with the exception of some tumors located at the skull base, can be successfully removed surgically. Most pituitary adenomas can be removed surgically, often using a minimally invasive approach through the nasal cavity and skull base (trans-nasal, trans-sphenoidal approach). Large pituitary adenomas require a craniotomy (opening of the skull) for their removal. Radiotherapy, including stereotactic approaches, is reserved for inoperable cases. Several current research studies aim to improve the surgical removal of brain tumors by labeling tumor cells with 5-aminolevulinic acid, which causes them to fluoresce. 
Postoperative radiotherapy and chemotherapy are integral parts of the therapeutic standard for malignant tumors. Multiple metastatic tumors are generally treated with radiotherapy and chemotherapy rather than surgery, and the prognosis in such cases is determined by the primary tumor and is generally poor. Radiation therapy The goal of radiation therapy is to kill tumor cells while leaving normal brain tissue unharmed. In standard external beam radiation therapy, multiple treatments of standard-dose "fractions" of radiation are applied to the brain. This process is repeated for a total of 10 to 30 treatments, depending on the type of tumor. This additional treatment provides some patients with improved outcomes and longer survival rates. Radiosurgery is a treatment method that uses computerized calculations to focus radiation at the site of the tumor while minimizing the radiation dose to the surrounding brain. Radiosurgery may be an adjunct to other treatments, or it may represent the primary treatment technique for some tumors. Forms used include stereotactic radiosurgery, such as Gamma Knife, CyberKnife, or Novalis Tx radiosurgery. Radiotherapy is the most common treatment for secondary brain tumors. The amount of radiotherapy depends on the size of the area of the brain affected by cancer. Conventional external beam "whole-brain radiotherapy treatment" (WBRT), or "whole-brain irradiation", may be suggested if there is a risk that other secondary tumors will develop in the future. Stereotactic radiotherapy is usually recommended in cases involving fewer than three small secondary brain tumors. Radiotherapy may be used following, or in some cases in place of, resection of the tumor. Forms of radiotherapy used for brain cancer include external beam radiation therapy (the most common), brachytherapy, and proton therapy, the last used especially for children. 
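As a purely arithmetic illustration of the fractionation scheme described above, the sketch below splits a total prescribed dose evenly across a course of 10 to 30 fractions. The 60 Gy figure is an assumed example value for illustration only, not a figure from this text or a clinical recommendation.

```python
# Illustrative sketch of dose fractionation arithmetic.
# The 60 Gy total dose is a hypothetical example value.

def dose_per_fraction(total_dose_gy: float, n_fractions: int) -> float:
    """Split a total prescribed radiation dose evenly across fractions."""
    if n_fractions <= 0:
        raise ValueError("fraction count must be positive")
    return total_dose_gy / n_fractions

# A course of 10 to 30 treatments, as described in the text:
for n in (10, 20, 30):
    print(n, dose_per_fraction(60.0, n))  # e.g. 30 fractions -> 2.0 Gy each
```

In practice the dose per fraction is chosen clinically, not derived this way; the sketch only shows how total dose, fraction count, and per-fraction dose relate.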
People who receive stereotactic radiosurgery (SRS) and whole-brain radiation therapy (WBRT) for the treatment of metastatic brain tumors have more than twice the risk of developing learning and memory problems compared with those treated with SRS alone. A 2021 systematic review found that, when SRS was used as the initial treatment, survival and death related to brain metastasis did not differ between SRS alone and SRS combined with WBRT. Postoperative conventional daily radiotherapy improves survival for adults with good functional well-being and high-grade glioma, compared to no postoperative radiotherapy. Hypofractionated radiation therapy has similar efficacy for survival as conventional radiotherapy, particularly for individuals aged 60 and older with glioblastoma. Chemotherapy Patients undergoing chemotherapy are administered drugs designed to kill tumor cells. Although chemotherapy may improve overall survival in patients with the most malignant primary brain tumors, it does so in only about 20 percent of patients. Chemotherapy is often used in young children instead of radiation, as radiation may have negative effects on the developing brain. The decision to prescribe this treatment is based on a patient's overall health, type of tumor, and extent of cancer. The toxicity and many side effects of the drugs, and the uncertain outcome of chemotherapy in brain tumors, put this treatment further down the list of treatment options, with surgery and radiation therapy preferred. UCLA Neuro-Oncology publishes real-time survival data for patients with a diagnosis of glioblastoma multiforme. It is the only institution in the United States that displays how brain tumor patients are performing on current therapies. It also shows a listing of chemotherapy agents used to treat high-grade glioma tumors. Genetic mutations have significant effects on the effectiveness of chemotherapy. 
Gliomas with IDH1 or IDH2 mutations respond better to chemotherapy than those without the mutation. Loss of chromosome arms 1p and 19q also indicates a better response to chemoradiation. Other A shunt may be used to relieve symptoms caused by intracranial pressure, by reducing the build-up of fluid (hydrocephalus) caused by blockage of the free flow of cerebrospinal fluid. Prognosis The prognosis of brain cancer depends on the type of cancer diagnosed. Medulloblastoma has a good prognosis with chemotherapy, radiotherapy, and surgical resection, while glioblastoma multiforme has a median survival of only 12 months even with aggressive chemoradiotherapy and surgery. Brainstem gliomas have the poorest prognosis of any form of brain cancer, with most patients dying within one year, even with therapy that typically consists of radiation to the tumor along with corticosteroids. However, one type, focal brainstem glioma in children, appears to have an exceptionally good prognosis, and long-term survival has frequently been reported. Prognosis is also affected by the presence of genetic mutations. Certain mutations provide a better prognosis than others. IDH1 and IDH2 mutations in gliomas, as well as deletion of chromosome arms 1p and 19q, generally indicate a better prognosis. TP53, ATRX, EGFR, PTEN, and TERT mutations are also useful in determining prognosis. Glioblastoma multiforme Glioblastoma multiforme (GBM) is the most aggressive (grade IV) and most common form of malignant brain tumor. Even when aggressive multimodality therapy consisting of radiotherapy, chemotherapy, and surgical excision is used, median survival is only 12–17 months. Standard therapy for glioblastoma multiforme consists of maximal surgical resection of the tumor, followed by radiotherapy between two and four weeks after the surgical procedure to remove the cancer, then by chemotherapy, such as temozolomide. 
Most patients with glioblastoma take a corticosteroid, typically dexamethasone, during their illness to relieve symptoms. Experimental treatments include targeted therapy, gamma knife radiosurgery, boron neutron capture therapy, gene therapy, and chemotherapy wafer implants. Oligodendrogliomas Oligodendrogliomas are incurable but slowly progressive malignant brain tumors. They can be treated with surgical resection, chemotherapy, radiotherapy, or a combination. For some suspected low-grade (grade II) tumors, only a course of watchful waiting and symptomatic therapy is opted for. These tumors show a high frequency of co-deletion of the short arm of chromosome 1 and the long arm of chromosome 19 (1p/19q co-deletion) and have been found to be especially chemosensitive, with one report claiming them to be one of the most chemosensitive tumors. A median survival of up to 16.7 years has been reported for grade II oligodendrogliomas. Acoustic neuroma Acoustic neuromas are non-cancerous tumors. They can be treated with surgery, radiation therapy, or observation. Early intervention with surgery or radiation is recommended to prevent progressive hearing loss. Epidemiology Figures for the incidence of cancers of the brain show a significant difference between more- and less-developed countries (the less-developed countries have lower incidences of tumors of the brain). This could be explained by undiagnosed tumor-related deaths (patients in extremely poor situations do not get diagnosed, simply because they do not have access to the modern diagnostic facilities required to diagnose a brain tumor) and by deaths caused by other poverty-related causes that end a patient's life before tumors develop or become life-threatening. Nevertheless, statistics suggest that certain forms of primary brain tumors are more common among certain populations. The incidence of low-grade astrocytoma has not been shown to vary significantly with nationality. 
However, studies examining the incidence of malignant central nervous system (CNS) tumors have shown some variation with national origin. Since some high-grade lesions arise from low-grade tumors, these trends are worth mentioning. Specifically, the incidence of CNS tumors in the United States, Israel, and the Nordic countries is relatively high, while Japan and other Asian countries have a lower incidence. These differences probably reflect some biological differences as well as differences in pathologic diagnosis and reporting. Worldwide data on cancer incidence can be found at the WHO (World Health Organization) and are handled by the IARC (International Agency for Research on Cancer), located in France. United States In the United States in 2015, approximately 166,039 people were living with brain or other central nervous system tumors. For 2018, it was projected that there would be 23,880 new cases of brain tumors and 16,830 resulting deaths, accounting for 1.4 percent of all cancers and 2.8 percent of all cancer deaths. The median age at diagnosis was 58 years, while the median age at death was 65. Diagnosis was slightly more common in males, at approximately 7.5 cases per 100,000 people, compared with 5.4 per 100,000 in females. Deaths as a result of brain cancer were 5.3 per 100,000 for males and 3.6 per 100,000 for females, making brain cancer the 10th leading cause of cancer death in the United States. The overall lifetime risk of developing brain cancer is approximately 0.6 percent for men and women. UK Brain, other CNS, or intracranial tumors are the ninth most common cancer in the UK (around 10,600 people were diagnosed in 2013), and the eighth most common cause of cancer death (around 5,200 people died in 2012). White British patients with brain tumours are 30% more likely to die within a year of diagnosis than patients from other ethnicities. The reason for this is unknown. 
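As an arithmetic aside, rates quoted per 100,000 people convert to expected case counts by simple proportion. The sketch below illustrates the conversion; the population figure is an assumed round number used only for illustration and does not come from the text.

```python
# Illustrative only: converts a per-100,000 incidence rate into an expected
# annual case count. The population value below is an assumed round figure,
# not a statistic from the text.

def expected_cases(rate_per_100k: float, population: int) -> float:
    """Expected annual cases given a rate per 100,000 people."""
    return rate_per_100k * population / 100_000

assumed_population = 160_000_000  # hypothetical example population
print(expected_cases(7.5, assumed_population))  # 12000.0
```

The same proportion also runs in reverse: dividing an observed case count by the population and multiplying by 100,000 recovers the rate.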
Children In the United States, more than 28,000 people under 20 are estimated to have a brain tumor. About 3,720 new cases of brain tumors are expected to be diagnosed in those under 15 in 2019. Higher rates were reported in 1985–1994 than in 1975–1983. There is some debate as to the reasons; one theory is that the trend is the result of improved diagnosis and reporting, since the jump occurred at the same time that MRIs became widely available, and there was no coincident jump in mortality. Central nervous system tumors make up 20–25 percent of cancers in children. The average survival rate for all primary brain cancers in children is 74%. Brain cancers are the most common cancer in children under 19 and result in more deaths in this group than leukemia. Younger people do less well. The most common brain tumor types in children (0-14) are: pilocytic astrocytoma, malignant glioma, medulloblastoma, neuronal and mixed neuronal-glial tumors, and ependymoma. In children under 2, about 70% of brain tumors are medulloblastomas, ependymomas, and low-grade gliomas. Less commonly, and usually seen in infants, are teratomas and atypical teratoid rhabdoid tumors. Germ cell tumors, including teratomas, make up just 3% of pediatric primary brain tumors, but the worldwide incidence varies significantly. In the UK, 429 children aged 14 and under are diagnosed with a brain tumour on average each year, and 563 children and young people under the age of 19 are diagnosed. Research Immunotherapy Cancer immunotherapy is being actively studied. For malignant gliomas, no therapy has been shown to improve life expectancy as of 2015. Vesicular stomatitis virus In 2000, researchers used the vesicular stomatitis virus, or VSV, to infect and kill cancer cells without affecting healthy cells. Retroviral replicating vectors Led by Prof. 
Nori Kasahara, researchers from USC, who are now at UCLA, reported in 2001 the first successful example of using retroviral replicating vectors to transduce cell lines derived from solid tumors. Building on this initial work, the researchers applied the technology to in vivo models of cancer and in 2005 reported a long-term survival benefit in an experimental brain tumor animal model. Subsequently, in preparation for human clinical trials, this technology was further developed by Tocagen (a pharmaceutical company primarily focused on brain cancer treatments) as a combination treatment (Toca 511 & Toca FC). This has been under investigation since 2010 in a Phase I/II clinical trial for the potential treatment of recurrent high-grade glioma, including glioblastoma multiforme (GBM) and anaplastic astrocytoma. No results have yet been published. Non-invasive detection Efforts to detect and monitor the development and treatment response of brain tumors by liquid biopsy from blood, cerebrospinal fluid, or urine are in the early stages of development. See also Brain Tumor Nervous system neoplasm List of brain tumor cases References External links Brain and CNS cancers at Curlie Brain tumour information from Cancer Research UK Neuro-Oncology: Cancer Management Guidelines MedPix Teaching File MR Scans of Primary Brain Lymphoma, etc.
You are an assistant in medical learning. Your role is to clarify medical terminology, ensuring the information is both accurate and accessible.
Could you please explain the term 'Benzatropine' in simple language?
Benzatropine (INN), known as benztropine in the United States and Japan, is a medication used to treat movement disorders like parkinsonism and dystonia, as well as extrapyramidal side effects of antipsychotics, including akathisia. It is not useful for tardive dyskinesia. It is taken by mouth or by injection into a vein or muscle. Benefits are seen within two hours and last for up to ten hours. Common side effects include dry mouth, blurry vision, nausea, and constipation. Serious side effects may include urinary retention, hallucinations, hyperthermia, and poor coordination. It is unclear if use during pregnancy or breastfeeding is safe. Benzatropine is an anticholinergic which works by blocking the activity of the muscarinic acetylcholine receptor. Benzatropine was approved for medical use in the United States in 1954. It is available as a generic medication. In 2019, it was the 221st most commonly prescribed medication in the United States, with more than 2 million prescriptions. It is sold under the brand name Cogentin among others. Medical uses Benzatropine is used to reduce extrapyramidal side effects of antipsychotic treatment. Benzatropine is also a second-line drug for the treatment of Parkinson's disease. It improves tremor, and may alleviate rigidity and bradykinesia. Benzatropine is also sometimes used for the treatment of dystonia, a rare disorder that causes abnormal muscle contraction, resulting in twisting postures of the limbs, trunk, or face. 
Adverse effects These are principally anticholinergic: dry mouth, blurred vision, cognitive changes, drowsiness, constipation, urinary retention, tachycardia, anorexia, and severe delirium and hallucinations (in overdose). While some studies suggest that use of anticholinergics increases the risk of tardive dyskinesia (a long-term side effect of antipsychotics), other studies have found no association between anticholinergic exposure and risk of developing tardive dyskinesia, although symptoms may be worsened. Drugs that decrease cholinergic transmission may impair storage of new information into long-term memory. Anticholinergic agents can also impair time perception. Pharmacology Benzatropine is a centrally acting anticholinergic/antihistamine agent. It is a selective M1 muscarinic acetylcholine receptor antagonist. Benzatropine partially blocks cholinergic activity in the basal ganglia and has also been shown to increase the availability of dopamine by blocking its reuptake and storage in central sites, as a result increasing dopaminergic activity. Animal studies have indicated that the anticholinergic activity of benzatropine is approximately one-half that of atropine, while its antihistamine activity approaches that of mepyramine. Its anticholinergic effects have been established as therapeutically significant in the management of parkinsonism. Benzatropine antagonizes the effect of acetylcholine, decreasing the imbalance between the neurotransmitters acetylcholine and dopamine, which may improve the symptoms of early Parkinson's disease. Benzatropine analogues are atypical dopamine reuptake inhibitors, which might make them useful for people with akathisia secondary to antipsychotic therapy. Benzatropine also acts as a functional inhibitor of acid sphingomyelinase (FIASMA). Benzatropine has also been identified, by a high-throughput screening approach, as a potent differentiating agent for oligodendrocytes, possibly working through M1 and M3 muscarinic receptors. 
In preclinical models of multiple sclerosis, benzatropine decreased clinical symptoms and enhanced remyelination. Other animals In veterinary medicine, benzatropine is used to treat priapism in stallions. Naming Since 1959, benzatropine has been the official international nonproprietary name of the medication under the INN scheme, the medication naming system coordinated by the World Health Organization; it is also the British Approved Name (BAN) given in the British Pharmacopoeia, and has been the official nonproprietary name in Australia since 2015. Regional variations of the "a" spelling are also used in French, Italian, Portuguese, and Spanish, as well as Latin (all medications are assigned a Latin name by the WHO). "Benztropine" is the official United States Adopted Name (USAN), the medication naming system coordinated by the USAN Council, co-sponsored by the American Medical Association (AMA), the United States Pharmacopeial Convention (USP), and the American Pharmacists Association (APhA). It is also the Japanese Accepted Name (JAN) and was used in Australia until 2015, when it was harmonized with the INN. Both names may be modified to account for the methanesulfonate salt as which the medication is formulated: the modified INN (INNm) and BAN (BANM) is benzatropine mesilate, while the modified USAN is benztropine mesylate. The modified JAN is a hybrid form, benztropine mesilate. The misspelling "benzotropine" is also occasionally seen in the literature. See also Gaboxadol Propantheline bromide Glycopyrrolate Oxybutynin References External links "Benzatropine". Drug Information Portal. U.S. National Library of Medicine.
You are a facilitator of medical knowledge. Provide thorough and accessible explanations of medical terms, catering to both specialists and non-specialists.
Could you offer a clear explanation of the term 'Isovaleric acidemia' as used in the medical field?
Isovaleric acidemia is a rare autosomal recessive metabolic disorder which disrupts or prevents normal metabolism of the branched-chain amino acid leucine. It is a classical type of organic acidemia. Symptoms and signs A characteristic feature of isovaleric acidemia is a distinctive odor of sweaty feet. This odor is caused by the buildup of a compound called isovaleric acid in affected individuals. In about half of cases, the signs and symptoms of this disorder become apparent within a few days after birth and include poor feeding, vomiting, seizures, and lack of energy that can progress to coma. These medical problems are typically severe and can be life-threatening. In the other half of cases, the signs and symptoms of the disorder appear during childhood and may come and go over time. They are often triggered by an infection or by eating an increased amount of protein-rich foods. Genetics The disorder has an autosomal recessive inheritance pattern, which means the defective gene is located on an autosome, and two copies of the gene - one from each parent - must be inherited for a person to be affected by the disorder. The parents of a child with an autosomal recessive disorder are carriers of one copy of the defective gene, but are usually not affected by the disorder. Mutations in both copies of the IVD gene result in isovaleric acidemia. Pathophysiology The enzyme encoded by IVD, isovaleryl-CoA dehydrogenase (EC 1.3.99.10), plays an essential role in breaking down proteins from the diet. Specifically, the enzyme is responsible for the third step in processing leucine, an essential amino acid. If a mutation in the IVD gene reduces or eliminates the activity of this enzyme, the body is unable to break down leucine properly. As a result, isovaleric acid and related compounds build up to toxic levels, damaging the brain and nervous system. Diagnosis The urine of newborns can be screened for isovaleric acidemia using mass spectrometry, allowing for early diagnosis. 
Elevations of isovalerylglycine in urine and of isovalerylcarnitine in plasma are found. Screening On 9 May 2014, the UK National Screening Committee (UK NSC) announced its recommendation to screen every newborn baby in the UK for four further genetic disorders as part of its NHS Newborn Blood Spot Screening programme, including isovaleric acidemia. Treatment Treatment consists of dietary protein restriction, particularly of leucine. During acute episodes, glycine is sometimes given, which conjugates with isovalerate to form isovalerylglycine, or carnitine, which has a similar effect. Elevated 3-hydroxyisovaleric acid is a clinical biomarker of biotin deficiency. Without biotin, leucine and isoleucine cannot be metabolized normally, resulting in elevated synthesis of isovaleric acid and consequently 3-hydroxyisovaleric acid, isovalerylglycine, and other isovaleric acid metabolites as well. Elevated serum 3-hydroxyisovaleric acid concentrations can be caused by supplementation with 3-hydroxyisovaleric acid, genetic conditions, or dietary deficiency of biotin. Some patients with isovaleric acidemia may benefit from supplemental biotin. Biotin deficiency on its own can have severe physiological and cognitive consequences that closely resemble the symptoms of organic acidemias. Prognosis A 2011 review of 176 cases found that diagnoses made early in life (within a few days of birth) were associated with more severe disease and a mortality of 33%. Children diagnosed later, who had milder symptoms, showed a lower mortality rate of about 3%. Epidemiology Isovaleric acidemia is estimated to affect at least 1 in 250,000 births in the United States. See also Maple syrup urine disease Methylmalonic acidemia Propionic acidemia References External links Isovaleric acidemia at NLM Genetics Home Reference GeneReviews: The Organic Acidemias