PMC10789006
38221609
Introduction With its extraordinary potency and sustained clinical responses, chimeric antigen receptor (CAR) T-cell therapy represents a breakthrough in the treatment of hematologic cancers. CARs, which are engineered synthetic receptors, enable the redirection of lymphocytes, primarily T cells, to recognize and destroy cells expressing specific target antigens [ 1 ]. The US Food and Drug Administration (FDA) approved anti-CD19 CAR-T cell therapy for B-cell malignancies in 2017 due to its outstanding effectiveness [ 2 ]. Furthermore, novel CAR designs that target alternative cancer antigens and are now undergoing clinical testing are expanding the therapeutic potential of CAR T-cell therapy. CAR-T cell therapy exhibits complex in vivo pharmacokinetics influenced by intrinsic and extrinsic factors such as product phenotype, composition, tumor burden, and prior lymphodepletion treatments. Moreover, the functional activity of CAR-T cells differs between patients, making it challenging to establish dosing regimens with consistent efficacy and manageable toxicity [ 3 ]. Consequently, the clinical application of CAR-T cell therapy is currently restricted to medically fit patients at specialized cancer centers due to the associated acute and chronic side effects. The most prevalent acute toxicity observed in CAR-T cell therapy is cytokine release syndrome (CRS), which is triggered by the release of inflammatory cytokines from CAR-T cells, leading to the production of critical cytokines such as interleukin 6 (IL-6) by innate immune cells [ 4 ]. Current management strategies for CRS include IL-6 neutralization through receptor antagonists and systemic immunosuppression using steroids [ 5 ]. However, the efficacy and safety of these approaches are limited by their high cost [ 6 ] and potential impairment of the infused CAR-T cells' anti-tumor efficacy [ 7 ]. Given these challenges, novel adjunctive therapeutic strategies that are safe, affordable, and easily implemented are urgently needed to improve the clinical outcomes of CAR T-cell therapy. Melatonin, a naturally occurring hormone primarily synthesized by the pineal gland, possesses anti-inflammatory and immunomodulatory properties [ 8 ]. Notably, melatonin has been found to inhibit the production and release of pro-inflammatory cytokines, including IL-6 and interleukin-1β (IL-1β). At least some of these actions are mediated by the melatonin membrane receptors MT1 and MT2. Melatonin also effectively scavenges a wide range of reactive oxygen/nitrogen species (ROS/RNS), including hydroxyl radicals and the commonly overlooked carbonate radical [ 9 ]. Melatonin administration inhibits the secretion of proinflammatory factors in murine models of atherosclerosis and subarachnoid hemorrhage [ 10 ]. Similarly, melatonin reduces the serum concentrations of pro-inflammatory cytokines and oxidative stress markers in patients with relapsing–remitting multiple sclerosis (RRMS) [ 11 ]. Consequently, melatonin has emerged as a potential therapeutic agent for managing cytokine storms, including those associated with CAR T-cell therapies. Furthermore, melatonin exhibits anticancer effects throughout various stages of tumor development, including initiation, promotion, and progression, while preserving the integrity of normal cells [ 12 – 15 ].
Although the precise mechanisms underlying the pro-apoptotic selectivity and efficiency of melatonin in cancer cells are not fully understood, melatonin likely modulates biochemical pathways unique to cancer [ 16 ]. These findings underscore the remarkable potential of melatonin as a promising candidate for selectively targeting cancer cells while sparing normal cells and tissues, aligning with a long-envisioned strategy for managing side effects. Extensive preclinical studies have greatly facilitated the development and optimization of anti-CD19 CAR-T cells, and cell line-derived xenograft models have been vital for evaluating therapy efficacy and safety. Among the numerous available models, mice xenografted with Raji tumor cells have gained widespread recognition and acceptance for testing CD19-targeting CAR-T cell therapies [ 17 ]. Consequently, this study aimed to use Raji tumor cells and mouse models to investigate the role of melatonin as an immunomodulatory compound in CD19-targeting CAR T-cell therapy, thereby providing valuable insights for improving treatment protocols and ultimately enhancing patient outcomes.
Materials and methods Antibodies and reagents Fluorescence-labeled antibodies against CD3 (SK7), CD4 (RM4-5), CD8 (53-6.7), CD11b (M1/70), CD25 (PC61.5), CD44 (IM7), CD62L (MEL-14), CD45 (30-F11), CD69 (H1.2F3), iNOS (CXNFT), and F4/80 (J43) were purchased from eBioscience. A FITC-conjugated F(ab')2 fragment of goat anti-human IgG1 Fcγ antibody (Jackson ImmunoResearch Laboratories, West Grove, PA) was used to detect anti-CD19 CAR-positive cells. Anti-CD3/anti-CD28 (αCD3/αCD28) monoclonal antibodies were purchased from eBioscience. Melatonin was purchased from Sigma (M5250). Cell culture Burkitt's lymphoma Raji cells were obtained from the American Type Culture Collection (ATCC). To establish single-cell clones expressing luciferase, the wild-type tumor cell line was stably transduced with a lentiviral vector encoding firefly luciferase (FLuc; Lentigen Technology). Following transduction, the luciferase-expressing cells were selectively isolated. Raji-FLuc cells were cultured in RPMI medium (Invitrogen) supplemented with 10% FBS (Sigma-Aldrich) and 2 mM GlutaMAX (Thermo Fisher Scientific). Cell viability assay Cell viability was determined using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay as previously described [ 18 ]. Briefly, Raji cells in the exponential growth phase were seeded into 96-well plates. The cells were then exposed to melatonin at concentrations ranging from 0 to 10 mM for 24 h, as well as for different durations (0–24 h) at a fixed concentration of 10 mM. The absorbance of the formazan solution was measured at 570 nm after each treatment period using a microplate reader (Multiskan FC, Thermo Scientific). T cell isolation and activation Mouse T cells were isolated from the spleen and lymph nodes using CD90.2 MicroBeads (Miltenyi Biotec). Enriched T cells were subjected to flow cytometric cell sorting for the CD3+CD44loCD62Lhi surface phenotype to isolate purified naïve T cells. These purified naïve T cells were cultured in RPMI-1640 medium supplemented with 10% FBS and 2 mM GlutaMAX and were activated for T cell activation analysis using plate-bound anti-CD3 (1 μg/ml) and anti-CD28 (1 μg/ml) antibodies. In the in vitro experiments, melatonin (1 mM) was added simultaneously with T cell receptor (TCR) stimulation. Buffy coats from anonymous healthy donors were obtained from the Blood Center of Jiangsu Province. All necessary ethical and safety protocols were followed during the handling of buffy coats. Human CD4-positive and CD8-positive T cells were isolated by positive selection using a 1:1 mixture of CD4 and CD8 microbeads (Miltenyi Biotec) following the manufacturer's protocol. Isolated T cells were plated onto 24-well tissue culture plates (Corning) precoated with 1 μg/mL anti-CD3 antibody and 1 μg/mL anti-CD28 antibody in RPMI medium supplemented with 10% FBS, 20 IU IL-2 (130-097-745, Miltenyi), 10 ng/mL IL-7 (130-095-363, Miltenyi), and 2 mM GlutaMax. Mice The mice were treated in accordance with all relevant animal use guidelines and ethical regulations under a protocol approved by the Animal Care Committee of Nanjing First Hospital, Nanjing Medical University. In the CRS model, 6–8-week-old female CB17.Cg-PrkdcscidLystbg-J/CrlBltw (SCID-beige) mice were intraperitoneally injected with 3 × 10⁶ Raji-FLuc cells, and tumors were allowed to grow for 20 days.
Melatonin or phosphate-buffered saline (PBS, vehicle) was administered intraperitoneally at 10 mg/kg once per day, beginning 5 h before CAR-T cell transfer. Injections were then given once every 24 h for the next three days, for a total of four injections. To assess the function of CD19-targeting CAR-T cells in vivo, we utilized 6–8-week-old NSG mice. Mice were injected intravenously (i.v.) with 6 × 10⁶ Raji-luciferase cells on day 0. As in the CRS model, melatonin or PBS (vehicle) was administered intraperitoneally at 10 mg/kg once per day, beginning 5 h before CAR-T cell transfer, followed by one injection every 24 h for the next three days, for a total of four injections. The 10 mg/kg melatonin dose was based on previous literature and research findings suggesting that this dosage may have beneficial effects in reducing inflammation [ 19 , 20 ]. Tumor burden was assessed by bioluminescence imaging using the IVIS Spectrum system (PerkinElmer, USA). Before imaging, mice received an intraperitoneal injection of 10 μl/g body weight of 15 mg/ml D-luciferin potassium salt (Beyotime, Shanghai, China) dissolved in PBS (equivalent to 150 mg/kg). Virus production and transduction of T cells The virus was generated through Lipofectamine-mediated transient transfection of 293T cells with plasmids containing Moloney murine leukemia virus gag-pol, the RD114 envelope, and the transfer vector. For transduction of human T cells, isolated T cells were plated onto 24-well tissue culture plates precoated with 1 μg/mL anti-CD3 antibody and 1 μg/mL anti-CD28 antibody in RPMI medium supplemented with 10% FBS, 20 IU IL-2, 10 ng/mL IL-7, and 2 mM GlutaMax on day 0. After 2 days, T cells were transduced with retroviral supernatants through centrifugation on Retronectin-coated plates, generating CD19-CAR T cells. From day 2 after transduction onwards, transduced T cells were cultured at a concentration of 0.5 × 10⁶ cells/ml in enriched T cell medium. T cells were typically counted every 2 days. T cell expansion was calculated by dividing the absolute number of expanded T cells at each time point during culture by the respective number on day 0 (T cell transduction). T cell viability was assessed by trypan blue staining. Transduction efficiency was confirmed three days later via flow cytometry (gated on CD3+ T cells). CAR-T cells were administered to mice seven days after the initial T-cell activation. Culture medium was changed and cytokines replenished every 1 to 2 days. Serum collection Blood samples were obtained from the mice using either the tail clip or the retro-orbital bleeding method. The collected blood was allowed to clot for 30 min at 37 °C. To separate serum, the clotted blood samples were centrifuged at 6000 × g for 10 min at 4 °C. For optimal storage, the resulting serum was divided into aliquots in individual tubes to prevent multiple freeze–thaw cycles. Finally, the aliquoted serum samples were promptly stored at − 80 °C until further analysis to ensure sample integrity. Cytokine measurements Serum cytokines were measured using ELISA kits for mouse SAA3 (Millipore), IL-6, and IL-1β, and for human IL-2, IL-10, and IFN-γ (Thermo Fisher Scientific) according to the manufacturer's instructions. For the cytokine measurement assays, T cells were prepared by replacing the culture medium with cytokine-free medium 24 h in advance.
This step ensured that the T cells were devoid of any residual cytokines or interference from the culture medium, allowing for accurate cytokine measurements. Isolation of murine peritoneal macrophages Peritoneal macrophages were isolated following a standard procedure [ 21 ]. Briefly, mice were euthanized by CO2 inhalation, and approximately 5 mL of ice-cold PBS was gently injected into the peritoneal cavity using a 25-gauge needle. Abdominal massaging was performed to aid the dispersion of peritoneal cells. The peritoneal fluid was then aspirated utilizing an 18-gauge needle and carefully collected into a sterile tube. The collected peritoneal fluid underwent centrifugation at 300 × g for 5 min at 4 °C to pellet the cells. The resulting cell pellet was resuspended in complete RPMI-1640 medium, which included 10% FBS, 1% penicillin–streptomycin, and 2 mM GlutaMax. Subsequently, the isolated peritoneal macrophages were counted using a hemocytometer, and cell viability was assessed through trypan blue exclusion. The identity of peritoneal macrophages was confirmed by flow cytometry, characterized by high expression of F4/80 and CD11b. Further analysis included gating for iNOS expression. Flow cytometry and intracellular cytokine staining (ICS) Flow cytometric analyses were performed using a DxFLEX flow cytometer (Beckman) as described previously [ 22 ]. For intracellular cytokine staining (ICS), the cells were first stimulated with 50 ng/ml PMA plus 750 ng/ml ionomycin for 6 h in the presence of 10 μg/ml protein transport inhibitor monensin (eBioscience). Subsequently, the cells were fixed and incubated with the appropriate antibodies as indicated. Finally, flow cytometry was employed to analyze the stained cells. Statistical analyses Statistical analyses were performed using GraphPad Prism software. Significant differences between the two groups were analyzed using unpaired two-tailed Student’s t-tests. Kaplan–Meier analyses were performed, and the log-rank Mantel-Cox test was used to determine the statistical differences between the survival curves for the two groups. P values < 0.05 were considered significant, and the levels of significance were indicated as *P < 0.05, **P < 0.01, and ***P < 0.001. All data are presented as the mean ± SEM.
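For readers who want to reproduce the statistical workflow outside GraphPad Prism, the sketch below shows open-source equivalents of the two tests described above: an unpaired two-tailed Student's t-test and a log-rank (Mantel-Cox) comparison of survival curves. The numbers are illustrative placeholders, not data from the study, and the scipy/lifelines stack is an assumed substitute for Prism.

```python
# Minimal sketch of the statistical workflow described above, using
# open-source equivalents of the GraphPad Prism analyses.
from scipy.stats import ttest_ind
from lifelines.statistics import logrank_test

# Unpaired two-tailed Student's t-test between two treatment groups
# (e.g., a serum cytokine in control vs. melatonin-treated mice; values made up).
control = [820, 910, 1005, 870, 955]
melatonin = [430, 510, 465, 550, 480]
t_stat, p_value = ttest_ind(control, melatonin)  # two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Log-rank (Mantel-Cox) comparison of Kaplan-Meier survival curves.
# durations: hours until death or end of observation; events: 1 = died, 0 = censored.
durations_ctrl, events_ctrl = [48, 52, 55, 60, 58], [1, 1, 1, 1, 1]
durations_mel, events_mel = [72, 80, 85, 96, 96], [1, 1, 1, 1, 0]
result = logrank_test(durations_ctrl, durations_mel,
                      event_observed_A=events_ctrl, event_observed_B=events_mel)
print(f"log-rank p = {result.p_value:.4f}")
```

P values below 0.05 would then be flagged at the *, **, or *** levels exactly as defined in the statistical analyses above.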
Results Concentration-dependent effects of melatonin on the Raji cell line and primary T cells The effects of melatonin on the Raji cell line and primary T cells were investigated by exposing the cells to various concentrations of melatonin and assessing their responses. The concentration-dependent effects of melatonin on cell viability and the induction of cell death were examined. Remarkably, no significant changes in Raji cell or primary T cell viability were observed after 24 h of treatment with melatonin concentrations ranging from 0.01 nM to 1 mM (Fig. 1 A and C). However, a high melatonin concentration (10 mM) elicited a distinct response, leading to cell death (Fig. 1 A and C). Furthermore, the extent of cell death was influenced by the duration of melatonin exposure: prolonged exposure resulted in a more pronounced reduction in cell viability (Fig. 1 B and D). These findings suggest that neither Raji cells nor primary T cells respond to low melatonin concentrations, whereas high melatonin concentrations induce cell death. Based on these findings, we selected a maximum concentration of 1 mM melatonin for subsequent experiments, as it did not significantly affect the viability of Raji cells or T cells. Effects of melatonin on T-cell activation in vitro T-cell activation plays a crucial role in anti-tumor immunity because activated T cells are key effectors in tumor cell recognition and elimination [ 23 ]. To investigate the effects of melatonin on T cell activation, we treated T cells with 1 mM melatonin and examined its impact on T-cell activation markers and cytokine production in vitro. Flow cytometry analysis of CD25 and CD69 expression revealed no significant differences in the percentages of CD25+ and CD69+ T cells between the melatonin-treated and control groups (Fig. 2 A), suggesting that melatonin has no noticeable effect on the expression of T-cell activation markers. Furthermore, we assessed T cell proliferation and measured the production of cytokines associated with T cell activation, including interleukin-2 (IL-2) and interferon-gamma (IFN-γ). Proliferation and ELISA assays revealed no significant differences in proliferation or in IL-2 and IFN-γ secretion between the melatonin-treated and control groups (Fig. 2 B and C). These findings indicate that melatonin treatment did not cause significant changes in the proliferation and cytokine production associated with T cell activation. Overall, our results suggest that melatonin has no significant effect on T cell activation in vitro, as evidenced by the absence of changes in activation marker expression, proliferation, and cytokine production. Equivalent tumor cell cytotoxicity and CAR-T cell expansion with melatonin supplementation in vitro To investigate the potential impact of melatonin on tumor cell cytotoxicity and the expansion of chimeric antigen receptor (CAR) T cells, we used a CD19-CAR construct incorporating 4-1BB co-stimulation (CD19scFv-4-1BB-CD3ζ, as depicted in Fig. 3 A). Clinical trials have demonstrated that this CAR design achieves high rates of complete remission [ 24 ]. The CD19scFv-4-1BB-CD3ζ CAR construct was used in subsequent experiments and is hereafter referred to as CD19 CAR. Through recombinant Fc/CD19 staining, we determined that more than 60% of the T cells expressed CD19 CAR on their surface (Fig. 3 B). Our initial aim was to investigate the expansion potential of CAR-T cells in the presence of melatonin.
Interestingly, the addition of melatonin to the culture medium did not alter CAR-T cell expansion (Fig. 3 C). Next, we evaluated the cytotoxicity of CAR-T cells in the presence of melatonin. We co-cultured anti-CD19 CAR-T cells with malignant CD19+ B cells (Raji cells) at varying effector:target (E:T) ratios for 4 h (Fig. 3 D). Our findings revealed that anti-CD19 CAR-T cells exhibited specific cytotoxicity against Raji cells, indicating that they effectively target CD19-expressing tumor cells. Notably, the presence of melatonin in the culture medium had no significant effect on CAR-T cell cytotoxicity. The percentage of tumor cell death induced by melatonin-supplemented CAR-T cells was comparable to that induced by CAR-T cells without melatonin supplementation (p > 0.05), indicating that the two experimental groups had comparable levels of tumor cell cytotoxicity. Finally, after a 16-h incubation period, we examined cytokine production by CD19 CAR-T cells in the supernatant of effector-target cell co-cultures using ELISA. We compared the cytokine levels in the supernatants of melatonin-supplemented co-cultures containing CD19 CAR or control T cells. Interestingly, the addition of melatonin did not alter IFNγ, IL-2, or IL-10 production by the treated cells (Fig. 3 E–G). Taken together, these findings suggest that adding melatonin to the culture medium does not compromise the expansion ability or cytotoxicity of CAR-T cells targeting CD19-expressing tumor cells. Melatonin attenuates CAR T cell-induced CRS and enhances overall survival in a mouse model Next, we aimed to mimic the clinical setting by modeling CAR T cell-induced cytokine release syndrome (CRS) in mice, in which CD19 CAR-T cells encounter a high tumor burden and initiate CRS within a few days. We used SCID-beige mice to model CRS, as this model replicates the key aspects of CAR T cell-induced CRS observed in the clinical setting [ 25 ] (Fig. 4 A). Remarkably, the combination of CD19 CAR-T cells and melatonin resulted in significantly longer overall survival (OS) than CD19 CAR-T cells alone (median OS 80.9 h vs. 54.7 h, p < 0.05; Fig. 4 B). Furthermore, the melatonin-treated group lost less weight (Fig. 4 C). Of particular significance, the evaluation of serum cytokines on day four revealed a significant reduction in cytokine levels in the melatonin-treated mice (Fig. 4 D). Notably, CAR T cell-released cytokines such as IL-6 and IL-1β were significantly lower than in the control group (Fig. 4 E and F). IL-6 and IL-1β from macrophages are key drivers of both CRS and CAR T cell-related encephalopathy syndrome (CRES), cause CAR-T cell therapy failure and death, and limit the broad applicability of this treatment [ 25 ]. To further elucidate the role of melatonin in modulating the macrophage contribution to CRS, we investigated the activity of inducible nitric oxide synthase (iNOS), an enzyme predominantly expressed by activated macrophages that is induced by IL-6 and IL-1β [ 26 ]. Consistent with previous reports, we found that peritoneal macrophages significantly increased iNOS production during CRS [ 25 ]. Notably, melatonin-treated macrophages produced significantly less iNOS during CRS (Fig. 4 G). These findings support the idea that the observed decrease in CRS in vivo can be attributed, at least partially, to the direct modulation of macrophage cytokine production by melatonin.
Equivalent antitumor efficacy and persistence of CD19 chimeric antigen receptor T cells with melatonin supplementation in vivo Finally, we investigated the effects of melatonin supplementation on the antitumor efficacy and persistence of CD19 CAR-T cells in vivo. We used a mouse model of CD19-positive tumor xenografts and compared outcomes between groups that received CAR-T cells with or without melatonin supplementation (Fig. 5 A). Remarkably, both groups showed comparable antitumor efficacy and CAR-T cell persistence (Fig. 5 B), as well as comparable tumor growth inhibition and overall tumor burden (Fig. 5 C). Most importantly, the survival rate of the mice receiving melatonin was similar to that of the group treated with CAR-T therapy alone (Fig. 5 D). These results highlight the potential of melatonin as an adjuvant therapy to improve the therapeutic outcomes of CD19 CAR T cell-based treatments without negatively affecting the antitumor efficacy or persistence of CD19 CAR-T cells.
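The cytotoxicity comparisons above rest on a percent-specific-lysis calculation at each E:T ratio. The paper does not spell out its readout formula, so the following is a hedged sketch assuming a luciferase-based assay in which residual luminescence (RLU) from Raji-FLuc targets reports surviving tumor cells after the 4 h co-culture; the function name and RLU values are hypothetical.

```python
# Hypothetical sketch of the percent-specific-lysis calculation behind Fig. 3D,
# assuming residual luminescence (RLU) from luciferase-labeled Raji targets.
def percent_lysis(rlu_coculture: float, rlu_target_only: float) -> float:
    """Specific lysis (%) = (1 - RLU_co-culture / RLU_target-only) * 100."""
    return (1.0 - rlu_coculture / rlu_target_only) * 100.0

# Example across effector:target (E:T) ratios (illustrative numbers only).
rlu_target_only = 50_000
for et_ratio, rlu in [("1:1", 32_000), ("5:1", 14_500), ("10:1", 6_000)]:
    print(f"E:T {et_ratio}: {percent_lysis(rlu, rlu_target_only):.1f}% lysis")
```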
Discussion Despite rapid advancements in CAR-T cell therapy, CRS management remains a significant challenge [ 1 ]. In this study, we investigated the potential of melatonin as an adjunctive therapy for managing CRS associated with CD19-targeting CAR T-cell therapy. Our findings indicate that melatonin, with its well-established immunomodulatory properties and favorable safety profile, appears to be promising in this context. Importantly, melatonin supplementation did not compromise CD19 CAR-T cell antitumor efficacy. Tumor growth inhibition and overall tumor burden reduction were comparable between the melatonin-supplemented and control groups, indicating that melatonin does not interfere with the ability of CD19 CAR-T cells to effectively target and eliminate CD19-positive tumor cells. This finding suggests that melatonin can be safely administered as an adjuvant therapy without compromising the therapeutic potential of CAR-T cells. In the SCID-beige mouse model of CRS, the combination of CD19 CAR-T cells and melatonin resulted in prolonged survival, reduced systemic toxicity, and lower levels of pro-inflammatory cytokines such as IL-6 and IL-1β. These findings highlight melatonin’s immunomodulatory effects in mitigating CRS and its potential as a therapeutic strategy. Furthermore, melatonin directly modulated macrophage activity, including the suppression of iNOS production. This provides insight into the mechanisms by which melatonin helps to reduce CRS by regulating macrophage function (Fig. 6 ). There were some limitations in this study. Firstly, the specific focus of our investigation on CD19 CAR-T cells and CD19-positive tumor models may limit the generalizability of our results. The effects of melatonin observed in this study may not directly apply to CAR-T cells targeting different antigens or in the context of other tumor models. Therefore, it is crucial for future research to expand the scope and assess the effects of melatonin supplementation in diverse CAR-T cell therapies and tumor models. Furthermore, while our study suggests that melatonin has the potential to modulate macrophage function and attenuate the release of pro-inflammatory cytokines, the underlying mechanisms of these effects remain incompletely understood. Elucidating the precise signaling pathways and molecular mechanisms involved is essential for optimizing the therapeutic use of melatonin in CRS in the future. Additionally, as we explore the potential of melatonin, it is important to recognize that various interventions, including cytokine blockade, corticosteroids, and other immunomodulators, have been employed in clinical practice. Therefore, future studies should aim to directly compare melatonin with other treatments to determine its relative efficacy, safety, and appropriateness in different clinical scenarios, including its potential as a preventive option. Lastly, an important consideration in our study is the absence of a concurrent model of CRS and disease response in CAR-T cell therapy. Future research should aim to bridge this gap by developing preclinical models that replicate CRS and disease response simultaneously. Such models would allow for a more comprehensive assessment of the impact of immunomodulators like melatonin on both the safety and therapeutic efficacy of CAR-T cell therapies. In conclusion, our study demonstrates that melatonin supplementation does not compromise the antitumor efficacy or persistence of CD19 CAR-T cells in a mouse model of CD19-positive tumors. 
Moreover, melatonin supplementation can reduce CRS-associated toxicity and prolong overall survival. While our study provides important foundational insights, the applicability of our findings to human CAR-T cell therapy and CRS management requires further investigation. Clinical trials specifically designed to evaluate melatonin's efficacy and safety in human subjects, considering appropriate dosing and administration schedules, are essential for establishing its clinical relevance.
Background Chimeric antigen receptor (CAR) T-cell therapies have ushered in a new era of treatment for specific blood cancers, offering unparalleled efficacy in cases of treatment resistance or relapse. However, the emergence of cytokine release syndrome (CRS) as a side effect poses a challenge to the widespread application of CAR-T cell therapies. Melatonin, a natural hormone produced by the pineal gland and known for its antioxidant and anti-inflammatory properties, has been explored for its potential immunomodulatory effects. Despite this, its specific role in mitigating CAR-T cell-induced CRS remains poorly understood. Methods In this study, our aim was to investigate the potential of melatonin as an immunomodulatory agent in the context of CD19-targeting CAR-T cell therapy and its impact on associated side effects. Using a mouse model, we evaluated the effects of melatonin on CAR-T cell-induced CRS and overall survival. Additionally, we assessed whether melatonin administration had any detrimental effects on the antitumor efficacy and persistence of CD19 CAR-T cells. Results Our findings demonstrate that melatonin effectively mitigated the severity of CAR-T cell-induced CRS in the mouse model, leading to improved overall survival. Remarkably, melatonin administration did not compromise the antitumor effectiveness or persistence of CD19 CAR-T cells, indicating its compatibility with therapeutic goals. These results suggest melatonin's potential as an immunomodulatory compound to alleviate CRS without compromising the therapeutic benefits of CAR-T cell therapy. Conclusion The study's outcomes shed light on melatonin's promise as a valuable addition to existing treatment protocols for CAR-T cell therapies. By attenuating CAR-T cell-induced CRS while preserving the therapeutic impact of CAR-T cells, melatonin offers a potential strategy for optimizing and refining the safety and efficacy profile of CAR-T cell therapy. This research contributes to the evolving understanding of how to harness immunomodulatory agents to enhance the clinical application of innovative cancer treatments.
Acknowledgements Not applicable. Author contributions NZ and YL designed and performed the research, prepared the figures, and wrote part of the manuscript; ZB, JL, HW, and DDS contributed to the experiments; HLL and JHS provided essential reagents and scientific advice and contributed to the experiments; SZ supervised the work and wrote the manuscript. Funding This study was supported by grants from the National Natural Science Foundation of China [Grant Nos. 82173378, 82273463, and 82173205] and the Natural Science Foundation of Hebei Province [Grant No. H2022201065]. Availability of data and materials The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate The study adhered to the principles outlined in the Declaration of Helsinki and was approved by the institutional review board of Nanjing First Hospital, Nanjing Medical University. All mouse experiments were approved by the Animal Care Committee of Nanjing First Hospital, Nanjing Medical University, and were performed in accordance with relevant institutional and national guidelines and regulations. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-16 23:45:33
J Transl Med. 2024 Jan 14; 22:58
oa_package/57/17/PMC10789006.tar.gz
PMC10789007
0
Introduction Electronic cigarettes (e-Cig) have gained popularity over combustible cigarettes, especially among young adults worldwide [ 1 ], and most users are unaware that e-Cig vape contains a higher nicotine concentration than combustible cigarettes do [ 2 , 3 ]. Compared with combustible cigarette smokers and non-smokers, e-Cig users develop chronic inflammatory lung diseases earlier [ 4 ]. Moreover, e-Cig vape induces the release of proinflammatory cytokines, including tumor necrosis factor alpha (TNF-α) and interleukin (IL)-1β, which promote chronic inflammation, pathologic changes in the lung parenchyma, and mitochondrial reactive oxygen species (ROS) buildup in lung epithelial cells [ 5 , 6 ]. Although alcohol consumption is not directly linked to the onset of lung diseases, chronic alcohol exposure weakens lung responses to infections, particularly in the upper respiratory tract, resulting in a poor immune response to pre-existing lung diseases or acquired infections [ 7 – 9 ]. Alcoholism is a major factor in the spread of community-acquired pneumonia and other acute respiratory complications, leading to thousands of deaths in the United States [ 10 ]. Excessive alcohol consumption alters mitochondrial structure [ 11 ] and reduces mitochondrial antioxidant glutathione levels, making mitochondria more susceptible to oxidative damage and ROS buildup, thus limiting ATP synthesis [ 12 , 13 ]. Mitochondrial ROS accumulation instigates inflammation and DNA damage in lung epithelial cells [ 14 ]. Chronic alcohol exposure also stimulates the purinergic P2X7 receptor (P2X7r), which activates the NLRP3-mediated inflammasome in macrophages and releases extracellular ATP (eATP) as a secondary messenger [ 15 ]. Lately, the association between inflammation and P2X7r has received much attention, with eATP release being a key source of inflammation [ 16 , 17 ]. Furthermore, P2X7r activation promotes a sustained increase in intracellular calcium (Ca²⁺) levels, which increases endoplasmic reticulum (ER) stress, eventually leading to inflammation [ 18 , 19 ]. Our prior studies demonstrated the influence of addictive drugs [ethanol (ETH) and e-Cig vape] on P2X7r activation, which facilitated intracellular Ca²⁺ accumulation and eATP release in brain microvascular endothelial cells (BMVECs). Pretreatment with the P2X7r antagonist A804598 (A80) restored homeostasis and prevented blood–brain barrier (BBB) compromise [ 20 ]. P2X7r activation by eATP also stimulates the generation of heterogeneous extracellular vesicles (EVs) that carry biomolecular cargoes capable of mediating communication between similar (endocrine signaling) or different (paracrine signaling) cell types [ 21 , 22 ]. EVs are bilayered membrane vesicles formed in the endosomal system and discharged into the extracellular space [ 23 ]. Although the cargo-carrying capacity of EVs is well established, the composition of the cargo does not always reflect the contents of the parental cells. Cells incorporate cargo into EVs in a carefully controlled manner, reflecting the pathophysiological state of the cell [ 24 ]. During alcohol abuse, hepatocyte EVs transport fragmented mitochondrial DNA (mtDNA), which acts as a damage-associated molecular pattern (DAMP), triggering inflammatory responses [ 25 ]. Similarly, EVs from patients with fatty liver conditions carry more mtDNA and stimulate the innate immune response via the TLR-9 pathway [ 26 ].
In the past, diverse messengers, including proteins, lipids, nucleic acids, and cell organelles, have been well identified and characterized as EV cargo [ 27 , 28 ], while data on eATP content as a secondary messenger are sparse. In our earlier study, we showed the effects of ETH and e-Cig on P2X7r regulation in BMVECs [ 20 ]. In this report, we assessed the inflammatory effects of ETH and e-Cig vape on primary human pulmonary alveolar epithelial cells (hPAEpiC) and the downstream signaling mediated by their EVs. We specifically analyzed the effectiveness of P2X7r blockade by A80 against responses to e-Cig and ETH exposure, focusing on mito-stress regulation in hPAEpiC. We also examined its effects on intracellular Ca²⁺ accumulation and on the size and quantity of EVs released by hPAEpiC. Finally, we examined the presence of P2X7r in the EVs, measured the eATP- and mtDNA-carrying capacity of hPAEpiC-EVs, and tested their potential to mediate long-distance communication between hPAEpiC and BMVECs.
Materials and methods Reagents/Kits ETH (200 proof), 99.5% pure aldehyde (ALD), and e-Cig liquid (1.8% and 0% nicotine) were procured from Decon Laboratories Inc. (Cat. No. 2716, King of Prussia, PA, USA), ACROS Organics (Cat. No. 402788, Geel, Belgium), and Pure E-Liquids (Peterborough, PE11SB, UK), respectively. A80 was purchased from Tocris Bioscience (Cat. No. 4473, Bristol, UK) and dissolved in DMSO from Sigma-Aldrich (Cat. No. D5879, St. Louis, MO, USA). We purchased the luminescent ATP detection and colorimetric Ca²⁺ assay kits from Abcam (Cat. Nos. ab113849 and ab102505, respectively, Cambridge, UK). For EV isolation and capture, we used ExoQuick-TC from SBI (Cat. No. EXOTC50A, Palo Alto, USA) and the Tetraspanin Exo-Flow capture kit (Cat. No. EXOFLOW150A-1, Palo Alto, USA), respectively. Anti-P2X7r antibodies (Alomone Labs, Jerusalem, Israel) were labeled using the Zip Alexa Fluor™ 647 rapid antibody labeling kit from Invitrogen (Cat. No. Z11235, Waltham, USA). The primary anti-rabbit antibodies for p-IRE1 (Cat. No. 3294T), p-ASK1 (Cat. No. 3765S), CD9 (Cat. No. 13174S), CD81 (Cat. No. 56039S), and beta-actin (Cat. No. 4967S) were procured from Cell Signaling (Danvers, MA, USA). Anti-rabbit Bax inhibitor-1 [BI-1] polyclonal antibodies (Cat. No. ab18852) were procured from Abcam (Cambridge, UK). The human purinergic receptor P2X, ligand-gated ion channel 7 (P2RX7) ELISA kit (Cat. No. RDR-P2RX7-Hu) was obtained from Reddot Biotech (Houston, USA). Lyophilized recombinant human P2X7r protein (Cat. No. LS-G25681-10) was procured from LS Bio (Lynnwood, USA). The CellTiter 96® AQueous One Solution Cell Proliferation Assay kit (Cat. No. G3580) was procured from Promega (Madison, USA). Cell cultures and treatments hPAEpiC, acquired from Accegen (Cat. No. ABC-TC3770, Fairfield, USA), were cultured in hPAEpiC basal media (Cat. No. ABM-TM3770) supplemented with insulin-transferrin-selenium, epidermal growth factor, hydrocortisone, and 5% fetal bovine serum. hBMVECs, provided by Dr. Pierre-Olivier Couraud, Institut Cochin, INSERM U1016, CNRS UMR 8104, Université Paris Descartes (Paris, France) [ 29 ], were cultured using EBM-2 basal medium (Cat. No. CC-3156) supplemented with EGM-2 SingleQuots (Cat. No. CC-4176) from Lonza Biosciences (St. Bend, USA). Depending on the experiment, hPAEpiC and hBMVECs were cultured in 96-well plates, 6-well plates, 100-mm tissue culture plates, and T-75 and T-150 flasks coated with bovine collagen from Cell Applications, Inc. (Cat. No. 123-100, San Diego, USA). Confluent hPAEpiC were pretreated with or without A80 (10 μM) for 1 h, followed by overnight stimulation with the insults, i.e., ETH (100 mM)-, ALD (100 μM)-, or e-Cig (1.75 μg/mL of 1.8% or 0% nicotine)-conditioned media, with or without A80. A80-treated cells served as a P2X7r antagonist control. Unless otherwise mentioned, all experiments were carried out after overnight incubation. The 100 mM ETH concentration was selected based on a dose-dependent MTT assay (Supplementary Figure S 1 ). Mito-stress analysis hPAEpiC (20,000 cells/well) were plated in a Seahorse XF96 microplate from Agilent Technologies (Cat. No. 103794-100, Santa Clara, USA) and allowed to attach overnight. Four corner wells were left empty for background correction. The next day, the cells were treated with A80 (10 μM) for 1 h, after which the old media was replaced with growth media conditioned with ETH, ALD, or e-Cig (1.8% or 0% nicotine) overnight, with or without A80.
The next morning, mitochondrial stress was measured using the Agilent Seahorse Cell Mito Stress Test Kit (Cat. No. 103015-100, Santa Clara, USA). In brief, the growth media was carefully aspirated, and cells were washed twice in bicarbonate-free and phenol red-free DMEM from Agilent (Cat. No. 103680-100, Santa Clara, USA), supplemented with 5.5 mM glucose, 1 mM pyruvate, and 2 mM glutamine. Lastly, 180 μL DMEM was added to all 96 wells, including the four corner wells, and the cells were kept in a non-CO2 incubator at 37 °C. After calibration of the Seahorse cartridge, the microplate was placed in the Seahorse analyzer, and basal oxygen consumption rates (OCR) were measured. The cells were then serially challenged with respiratory inhibitors: 2.5 μM oligomycin (ATP synthase inhibitor), 0.5 μM FCCP (mitochondrial uncoupler), and 0.5 μM rotenone/antimycin A (complex I/III inhibitors), and mitochondrial respiration levels were continuously recorded. After the assay, the spare respiratory capacity (SRC) was calculated by subtracting the basal OCR from the maximal OCR. Quantitative RT-PCR Total RNA from hPAEpiC was isolated using TRIzol™ reagent from Invitrogen (Cat. No. 15596026, Carlsbad, USA). Total RNA (400 ng) was converted into complementary DNA (cDNA) using the RT² PreAMP cDNA Synthesis Kit (Cat. No. 330451, Qiagen, Germantown, MD, USA). TaqMan probes for human P2X7r (Cat. No. hs00175721), transient receptor potential vanilloid 1 (TRPV1; Cat. No. hs00218912), and GAPDH (Cat. No. hs02786624) were procured from Thermo Fisher (Waltham, USA). The cDNA was probed by real-time qPCR using TaqMan Fast Advanced Master Mix (Cat. No. 4444557, Applied Biosystems, Waltham, USA). All reactions were performed in triplicate, and the relative fold changes in P2X7r and TRPV1 gene expression across treatments were calculated using the delta-delta Ct (ΔΔCt) method, normalized to GAPDH. Intracellular Ca²⁺ analysis Confluent hPAEpiC, pre-treated with or without A80, were stimulated overnight with ETH-, ALD-, or e-Cig (1.8% or 0% nicotine)-conditioned media, and intracellular Ca²⁺ levels were measured using the Abcam Ca²⁺ assay kit (Cat. No. ab102505, Cambridge, UK) following the manufacturer's instructions. Western blot analysis Denatured proteins from hPAEpiC were separated on SDS-polyacrylamide gels (4–20% Mini-PROTEAN® TGX™ Precast Gels) and electroblotted onto nitrocellulose membranes. The membranes were blocked (1 h) with Intercept blocking buffer from LI-COR (Cat. No. 927-60001, Lincoln, USA), incubated overnight with primary antibodies (1:1000 dilution), probed with near-infrared secondary antibodies (LI-COR) (1:5000 dilution), and visualized using an Odyssey imaging system (LI-COR, Lincoln, USA). Band intensities were quantified, and protein expression levels were analyzed relative to beta-actin. EV isolation After overnight stimulation of hPAEpiC with the insults in T-150 flasks, 25 mL of culture supernatant was collected in 50 mL Falcon™ tubes from Fisher Scientific (Cat. No. 14-959-49 A, Hampton, USA) and centrifuged at 2,000 × g for 10 min to remove cell debris. The supernatant was transferred into a fresh 50 mL tube and further centrifuged at 10,000 × g for 20 min to remove apoptotic bodies and other large particles. The supernatant was then concentrated to 5–7.5 mL using Amicon® Ultra-15 centrifugal filters from MilliporeSigma (Cat. No. UFC901024, Burlington, USA).
An appropriate volume of ExoQuick-TC™ was added to the media (1 mL per 5 mL media), and the contents were mixed thoroughly by inverting the tubes, followed by overnight refrigeration at 4 °C. The next morning, the contents were centrifuged at 2,000 × g for 20 min at 4 °C, and the EV pellet was resuspended in sterile phosphate-buffered saline (PBS) without calcium and magnesium from MilliporeSigma (Cat. No. D8537, Burlington, USA). Nanoparticle tracking analysis of EVs The number and particle size distribution of hPAEpiC-EVs were analyzed by nanoparticle tracking analysis (NTA) using the NanoSight NS300 system (Malvern Technologies, Malvern, UK) equipped with a 488 nm laser. EV samples were diluted (1:500) in 1 mL particle-free MilliQ water (MilliporeSigma, Burlington, USA) and injected into the NanoSight chamber using a 1 mL BD slip-tip syringe (Cat. No. 309659, Franklin Lakes, USA). Sample analysis was carried out under constant particle flow into the NanoSight chamber, and five 30-second videos were recorded for each sample. These videos record and track the paths of unlabeled particles/EVs acting as point scatterers undergoing Brownian motion in the chamber under the laser beam [ 30 ]. Data collected in this fashion were later analyzed with NTA 3.3.104 software. Before running the samples, 100 nm latex beads from Malvern (Cat. No. NTA4088) were used to calibrate the system. eATP detection in isolated EVs hPAEpiC-EVs were resuspended in 150 μL PBS (Ca²⁺ and Mg²⁺ free) and lysed by ultrasonication at 4 °C. The EV suspension was centrifuged at 10,000 × g for 10 min, and 50 μL samples were loaded in duplicate into a Corning® black clear-bottom 96-well plate (Cat. No. 3603, Corning, USA). The Abcam luminescent ATP detection assay kit (Cat. No. ab113849, Cambridge, UK) was used to measure the eATP cargo in EVs following the manufacturer's instructions. DNA isolation from EVs The EV suspension (100 μL) was treated with 10 U of DNase from LGC Biosearch Technologies (Cat. No. DB0715K, Hoddesdon, UK) for 20 min at 37 °C to eliminate DNA attached to the EV surface. The DNase action was stopped by adding 10 μL of 10X DNase stop solution at 65 °C for 10 min. The EVs in suspension were further diluted by adding 100 μL nuclease-free water (NFW) and lysed by adding 20 μL proteinase K from Thermo Fisher (Cat. No. 4485229, Waltham, USA) at room temperature. After this step, the DNeasy® Blood & Tissue kit from Qiagen (Cat. No. 69506, Hilden, DE) was used to isolate DNA from the EVs. We followed the manufacturer's instructions for DNA isolation, except for centrifugation at 20,000 × g for 1 min after the addition of AW2 buffer [ 31 ]. Similarly, spin columns were preincubated in AE buffer (30 μL) for 5 min before DNA elution at room temperature. Remaining DNA in the spin columns was eluted by introducing an additional spin step. EV-DNA was quantified and stored at −20 °C prior to the digital PCR (dPCR) assay. mtDNA quantification by dPCR A working concentration (1 ng/μL) of EV-DNA samples was prepared in NFW. Mitochondrial gene-specific TaqMan™ probes for ATP8 [mt-ATP8] (Cat. No. 4331182 Hs02596863_g1), NADH dehydrogenase 2 [mt-ND2] (Cat. No. 4331182 Hs02596874_g1), and ferritin heavy chain 1 [mt-FTH1] (Cat. No. 4331182 Hs02596865_g1) from Thermo Fisher Scientific (Waltham, USA) were used in the dPCR experiments. For each 10 μL dPCR reaction, we used 2 μL of 5X Absolute Q™ DNA Digital PCR Master Mix (Cat. No. A52490), 2 μL EV-DNA template (2 ng), 0.5 μL FAM-TaqMan™ probe, and 5.5 μL NFW.
Nine microliters of the above reaction mixture were loaded onto a QuantStudio™ MAP16 Digital PCR plate (Cat. No. 10246917). Lastly, 15 μL of QuantStudio™ isolation buffer (Cat. No. A52730) was added on top of each sample, and the wells were sealed with the gaskets supplied with the dPCR plates. The QuantStudio™ Absolute Q Digital PCR System from Thermo Fisher was used for DNA amplification, and QuantStudio dPCR software was used to count the number of microchambers with successful mtDNA amplification. The thermal profile of the mtDNA dPCR was as follows: 10 min at 96 °C, followed by 40 cycles of 5 s at 96 °C and 15 s at 60 °C. P2X7r ELISA Cell culture supernatants collected from insult-stimulated hPAEpiC, with and without A80 treatment, were used to detect circulating P2X7r levels using the human purinergic P2RX7 ELISA kit (Cat. No. RDR-P2RX7-Hu, Houston, USA). In brief, 200 μL of medium was added to the appropriate wells, covered with a plate sealer, and incubated at 37 °C for 90 min. The culture media was removed from the wells and replaced with 100 μL of detection solution 'A', followed by a 45 min incubation at 37 °C. The wells were washed thrice with 300 μL of 1X wash buffer. One hundred microliters of detection solution 'B' was then added to the wells and incubated at 37 °C for 45 min. The washing step was repeated as before, and 90 μL of substrate solution was added to the wells. The ELISA plate was incubated at 37 °C in the dark until a blue color developed in the wells (15–30 min). The enzymatic reaction was stopped by adding 50 μL of stop solution. Absorbance was measured at 450 nm using a microplate reader (SpectraMax® M5). Flow cytometry analysis of EVs Magnetic streptavidin beads were conjugated with tetraspanin-coupled, biotinylated anti-CD9 or anti-CD63 antibodies provided in the Tetraspanin Exo-Flow Combo Capture Kit (System Biosciences, Palo Alto, USA). These magnetic beads were incubated with the EV suspension overnight on a rotating mixer at 4 °C, during which EVs were captured onto the conjugated magnetic beads. The next morning, the magnetic beads were washed thrice in 1X wash buffer to remove any unbound EV particles. The bead-captured EVs were resuspended in 500 μL wash buffer and incubated with 5 μg of anti-P2X7r antibody conjugated with Alexa Fluor™ 647 overnight on a rotating mixer at 4 °C. The magnetic beads were then washed thoroughly to eliminate unbound P2X7r antibodies, and the EVs were stained with Exo-FITC dye (System Biosciences). Cytometric acquisition was performed using an Aurora flow cytometer (Cytek®, San Diego, USA), and data were analyzed using FlowJo software v10 (Tree Star Inc., Ashland, USA) to check the distribution of P2X7r on EVs. Intracellular Ca²⁺ analysis in hBMVECs after EV stimulation Intracellular Ca²⁺ levels in hBMVECs were measured after overnight incubation with cell culture supernatant or EVs from hPAEpiC. Confluent hBMVECs cultured in their native growth media were stimulated with freshly collected hPAEpiC supernatant conditioned with ETH, ALD, or e-Cig (1.8% or 0% nicotine), with or without A80 pre-treatment. hBMVECs incubated with fresh hPAEpiC culture media served as the media control. In another experiment, hBMVECs cultured in a 12-well plate (4 × 10⁵ cells/well) were incubated with freshly isolated hPAEpiC-EVs (1:300). After a 5 h incubation with EVs, intracellular Ca²⁺ levels in hBMVECs were measured using the calcium assay kit.
The optimal EV number and incubation time were determined in preliminary experiments (Supplementary Figure S 2 ). In these experiments, hBMVECs were never directly exposed to either the insults or A80.
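Several of the derived quantities in these methods reduce to simple arithmetic. The sketch below collects three of them: the Seahorse spare respiratory capacity (maximal minus basal OCR), the ΔΔCt fold-change calculation used for the qRT-PCR data, and the dilution back-calculation behind the 1:500 NTA measurements. Function names and example values are illustrative assumptions, not from the paper.

```python
# Minimal sketches of three calculations described in the methods above.
# Function names and example values are illustrative, not from the paper.

def spare_respiratory_capacity(basal_ocr: float, maximal_ocr: float) -> float:
    """Seahorse mito-stress readout: SRC = maximal OCR - basal OCR (pmol O2/min)."""
    return maximal_ocr - basal_ocr

def ddct_fold_change(ct_gene_treated, ct_ref_treated, ct_gene_control, ct_ref_control):
    """Relative expression by the delta-delta Ct method, normalized to GAPDH."""
    d_ct_treated = ct_gene_treated - ct_ref_treated
    d_ct_control = ct_gene_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

def nta_concentration(measured_particles_per_ml: float, dilution_factor: int = 500) -> float:
    """Back-calculate the stock EV concentration from the 1:500 NTA dilution."""
    return measured_particles_per_ml * dilution_factor

# Example: the P2X7r Ct drops by ~2 cycles relative to GAPDH after insult
# exposure, i.e., roughly a 4-fold increase in expression.
print(ddct_fold_change(24.0, 18.0, 26.0, 18.0))  # -> 4.0
```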
Results P2X7r inhibition normalized mitochondrial oxidative phosphorylation (OXPHOS) in insult-exposed hPAEpiC Exposure of hPAEpiC to ETH-, ALD-, or e-Cig (1.8% nicotine)-conditioned media increased mito-stress levels in the cells, resulting in reduced SRC levels, as shown in the line graph (Fig. 1 A). Fig. 1 B-D illustrate a 30–42% reduction in SRC after ETH, ALD, and e-Cig (1.8% nicotine) exposure. Importantly, A80-pretreated hPAEpiC were protected from insult-driven mitochondrial stress, with restored SRC. Increased P2X7r and TRPV1 channel expression led to Ca²⁺ accumulation in insult-exposed hPAEpiC After overnight exposure of hPAEpiC to the insults, P2X7r and TRPV1 expression increased by 4–6-fold and 3–4-fold, respectively. A80 pretreatment significantly lowered the overexpression of both channels (Fig. 2 A). e-Cig with 0% nicotine had no significant impact on the expression levels of the P2X7r and TRPV1 channels. Furthermore, we found a 20–30-fold increase in intracellular Ca²⁺ accumulation in insult-exposed hPAEpiC compared with untreated control cells. Interestingly, e-Cig with 0% nicotine also increased intracellular Ca²⁺ levels by 4-fold, but this was a non-acute buildup and did not impact other functional readouts. A80 pretreatment significantly decreased intracellular Ca²⁺ levels in insult-exposed hPAEpiC (Fig. 2 B). Alteration in Ca²⁺ homeostasis upregulated ER stress in insult-exposed hPAEpiC Transient stimulation of P2X7r and TRPV1 is known to facilitate Ca²⁺ influx into cells, stimulating diverse pro/anti-apoptotic pathways in a cell-specific manner [ 32 , 33 ]. Unrestricted Ca²⁺ influx into the cytosol often interferes with ER Ca²⁺ levels, as most ER-localized chaperones depend on Ca²⁺ ions for their function. Disruption of ER Ca²⁺ levels causes protein aggregation, followed by the unfolded protein response (UPR) [ 34 ]. The UPR promotes the phosphorylation of the ER-specific, pro-apoptotic inositol-requiring enzyme 1 alpha (IRE1α) and its downstream regulator, the apoptosis signal-regulating kinase (ASK1) MAP3K, forcing cells to undergo apoptosis [ 20 , 35 ]. Using western blotting, we showed that exposure of hPAEpiC to ETH-, ALD-, or e-Cig (1.8% nicotine)-conditioned media increased the phosphorylation of IRE1α and ASK1 by 2- to 3-fold. Simultaneously, the expression of the anti-apoptotic protein Bax inhibitor-1 (BI-1) was down-regulated by 50%, potentially stimulating apoptosis of hPAEpiC (Fig. 3 ). A80 pre-treatment reversed the expression levels of the ER stress markers in insult-exposed hPAEpiC. Enhanced ER stress after ETH, ALD, or e-Cig (1.8% nicotine) stimulation increased lung epithelial EV number and particle size The UPR stimulated by alcohol and other abusive drugs facilitates ER stress, followed by ER-Ca²⁺ efflux into the mitochondrial matrix, resulting in reduced mitochondrial OXPHOS and ROS accumulation [ 36 , 37 ]. Mitochondrial dysfunction and ROS buildup promote EV release in mouse myeloblast cells [ 38 ]. Similarly, morphine-exposed BMVECs revealed redox imbalance, resulting in unwarranted EV release [ 39 ]. In this context, hPAEpiC exposed to ETH-, ALD-, or e-Cig (1.8% nicotine)-conditioned media increased EV release by 2- to 3-fold, and the average size of the EVs increased by 20–30%. Pretreatment of hPAEpiC with A80 significantly reduced the EV number (Table 1 ). We further confirmed the tetraspanin profile (CD81 and CD9 expression) of the isolated EVs by western blot (Fig. 4 ).
Larger EVs carried more eATP and mtDNA Abusive drugs like cigarette smoke and alcohol are known to influence EV cargo in liver and lung cells [ 40 , 41 ]. Accordingly, we measured eATP levels in isolated EVs after overnight stimulation of hPAEpiC with ETH-, ALD-, or e-Cig-conditioned media. ETH and ALD induced 55-fold and 70-fold increases in eATP levels, respectively, while e-Cig (1.8% nicotine) stimulation resulted in a 110-fold increase. Although e-Cig (0% nicotine) stimulation also increased eATP levels in EVs, the 2-fold change was insufficient to influence downstream events. Simultaneous pretreatment with A80 significantly lowered eATP levels in hPAEpiC-EVs (Fig. 5 A). We used high-throughput dPCR to measure the absolute copy numbers of mtDNA in the EVs using TaqMan™ probes targeting various segments of the mtDNA. Overnight stimulation with ETH-, ALD-, or e-Cig (1.8% nicotine)-conditioned media increased the copies of mt-ATP8, mt-ND2, and mt-FTH1 in the EVs by 2-fold, whereas pretreatment with A80 effectively diminished the mtDNA cargo (except mt-FTH1 levels after ETH stimulation) (Fig. 5 B). e-Cig (0% nicotine) did not produce significant changes in mtDNA levels compared with the untreated control group. We present the dPCR data as fold changes to show statistical significance between experimental replicates. ETH, ALD, or e-Cig (1.8% nicotine) exposure promoted P2X7r shedding via cell supernatant and EVs In hPAEpiC supernatant, ETH and e-Cig (1.8% nicotine) stimulation elevated P2X7r levels by 6-fold, and ALD exposure led to an 8-fold increase. Cell supernatant from A80-pretreated cells had significantly lower P2X7r levels than that from insult-exposed cells. e-Cig (0% nicotine) had no effect on P2X7r shedding (Fig. 6 A). Flow cytometric analysis of EVs was performed to assess the distribution of P2X7r on the EV surface. ETH, ALD, and e-Cig (1.8% nicotine) stimulation increased the median fluorescence intensity (MFI) by 50 to 60%, indicating greater P2X7r cargo on the EVs compared with EVs from unexposed cells. EVs isolated from A80-pretreated, insult-exposed cells showed lower P2X7r MFI than those from insult-exposed cells alone. e-Cig (0% nicotine) stimulation had no effect on MFI (Fig. 6 B). Soluble P2X7r and eATP released from hPAEpiC-EVs induce paracrine signaling in BMVECs Extracellular ATP-gated P2X7r activation stimulates Ca²⁺ influx into the cytosol, activating inflammasome assembly and caspase-1 [ 42 ]. Recently, circulating P2X7r was shown to trigger inflammasome formation in the brains of epilepsy patients [ 43 ]. In our studies, we examined paracrine signaling between hPAEpiC and human BMVECs by culturing BMVECs in hPAEpiC-conditioned media. After overnight stimulation, intracellular Ca²⁺ levels were measured in the BMVECs. Media from ETH-exposed hPAEpiC increased BMVEC Ca²⁺ levels by 2-fold, whereas ALD- or e-Cig (1.8% nicotine)-conditioned media increased BMVEC Ca²⁺ levels by 4-fold. Media from A80-pretreated cells produced considerably lower intracellular Ca²⁺ levels in BMVECs (Fig. 7 A). BMVECs incubated in media from unexposed hPAEpiC served as the media control. To confirm the specific signaling effects of lung EVs, we incubated BMVECs with freshly isolated lung EVs (300 EVs/cell). After a 5 h incubation, intracellular Ca²⁺ levels were measured using a calcium assay.
EVs isolated after ETH, ALD, and e-Cig (1.8% nicotine) stimulation amplified intracellular Ca²⁺ levels by 2–3-fold, whereas EVs from A80-pretreated cells produced lower Ca²⁺ levels (Fig. 7 B). Of note, recombinant P2X7r (used as a control) also increased Ca²⁺ accumulation in BMVECs, establishing the functional role of P2X7r as a Ca²⁺ channel.
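The absolute mtDNA copy numbers reported above come from counting positive dPCR microchambers; the QuantStudio software applies the standard Poisson correction internally. A minimal sketch of that correction, with illustrative chamber counts and an assumed per-chamber volume, is shown below.

```python
# Sketch of the Poisson correction that underlies absolute quantification in
# digital PCR (the QuantStudio software performs this internally; the chamber
# counts and volume below are illustrative, not instrument specifications).
import math

def dpcr_copies_per_ul(positive: int, total: int, chamber_volume_ul: float) -> float:
    """Estimate template copies/uL from the fraction of positive microchambers.

    Because a positive chamber may contain more than one template molecule,
    the mean occupancy lambda is recovered as -ln(1 - p), where p is the
    fraction of positive chambers.
    """
    p = positive / total
    lam = -math.log(1.0 - p)            # mean copies per chamber
    return lam / chamber_volume_ul      # copies per microliter of reaction

# Example: 6,000 of 20,000 chambers positive, 0.0008 uL per chamber.
print(f"{dpcr_copies_per_ul(6000, 20000, 0.0008):.0f} copies/uL")
```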
Discussion In this report, we studied the harmful effects of alcohol, e-Cig vaping, and their byproducts on mitochondrial homeostasis in hPAEpiC, EV shedding, and EV cargo content. In mitochondria, respiratory capacity depends on the efficiency of the electron transport complexes and the mitochondrial membrane potential [ 44 ]. In the liver, chronic alcohol consumption and cigarette smoke accelerate reactive oxygen and nitrogen species (ROS/RNS) accumulation through NADPH oxidase (NOX) and cytochrome P450-2E1 (CYP2E1) enzyme activation [ 45 ]. In an intracerebral hemorrhage mouse model, P2X7r activation has been shown to mediate NOX-dependent ROS production, followed by mitochondrial degradation [ 46 ]. NOX is a catalytic enzyme that transfers electrons (e−) from NADPH to oxygen, generating superoxide radicals (O2•−) [ 47 ]. CYP2E1 is an ethanol-catabolizing enzyme known to generate significant amounts of ROS/RNS [ 48 ]. Upon ROS/RNS buildup, the inner mitochondrial membrane quickly depolarizes and limits OXPHOS levels [ 49 ]. In our Seahorse experiments, exposure of hPAEpiC to ETH-, ALD-, or e-Cig-conditioned media significantly reduced OXPHOS levels, confirming the detrimental effects of abusive drugs on mitochondrial function (Fig. 1 ). Furthermore, P2X7r inhibition restored mitochondrial OXPHOS levels, confirming the role of P2X7r in regulating mitochondrial health in hPAEpiC. P2X receptors are a family of ligand-gated ion channels gated by eATP that exist in seven isoforms, P2X1 to P2X7 [ 50 ]. Unlike other P2X receptors, P2X7r needs excess ATP for its activation (three ATP molecules per P2X7r). Activated P2X7r is known to regulate Ca²⁺ and sodium (Na⁺) influx and potassium (K⁺) efflux in cells [ 51 ], mediate actin and tubulin rearrangement [ 52 ], promote inflammation [ 53 ], and encourage mitochondrial swelling/rupture that releases pro-apoptotic cytochrome C into the cytosol [ 54 ]. While the involvement of P2X7r in various pathophysiological conditions is well reported, its potential role in substance abuse has attracted attention only recently [ 55 , 56 ]. Similarly, TRPV1 is a highly selective Ca²⁺ channel that facilitates cigarette smoke-associated airway inflammation [ 57 ] and opioid-induced hyperalgesia [ 58 ]. In our studies, ETH, ALD, and e-Cig (1.8% nicotine) stimulation increased the gene expression of both P2X7r and TRPV1, enhancing Ca²⁺ influx into the hPAEpiC (Fig. 2 ). Such a sudden increase in intracellular Ca²⁺ levels can promote cytoskeletal remodeling [ 59 ], alter Ca²⁺ levels in the ER lumen, and activate Ca²⁺-dependent ER stress, leading to cell death [ 20 ]. As expected, the P2X7r inhibitor A80 successfully curbed the P2X7r and TRPV1 overexpression and reduced Ca²⁺ influx into the hPAEpiC. The ER stores a large amount of Ca²⁺, with a steep concentration gradient between the ER (up to 1 mM) and the cytosol (approximately 100 nM) [ 60 ]. ER-Ca²⁺ levels are vital for the post-translational modification of transmembrane proteins in the ER lumen [ 34 ]. Cytosolic Ca²⁺ acts as an intracellular messenger that controls diverse cellular functions, and any disruption of cytosolic Ca²⁺ homeostasis can be toxic and cause cell death [ 61 ]. In our experiments, P2X7r and TRPV1 overexpression significantly increased intracellular Ca²⁺ levels in hPAEpiC exposed to ETH, ALD, or e-Cig (1.8% nicotine).
Under these pathological conditions, excess Ca 2+ can be shuttled into and stored in the ER by the energy-consuming sarcoplasmic/endoplasmic reticulum calcium ATPase (SERCA2b) [ 62 ]. SERCA2b overexpression during chronic inflammation promotes dramatic Ca 2+ uptake by the ER, resulting in an increased UPR in the ER [ 63 ]. IRE1α is an evolutionarily conserved ER membrane protein involved in the regulation of both cell survival and death mechanisms [ 64 ]. As discussed earlier, most secretory proteins are produced in the ER lumen, and ER-Ca 2+ levels are vital for proper protein folding [ 65 ]. Fluctuations in ER-Ca 2+ levels lead to protein misfolding and the UPR; misfolded proteins serve as direct ligands for IRE1α activation [ 66 ]. Prolonged IRE1α activation recruits the apoptosis-inducing molecule tumor necrosis factor receptor-associated factor 2 (TRAF2) through its cytosolic domain. This further activates its downstream target pASK1, a MAP kinase kinase kinase (MAP3K), which subsequently phosphorylates c-Jun N-terminal kinase and p38, leading to apoptotic cell death [ 67 , 68 ]. On the contrary, BI-1 plays a protective role against ER-Ca 2+ buildup. BI-1 facilitates Ca 2+ flow from the ER into the mitochondrial matrix via the mitochondrial permeability transition pore, thereby restoring Ca 2+ levels in the ER lumen [ 69 , 70 ]. In the present study, Ca 2+ influx triggered by ETH, ALD, or e-Cig stimulation increased the expression of IRE1α and pASK1 proteins, subjecting hPAEpiC to severe stress. By lowering BI-1 protein expression, ETH, ALD, or e-Cig (1.8% nicotine) stimulation promoted toxic ER-Ca 2+ levels (Fig. 3 ). P2X7r inhibition by A80 restored ER-Ca 2+ levels and reduced the expression of IRE1α and pASK1 proteins, supporting lung epithelial cell survival. EVs released from injured cells differ significantly in their structure and function. EVs carry and transport unique biomolecules depending on the disease condition, making them valuable biomarkers [ 71 ]. In lung carcinoma cells, nicotine stimulation increases EV numbers and transforms EV morphology with an altered miRNA profile [ 72 ]. Active inhalation of nicotine-containing e-Cig vape increased the number of circulating EVs shed by endothelial cells and loaded with the proinflammatory CD40 marker [ 73 ]. In patients, nicotine consumption aggravates the spread of atherosclerotic lesions, potentially via EVs containing miR-21-3p cargo [ 74 ]. Likewise, liver injury inflicted by alcohol abuse also amplifies EV release, carrying inflammatory signaling molecules (NFκB, TLR4, IL-1 receptors, caspase-1) into the circulation [ 75 ]. In the brain, cocaine-induced oxidative stress weakens the mitochondrial membrane potential, forcing the mitochondria to rupture and release their contents via EVs [ 76 , 77 ]. P2X7r activation by eATP and/or NAD + molecules promotes P38-MAPK-facilitated cytoskeletal restructuring in macrophages, resulting in EV release [ 78 ]. However, under normal conditions, extracellular ATP and NAD + levels remain too low for P2X7r activation. Chronic alcohol exposure in humans stimulates inflammasome activation in the liver and brain, followed by tissue damage, resulting in substantial eATP release [ 79 ]. Once released, eATP acts as an endogenous mediator and enhances EV release [ 80 , 81 ]. eATP endocytosed into EVs also mediates actin rearrangement and influences EV size, shape, and adhesion properties [ 82 ].
According to our NTA data, hPAEpiC stimulated with ETH-, ALD-, or e-Cig (1.8% nicotine)-conditioned media generated more EVs (2-fold to 3-fold increase), with larger sizes, than unexposed hPAEpiC (Fig. 4 ). Pretreatment with A80 reverted EV numbers and sizes to those shed by untreated hPAEpiC. In the mouse brain, cocaine-induced inflammation promotes the release of small EVs (exosomes) loaded with mtDAMPs, including misfolded mito-proteins, eATP, ROS, and degraded mtDNA [ 83 ]. When released, these mtDAMPs can activate numerous proinflammatory autocrine and paracrine signals in recipient cells, producing several inflammation-associated diseases [ 84 ]. In our studies, ETH, ALD, or e-Cig (1.8% nicotine) exposure increased the eATP cargo in EVs. dPCR analyses showed large quantities of mtDNA embedded in the lung epithelial EVs (Fig. 5 ), which can act as mtDAMPs. P2X7r inhibition in lung cells exposed to ETH-, ALD-, or e-Cig-conditioned media reversed these EV cargo changes, confirming the role of the P2X7r pathway in lung-EV release and cargo loading. In addition to their unique cargo-carrying capacity, EVs also carry surface ligands/receptors, allowing them to target other cells [ 85 ]. Once attached to the recipient cell, EVs transmit signals via receptor–ligand interactions, are internalized by endocytosis, or fuse with the recipient cell membrane, delivering their cargo into the cytosol and thereby altering the functional state of the recipient cell [ 86 ]. In human macrophages, P2X7r stimulation by eATP promotes inflammation and the release of EVs loaded with IL-1β and P2X7r [ 87 , 88 ]. Similarly, the chronic inflammatory responses seen in diabetic and COVID-19 patients resulted in P2X7r release into the circulation [ 89 , 90 ], most likely through EVs. Following ETH, ALD, or e-Cig (1.8% nicotine) stimulation, our studies demonstrated significant quantities of circulating P2X7r in the lung epithelial cell media, along with EVs bearing greater P2X7r expression on their surface (Fig. 6 ). P2X7r can further stimulate inflammation in recipient cells through NLRP3 activation [ 91 ]. Interestingly, we detected a variety of cargoes (eATP, mtDNA) in hPAEpiC-EVs that can act as mtDAMPs, which can trigger NLRP3 inflammasome-mediated BBB damage [ 92 , 93 ]. P2X7r found on EVs is known to activate NLRP3-mediated Ca 2+ accumulation in bone marrow cells [ 94 ]. In this context, we assessed the paracrine signaling efficacy of the biomolecules detected in lung EVs by incubating BMVECs (cells constituting the BBB) with lung epithelial cell-spent media and freshly isolated lung EVs. In both experiments, in line with circulating eATP (Fig. 5 ) and P2X7r levels (Fig. 6 ), we noticed a significant amount of Ca 2+ accumulation in hBMVECs (Fig. 7 ), whereas spent media and EVs derived from epithelial cells pretreated with A80 displayed lower Ca 2+ levels, supporting the paracrine signaling induced by alcohol or nicotine-containing e-Cig between hPAEpiC and BMVECs. Notably, recombinant P2X7r added to hBMVECs produced functional changes similar to those induced by ETH, ALD, or e-Cig (1.8% nicotine) (Fig. 7 B). In all our experiments, we also exposed hPAEpiC to e-Cig (0% nicotine)-conditioned media, because our earlier studies with nicotine-free e-Cig vape produced pathological effects on mouse brain and lung tissues [ 3 ]. Airway epithelial cells exposed to nicotine-free e-Cig vape also showed increased IL-6 levels [ 95 ] and limited oxygen levels in the circulation [ 96 ].
Some studies in patients with asthma have also shown that nicotine-free e-liquids, made of a high-grade, contaminant-free mixture of propylene glycol and glycerol, did not impact lung function [ 97 ]. In this report, nicotine-free e-Cig media increased P2X7r levels only marginally, resulting in a partial increase in intracellular Ca 2+ levels that did not affect any of our functional assays.
Conclusion The current study demonstrated the harmful effects of e-Cig (1.8% nicotine), ETH, and its main metabolite ALD on mitochondrial function in hPAEpiC, which form the alveolar barrier. Ca 2+ accumulation promoted by these drugs of abuse stimulates ER stress and increases EV release, bolstering eATP and P2X7r cargo transport. Mito-stress induced by ETH and e-Cig (1.8% nicotine) stimulation damages the mitochondrial membrane, letting mtDNA escape through EVs. The variety of cargo detected in hPAEpiC-EVs acts as a potential stimulant of inflammation and triggers functional changes in BMVECs, indicative of BBB injury. These observations also illustrate the mechanisms of distant organ injury by e-Cig or alcohol. The similar functional changes exerted by recombinant P2X7r in BMVECs confirm, for the first time, its role as a paracrine signaling molecule. Inhibition of P2X7r diminished all pathological effects caused by ETH, ALD, or e-Cig (1.8% nicotine) in hPAEpiC. We will continue to test the impact of drugs of abuse on P2X7r expression and its extracellular shedding in mouse models and to test P2X7r paracrine signaling in BBB damage.
Background Use of nicotine-containing products like electronic cigarettes (e-Cig) and alcohol is associated with mitochondrial membrane depolarization, resulting in the extracellular release of ATP and mitochondrial DNA (mtDNA) and mediating inflammatory responses. While the effects of nicotine on the lungs are well known, chronic alcohol (ETH) exposure also weakens lung immune responses and causes inflammation. Extracellular ATP (eATP) released by inflamed/stressed cells stimulates purinergic P2X7 receptor (P2X7r) activation in adjacent cells. We hypothesized that injury caused by alcohol and e-Cig to pulmonary alveolar epithelial cells (hPAEpiC) promotes the release of eATP, mtDNA, and P2X7r into the circulation. This induces paracrine signaling communication, either directly or via EVs, that affects brain cells (human brain microvascular endothelial cells, hBMVEC). Methods We used a model of primary human pulmonary alveolar epithelial cells (hPAEpiC) and exposed the cells to 100 mM ethanol (ETH), 100 μM acetaldehyde (ALD), or e-Cig (1.75 μg/mL of 1.8% or 0% nicotine) conditioned media, and measured mitochondrial efficiency using an Agilent Seahorse analyzer. Gene expression was measured by Taqman RT-qPCR and digital PCR. hPAEpiC-EVs were extracted from culture supernatant and characterized by flow cytometric analysis. Calcium (Ca 2+ ) and eATP levels were quantified using commercial kits. To study intercellular communication via paracrine signaling or by EVs, we stimulated hBMVECs with hPAEpiC cell culture medium conditioned with ETH, ALD, or e-Cig, or with hPAEpiC-EVs, and measured Ca 2+ levels. Results ETH, ALD, or e-Cig (1.8% nicotine) stimulation depleted the mitochondrial spare respiratory capacity in hPAEpiC. We observed increased expression of P2X7r and TRPV1 genes (3-6-fold) and increased intracellular Ca 2+ accumulation (20-30-fold increase) in hPAEpiC, resulting in greater expression of endoplasmic reticulum (ER) stress markers. hPAEpiC stimulated by ETH, ALD, and e-Cig conditioned media shed more EVs with larger particle sizes, carrying higher amounts of eATP and mtDNA. ETH, ALD, and e-Cig (1.8% nicotine) exposure also increased P2X7r shedding into the media and via EVs. hPAEpiC-EVs carrying P2X7r and eATP cargo triggered paracrine signaling in human brain microvascular endothelial cells (BMVECs) and increased Ca 2+ levels. P2X7r inhibition by the compound A804598 normalized mitochondrial spare respiration, reduced ER stress, and diminished EV release, thus protecting BBB function. Conclusion Abusive drugs like ETH and e-Cig promote mitochondrial and endoplasmic reticulum stress in hPAEpiC and disrupt cell functions via P2X7 receptor signaling. EVs released by lung epithelial cells in response to ETH/e-Cig insults carry a cargo of secondary messengers that stimulate brain cells via paracrine signals. Supplementary Information The online version contains supplementary material available at 10.1186/s12964-023-01461-1. Keywords
Supplementary Information
Abbreviations ETH: Ethanol; ALD: Aldehyde; e-Cig: Electronic cigarette; hPAEpiC: Human pulmonary alveolar epithelial cells; hBMVEC: Human brain microvascular endothelial cells; mtDNA: Mitochondrial DNA; eATP: Extracellular ATP; NLRP3: NLR family pyrin domain containing 3; P2X7r: P2X purinergic receptor 7; TRPV1: Transient receptor potential vanilloid 1; FCCP: Carbonyl cyanide p-trifluoromethoxyphenylhydrazone; OXPHOS: Oxidative phosphorylation; SRC: Spare respiratory capacity; ROS: Reactive oxygen species; RNS: Reactive nitrogen species; ER: Endoplasmic reticulum; UPR: Unfolded protein response; mtDAMPs: Mitochondrial damage-associated molecular patterns. Authors’ contributions Conceptualization, N.M. and Y.P.; Methodology, N.M., J.T. and N.T.; Validation, N.M. and N.T.; Formal Analysis, N.M. and S.R.; Investigation, N.M. and S.R.; Resources, Y.P.; Writing – Original Draft Preparation, N.M.; Writing – Review & Editing, U.S., P.B. and Y.P.; Supervision, Y.P.; Funding Acquisition, Y.P. Funding This research was funded by NIH grants R01 DA040619, 1R01AA030841 (Y.P.). Availability of data and materials Not applicable. Declarations Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:33
Cell Commun Signal. 2024 Jan 15; 22:39
oa_package/ef/e8/PMC10789007.tar.gz
PMC10789008
0
Introduction Chemoradiotherapy plays a critical role in the management of patients with locally advanced and unresectable non-small cell lung cancer (NSCLC) [ 1 ]. Advances in the delivery of thoracic radiotherapy (RT) have the potential to improve local tumor control and prognosis in patients with NSCLC. However, these benefits come at the expense of an increased risk of radiation-induced lung toxicity (RILT) [ 2 , 3 ]. Radiation pneumonitis (RP) is the major dose-limiting RILT for RT, and approximately 15–40% of patients with NSCLC experience symptomatic RP [ 4 ]. In severe cases, RP can lead to mortality rates as high as 1.9%, as reported in an international meta-analysis of individual patient data [ 5 ]. Traditionally, RP prediction is based on dosimetric parameters and clinical characteristics [ 6 ]. However, current prediction models based on these factors have not yielded satisfactory results. Therefore, there is an urgent need for a novel predictive model that accounts for individual heterogeneous responses to radiation. Such a model may help identify high-risk patients and prevent RP before the onset of symptoms. Computed tomography (CT) plays a pivotal role in the diagnosis and treatment of RP. In recent years, with the rapid development of image-based radiomics analysis, there has been an increasing focus on using radiomics features to predict the effects of RT and its adverse events, as they provide additional information based on high-dimensional quantitation of medical images [ 7 ]. Du et al. successfully established a predictive model for RP by analyzing the region of interest (ROI) of whole lung tissue using cone-beam CT radiomics [ 8 ]. A nomogram model combining radiomics and clinical features showed superior predictive ability compared with other predictors. Given the increasing attention paid to endogenous factors and radiosensitivity in the context of radiation-related adverse effects [ 9 ], building predictive models solely from clinical and imaging perspectives may not be sufficient. Microarray-based gene expression signatures are used in cancer diagnostics, tumor classification, prognosis, and the prediction of treatment responses [ 10 , 11 ]. Differences in gene expression status between individuals or tumors contribute significantly to differences in the occurrence of RP [ 12 ]. With the advent of genomic sequencing in the era of personalized medicine, it is imperative to explore integrated approaches that incorporate genome-wide genotypic data to predict radiation-induced toxicity. However, there is a lack of studies investigating whether a model combining imaging biomarkers with genetic biomarkers can achieve superior RP identification in patients with lung cancer following RT. In this study, we first aimed to evaluate the capability of CT-based radiomics to characterize RILT and determine whether our constructed radiomic features could be potential imaging markers for predicting RP. Additionally, we developed a comprehensive nomogram model incorporating radiomics features with gene expression alteration signatures for individualized risk assessment and precise prediction of RP.
Methods Ethics statements This study was approved by the Ethical Review Board of Shandong Cancer Hospital and Institute (ethics approval number: SDTHEC2020004042), and all patients provided written informed consent. The present study was conducted in compliance with the standard TRIPOD guidelines for prediction models. The workflow of this study is shown in Fig. 1 . Study design and population This retrospective study, which aimed to evaluate the capability of CT-based radiomics to characterize RILT and determine whether our constructed radiomic features could be potential imaging markers for predicting RP, included 100 patients with NSCLC who were treated with chemoradiotherapy at multiple centers between October 2014 and March 2019. Patients were eligible for this study based on the following criteria: histological diagnosis of unresectable stage IIIA–C NSCLC based on the 8th edition of the AJCC TNM staging system, without severe pleural or pericardial effusion; age > 18 years; adequate lung, bone marrow, renal, hepatic, and cardiac function; and no history of systemic treatment or radiotherapy for thoracic cancers. CT images, gene panels, and the clinical characteristics of each patient were available. Treatment and evaluation of RP All patients underwent standard definitive chemoradiotherapy (dCRT). A median of five cycles of cisplatin- or paclitaxel-based chemotherapy was administered sequentially or concurrently with RT. The choice of the chemotherapy regimen was left to the investigator’s discretion. Intensity-modulated radiation therapy or three-dimensional conformal radiation therapy was administered at a total dose of 50–70 Gy. Follow-up visits were conducted 1 month after RT and every 3 months during the first year. Subsequently, the patients were followed up every 3–6 months. RT-associated pneumonitis was graded according to the toxicity criteria of the Radiation Therapy Oncology Group and the European Organization for Research and Treatment of Cancer [ 13 ]. The primary outcome of RP was defined as symptomatic RP of grade ≥ 2 within 6 months after RT. RP monitoring was based on a combination of clinical symptoms, outpatient medical records, laboratory test results, and visual inspection of follow-up CT scans. CT image acquisition, segmentation and feature extraction CT images in the Digital Imaging and Communications in Medicine format were extracted from the PACS system and then imported into 3D Slicer software (version 5.0.3; http://www.slicer.org ) to extract and analyze radiomics features. The region of the entire (left plus right) lung, regarded as the ROI, was semi-automatically delineated and then manually modified slice by slice on lung-window baseline CT images by an experienced radiation oncologist. Tumors, diaphragm, trachea, and mainstem bronchi were excluded. Another radiation oncologist independently reviewed the lung organ segmentation, and any disputes were resolved by direct consultation between the two readers. All features were extracted using the radiomics plug-in of the 3D Slicer software. Radiomics feature selection and signature construction To validate the reproducibility of the extracted features and minimize operator bias, 20 patients were randomly selected for repeated segmentation by an experienced radiologist at a 2-month interval from the initial evaluation to reacquire the imaging features. Subsequently, the intraclass correlation coefficient (ICC) was calculated, and only features with an ICC of ≥ 0.8 were selected for further analysis.
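This reproducibility filter can be sketched in a few lines; the exact ICC variant used in the study is not stated, so the common two-way random-effects, absolute-agreement form, ICC(2,1), is assumed here, and the `features` mapping is a hypothetical placeholder.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_targets, k_raters) array -- here, one radiomics feature
    measured on the 20 re-segmented patients in 2 segmentation sessions.
    """
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-patient means
    col_means = ratings.mean(axis=0)   # per-session means

    ss_total = ((ratings - grand_mean) ** 2).sum()
    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# features: dict mapping feature name -> (20, 2) array (hypothetical)
stable = [name for name, vals in features.items() if icc_2_1(vals) >= 0.8]
```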
To avoid heterogeneity bias, normalization (z-score transformation) of the image intensity was performed on the entire image to transform the CT values into standardized intensity ranges. The least absolute shrinkage and selection operator (LASSO) algorithm was used to determine the most predictive features. During the model-building process, the optimum parameter lambda (λ) was selected from the LASSO model using 10-fold cross-validation with the minimum mean square error. With an increasing penalty, more regression coefficients are reduced to zero [ 14 , 15 ], and the features with remaining non-zero coefficients are selected. After feature selection, a radiomics signature, also termed the Rad-score, was established from a linear combination of the selected features and the corresponding coefficients derived from the LASSO model. Gene mutation signatures As we previously described [ 16 ], targeted sequencing of a panel of 474 cancer- and radiotherapy-related genes was performed on tumor tissue samples from each patient to identify genetic markers associated with the incidence of radiation-induced thoracic toxicity. Our results demonstrated that single nucleotide polymorphisms in XRCC5 (rs3835), XRCC1 (rs25487), MTHFR (rs1801133), and NQO1 (rs1800566) and somatic alterations in ZNF217 and POLD1 were associated with an increased risk of RP. Radiomics nomogram construction The receiver operating characteristic (ROC) curve for the ability of the Rad-score to predict RP was plotted, and the point on the curve with the largest Youden index was selected as the cut-off value for the Rad-score. Radiomics features, gene mutation signatures, and clinical characteristics were first evaluated in univariate logistic regression analysis to determine whether they were candidate predictors of grade ≥ 2 RP. The candidate predictors thus identified were then included in multivariate logistic regression analysis to screen for independent risk factors. Finally, a comprehensive nomogram was established on the basis of the multivariate analysis. The ROC curve and area under the curve (AUC) were used to evaluate the predictive power of the model. Statistical analysis LASSO regression analysis was performed using Python software (version 3.9, https://www.python.org/ ). The characteristics of patients in the RP and non-RP groups were compared using the chi-square test. Binary logistic regression analysis was used to determine independent predictors of RP in univariate and multivariate analyses. The DeLong test was used to analyze statistical differences in AUC values between the different models. Factors with a p -value < 0.10 in univariate analysis were included in the multivariate analysis. ROC curve analysis was conducted using MedCalc software (MedCalc Software Ltd.). The nomogram, calibration curve, and decision curve were calculated using R software (R Foundation for Statistical Computing). All statistical analyses were conducted using SPSS software (version 25.0; IBM Corp.). All tests were two-tailed, and p < 0.05 was considered statistically significant.
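A minimal sketch of the feature selection and Rad-score construction described above (scikit-learn is assumed as the implementation; whether the LASSO target was the binary RP label or another outcome is not specified in the text, so a binary label is assumed, and all variable names are placeholders):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# X_raw: (n_patients, n_stable_features) matrix of ICC-stable radiomics
# features; y: 0/1 labels for grade >= 2 RP (hypothetical arrays)
X = StandardScaler().fit_transform(X_raw)          # z-score normalization

# lambda (alpha in scikit-learn) chosen by 10-fold CV at minimum MSE
lasso = LassoCV(cv=10, random_state=0).fit(X, y)

selected = np.flatnonzero(lasso.coef_)             # non-zero coefficients survive
print(f"{selected.size} features retained, lambda = {lasso.alpha_:.4f}")

# Rad-score: linear combination of the selected features and coefficients
rad_score = lasso.intercept_ + X[:, selected] @ lasso.coef_[selected]
```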
Results Patient characteristics We retrospectively analyzed 100 patients with unresectable stage III NSCLC who underwent dCRT. The main patient characteristics are summarized in Table 1 . A total of 29 patients (29%) developed grade ≥ 2 RP in the whole cohort. The median interval from RT completion to grade ≥ 2 RP occurrence was 54 days. There was a difference in histology between the RP and non-RP groups; however, this difference was not statistically significant ( p = 0.073). Other baseline data showed no significant differences between the two groups (Table 1 ). Abbreviations: RP: radiation pneumonitis, ADC: adenocarcinoma, SCC: squamous cell carcinoma, SCRT: sequential chemoradiotherapy, CCRT: concurrent chemoradiotherapy, IMRT: intensity modulated radiation therapy, 3D-CRT: 3-dimensional conformal radiation therapy. Radiomics feature selection and signature construction A total of 851 features were extracted from each patient’s ROI using the radiomics plug-in of 3D Slicer, including 18 first-order statistics features, 14 shape features, 75 texture features (24 gray-level co-occurrence matrix, 14 gray-level dependence matrix, 16 gray-level run-length matrix, 16 gray-level size zone matrix, and 5 neighboring gray-tone difference matrix features), and 744 wavelet features. After excluding 264 ineligible features (features with an ICC of < 0.8) (Fig. 2 ), 587 features were included in the subsequent data analysis as stable feature parameters. After LASSO regression analysis, nine radiomics features with non-zero coefficients were screened to develop the radiomics signature (Rad-score) (Fig. 3 A and B). Subsequently, a fitting formula for the Rad-score was constructed as a linear combination of the selected features and their corresponding non-zero coefficients: Rad-score = intercept + Σ i (coefficient i × feature i ), with the nine features and coefficients listed in Table 2 . The optimal cutoff value of the Rad-score was 0.32, and patients were divided into high and low Rad-score groups. As shown in Fig. 4 , the incidence of grade ≥ 2 RP was 52.4% (22/42) in the high Rad-score group compared with 12.1% (7/58) in the low Rad-score group ( p < 0.001). Radiomics nomogram construction According to univariate analysis (Table 3 ), histology, Rad-score, and gene mutations in XRCC1 (rs25487) and NQO1 (rs1800566) were potential high-risk factors contributing to grade ≥ 2 RP development (all p < 0.10). Multivariate analysis revealed that histology ( p = 0.049), Rad-score ( p < 0.001), and XRCC1 (rs25487) allele mutation ( p = 0.004) were independent predictors of grade ≥ 2 RP (Table 3 ). On the basis of the multivariate analysis, we developed a visible radiomics nomogram combining histology, Rad-score, and XRCC1 (rs25487) allele mutation (Fig. 5 ). Typical representative images of patients with and without RP and the corresponding high-risk factors are shown in Fig. 6 . To test the predictive power of the nomogram model, ROC curves were constructed to compare the predictive performance of the nomogram and the three independent predictors for RP. The clinical factor, radiomics, and genomics models yielded AUCs of 0.594, 0.738, and 0.641, respectively (Fig. 7 A). The nomogram model obtained an AUC of 0.827 (Fig. 7 A), which was significantly higher than those of histology (DeLong test, p < 0.001), Rad-score (DeLong test, p = 0.005), and XRCC1 (rs25487) (DeLong test, p < 0.001).
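The Youden-index cutoff used to dichotomize the Rad-score can be reproduced as follows (continuing the hypothetical `y` and `rad_score` from the sketch above; the reported cutoff of 0.32 comes from the study's own data):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

fpr, tpr, thresholds = roc_curve(y, rad_score)
youden_j = tpr - fpr                       # J = sensitivity + specificity - 1
cutoff = thresholds[np.argmax(youden_j)]   # point with the largest Youden index

auc = roc_auc_score(y, rad_score)
high_risk = rad_score >= cutoff            # high vs. low Rad-score groups
```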
The consistency between the nomogram's RP predictions and actual observations was confirmed using calibration curves, and the Hosmer-Lemeshow test indicated no statistical difference between the predicted and actual values ( p = 0.959) (Fig. 7 B). The decision curves exhibited satisfactory net benefit for the nomogram across the threshold probabilities (Fig. 7 C).
Discussion In this study, we observed a strong quantitative relationship between CT image-based radiomics features and RP in patients with unresectable stage III NSCLC. Accordingly, the derived radiomics features proved to be promising CT-based biomarkers for predicting RP. We developed a nomogram model combining clinical factors, radiomics, and genomics on the basis of multivariate logistic regression analysis. More importantly, the combined model showed the best predictive ability compared with the clinical factor, radiomics, or genomics model alone. This result shows that the combined nomogram model improves on the ability of radiomics features and gene mutation signatures alone to predict the risk of RP development. Radiomics is an emerging image analysis technique that can extract a large number of quantitative features from image data to quantify tumor heterogeneity, which is useful for personalized prediction [ 17 , 18 ]. By providing a three-dimensional characterization of lesions, models based on CT radiomic features have been developed to detect nodules [ 19 ], discriminate between malignant and benign lesions [ 20 ], and characterize histology [ 21 ], stage [ 22 ], and genotype [ 23 ]. Furthermore, radiomics has shown promising results in predicting radiation-induced lung injury [ 24 ]. Krafft et al. demonstrated that the addition of CT radiomics features extracted from the whole lung volume could improve the prediction of RP in patients with NSCLC [ 25 ]. However, given the influence of endogenous factors and the radiosensitivity of lung tissue on RP occurrence [ 9 ], relying on standard clinical and radiomic features alone may not provide sufficient predictive accuracy. Thus, the present study established a combined model that incorporated clinical characteristics, radiomic features, and gene mutation signatures. Our research provides a new direction for individualized, response-adapted decision-making for radiotherapy in NSCLC. Gene-expression signatures, each composed of dozens to hundreds of genes, have the potential to improve diagnosis, prognosis, and prediction of treatment response [ 26 ]. Recently, the association between genetic factors and toxicity has been demonstrated in studies of genetic variants implicated in radiation-induced pneumonitis in patients with lung cancer [ 27 ]. These findings contribute to the identification of biological mechanisms and increase our understanding of the genetic factors underlying susceptibility to radiation-induced adverse effects. To the best of our knowledge, several studies have suggested that the XRCC1 (rs25487) allele mutation serves as a potential biomarker for predicting RP in patients with NSCLC [ 28 , 29 ]. However, these studies were based on traditional low-throughput sequencing methods. Taking advantage of next-generation sequencing technology, our study provides further support for the association between the XRCC1 (rs25487) allele mutation and grade ≥ 2 RP. In the era of modern personalized medicine, integrated multiomics approaches offer improved diagnostic accuracy and precise predictions. Integrated models combining radiomics with genomics have outperformed either alone in predicting prognosis or assessing postoperative recurrence risk in NSCLC [ 30 , 31 ]. However, no previous studies have integrated radiomics and genomics to predict the risk of RP in patients with NSCLC.
The present study aimed to fill this research gap and provide a unique perspective for identifying RP that differs from conventional methods. Our combined model likewise showed optimal predictive performance. The current paradigm of gene expression profiling involves invasive surgery or biopsy procurement of tissue specimens. Unfortunately, this approach presents considerable challenges, including elevated costs, extended turnaround times, and technical complexity. These obstacles hamper the widespread implementation of gene expression profiling and limit its utility in a diverse range of patients with cancer. Radiogenomics, which highlights the link between radiomic features and gene expression patterns in patients with cancer, can be considered a substitute for genetic testing [ 32 ]. Thus, future studies can investigate and establish correlations between low-cost, non-invasive image-based radiomic signatures and specific gene expression status in patients with RP. Despite these findings, we acknowledge that our study had some limitations. This retrospective study had a small sample size, which may explain the modest predictive accuracy of the model. Multi-center collaboration was undertaken to mitigate this limitation, but external validation is still lacking. The integrated prediction model developed in this study should be further validated using data from larger sample sizes.
Conclusion This study explored the utility of radiomics and genomics models as a feasible approach to predict grade ≥ 2 RP in patients with unresectable stage III NSCLC treated with dCRT. Compared with any single clinical factor, radiomics model, or genomics model, the integrated model showed superior predictive performance. Our integrated model may be useful for early screening to identify patients with NSCLC who are predicted to be at substantially greater risk of developing RP resulting from radiation exposure.
Background Chemoradiotherapy is a critical treatment for patients with locally advanced and unresectable non-small cell lung cancer (NSCLC), and it is essential to identify high-risk patients as early as possible owing to the high incidence of radiation pneumonitis (RP). Increasing attention is being paid to the effects of endogenous factors on RP. This study aimed to investigate the value of computed tomography (CT)-based radiomics combined with genomics in analyzing the risk of grade ≥ 2 RP in unresectable stage III NSCLC. Methods In this retrospective multi-center observational study, 100 patients with unresectable stage III NSCLC who were treated with chemoradiotherapy were analyzed. Radiomics features of the entire lung were extracted from pre-radiotherapy CT images. The least absolute shrinkage and selection operator algorithm was used for optimal feature selection to calculate the Rad-score for predicting grade ≥ 2 RP. Genomic DNA was extracted from formalin-fixed, paraffin-embedded pretreatment biopsy tissues. Univariate and multivariate logistic regression analyses were performed to identify predictors of RP for model development. The area under the receiver operating characteristic curve was used to evaluate the predictive capacity of the models. Statistical comparisons of area under the curve values between different models were performed using the DeLong test. Calibration and decision curves were used to demonstrate discrimination and clinical benefit, respectively. Results The Rad-score was constructed from nine radiomic features to predict grade ≥ 2 RP. Multivariate analysis demonstrated that histology, Rad-score, and XRCC1 (rs25487) allele mutation were independent high-risk factors correlated with RP. The area under the curve of the integrated model combining clinical factors, radiomics, and genomics was significantly higher than that of any single model (0.827 versus 0.594, 0.738, or 0.641). Calibration and decision curve analyses confirmed the satisfactory clinical feasibility and utility of the nomogram. Conclusion Histology, Rad-score, and XRCC1 (rs25487) allele mutation could predict grade ≥ 2 RP in patients with locally advanced unresectable NSCLC after chemoradiotherapy, and the integrated model combining clinical factors, radiomics, and genomics demonstrated the best predictive efficacy. Keywords
Acknowledgements Not applicable. Author contributions LL and SHY designed and supervised the study. JRL participated in the study and drafted the manuscript. SST, WJL, NL, and ZDX collected data. QXY and FCY analyzed data. All authors read and approved the final manuscript. Funding This study was supported in part by the National Natural Science Foundation of China (NSFC82073345), Natural Science Innovation and Development Joint Foundation of Shandong Province (ZR202209010002), Jinan Clinical Medicine Science and Technology Innovation Plan (202019060) and the Taishan Scholars Program to Shuanghu Yuan, and Major Basic Research Program of National Natural Science Foundation of Shandong (ZR2022ZD16) and the Natural Science Youth Foundation of Shandong Province (ZR2023QH155) to Li Li. Data availability The datasets used and analyzed during the current study are available from the corresponding author upon reasonable request. Declarations Ethics approval and consent to participate The study was approved by the Ethical Review Board of Shandong Cancer Hospital and Institute (ethics approval number: SDTHEC2020004042). Consent for publication Not applicable. Competing interests The authors declare no competing interests. Abbreviations NSCLC: Non-small cell lung cancer; RT: Radiotherapy; RILT: Radiation-induced lung toxicity; RP: Radiation pneumonitis; CT: Computed tomography; ROI: Region of interest; dCRT: Definitive chemoradiotherapy; ICC: Intraclass correlation coefficient; LASSO: Least absolute shrinkage and selection operator; ROC: Receiver operating characteristic; AUC: Area under the curve
CC BY
no
2024-01-16 23:45:33
BMC Cancer. 2024 Jan 15; 24:78
oa_package/1b/b2/PMC10789008.tar.gz
PMC10789009
38221610
Background The constant need to improve the quality of healthcare in the NHS relies on the ability to assess the quality of existing and new services over time. With the recent emphasis in the NHS on value-based commissioning, it is necessary to monitor and measure outcomes [ 1 ]. Quality Adjusted Life Years (QALYs) are composite measures of length of life and quality of life and provide a way of measuring the impact of health care interventions on health-related quality of life (HRQoL); for example, two years lived at a utility of 0.7 yield 1.4 QALYs. Cost per QALY is commonly used to assess the cost-effectiveness of interventions to inform resource allocation. The use of outcome measures in the United Kingdom (UK) has increased over the last decade. The Short Warwick-Edinburgh Mental Well-being Scale (SWEMWBS) is commonly used in the UK to measure mental wellbeing [ 2 , 3 ]. The SWEMWBS is a validated scale capturing positive aspects of mental wellbeing. It was developed from the original 14-item version, which, in turn, was developed from Affectometer 2 in New Zealand, and has been used in different settings with the general population, deaf people, and clinical populations, including those experiencing mental health difficulties [ 4 – 8 ]. While a statistical relationship between life satisfaction and the SWEMWBS has been estimated and can be used to derive the social value of changes in SWEMWBS scores [ 9 ], it cannot be used to generate QALYs. Utility mapping is a technique by which utilities are estimated when data have not been collected with preference-based measures. To develop such an algorithm, it is recommended that there is both conceptual and empirical overlap between the source measure (generally a non-preference-based measure that is being mapped from) and the target measure (generally a preference-based measure for which utilities need to be calculated) [ 10 ]. In the UK, the EQ-5D is the most commonly used measure to generate QALYs in economic evaluation, owing to the recommendations of the National Institute for Health and Care Excellence (NICE) reference case [ 11 ]. Concerns have been raised in the literature about the validity of the EQ-5D for capturing health-related quality of life in the area of mental health or wellbeing [ 12 , 13 ]. The focus of the EQ-5D is on physical health, with only one question on mental health; one can therefore expect little conceptual overlap between the EQ-5D and the SWEMWBS, making the EQ-5D a less suitable source measure for developing a mapping algorithm. The Recovering Quality of Life (ReQoL) measures are validated outcome measures developed mainly for a mental health population aged 16 and over [ 14 – 16 ] and are increasingly used in the UK in the general population. ReQoL-10 and ReQoL-20 comprise 10 and 20 mental health items, respectively, and one physical health item [ 17 ]. The first 10 items of the ReQoL-20 are identical to the ReQoL-10. The ReQoL-UI is the preference-based measure, consisting of six mental health items and one physical health item from the ReQoL-10. Preference weights for the UK were estimated from a sample of 305 members of the general population using the time trade-off method [ 18 ]. Previous work has reported a large Pearson correlation coefficient of 0.90 between SWEMWBS and ReQoL scores [ 17 ]. Given that conceptual overlap between the two measures has been established, mapping between them is a viable option. Only very recently, after the generation of our mapping algorithm, was a UK preference-based value set for the SWEMWBS published [ 19 ].
The primary aim of this paper is to estimate a mapping algorithm from the SWEMWBS to the ReQoL-UI as an alternative way to predict utilities. The secondary aim is to compare different traditional mapping methods to add to the evidence base around mapping techniques.
Methods Data Data were collected in two separate studies between November 2017 and September 2018 from 18 secondary care mental health services and one general practitioner surgery across England. Participants from secondary care and primary care were recruited face-to-face (94%) and by post (6%), respectively. Participants were aged 16 and over and were mental health service users with diagnoses such as anxiety, depression, schizophrenia, other psychotic disorders (including schizo-affective disorders), bipolar disorder and personality disorder. While all participants completed the SWEMWBS and demographic questions, those in Study 1 and Study 2 completed the ReQoL-20 and ReQoL-10, respectively. Data were pooled to maximise sample size with a view to reducing uncertainty around the estimates. Measures The SWEMWBS contains seven positively worded items, each answered on the following 1-to-5 frequency-based Likert scale: ‘none of the time’, ‘rarely’, ‘some of the time’, ‘often’ and ‘all of the time’. Transformed scores using Rasch analysis are recommended for the SWEMWBS, but in routine practice items are summed to produce a total score ranging from a minimum of 7 to a maximum of 35, with higher scores representing higher levels of mental wellbeing [ 3 ]. The items cover feeling optimistic about the future, feeling useful, feeling relaxed, dealing with problems well, thinking clearly, feeling close to other people and being able to make up one’s own mind about things. The ReQoL measures contain a mixture of positively and negatively worded items scored from 0 to 4 or 4 to 0, respectively, where 0 represents the poorest quality of life and 4 the highest. The frequency-based response options are: ‘none of the time’, ‘only occasionally’, ‘sometimes’, ‘often’ and ‘most or all of the time’. The themes of the ReQoL measures are activity; belonging and relationships; choice, control and autonomy; hope; self-perception; wellbeing; and physical health. The ReQoL-UI is not administered as a separate measure but consists of seven items from the ReQoL-10, with one item from each theme. Utilities range from − 0.195 to 1, where 1 represents full health and 0 the state of being dead. Values less than zero represent a perceived health state worse than death. Mapping statistical analyses To develop mapping functions, we used both direct and indirect (response) mapping. Before undertaking the mapping, it was important to determine whether to use all the SWEMWBS items or only selected ones. First, we calculated Spearman correlations; SWEMWBS items with coefficients of less than 0.4 with the ReQoL-UI would be considered weakly correlated [ 20 ]. For this study, we had decided that items with correlation coefficients of less than 0.2 would not be included unless there were deliberate reasons why they should be. Choice of covariates For direct mapping, the chosen SWEMWBS items were mapped to ReQoL-UI scores to capture the granularity provided by each item. The squared terms of the chosen SWEMWBS items were also included in order to capture any nonlinear relationship. For indirect mapping, we regressed each ReQoL-UI item on all the SWEMWBS items [ 21 ] and their squared terms. In both types of mapping, age and sex were included as covariates, as they are likely to improve the mapping functions and are usually available for participants.
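The item-screening rule described above can be sketched as follows (the DataFrame and column names are hypothetical placeholders):

```python
from scipy.stats import spearmanr

# df: pandas DataFrame with one column per SWEMWBS item plus the
# ReQoL-UI utility (hypothetical names)
swemwbs_items = [c for c in df.columns if c.startswith("swemwbs_")]
rho = {item: spearmanr(df[item], df["reqol_ui"]).correlation
       for item in swemwbs_items}

# below 0.4 counts as weakly correlated; items below 0.2 would be
# dropped unless there were deliberate reasons to keep them
keep = [item for item, r in rho.items() if abs(r) >= 0.2]
```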
Model types Three model types were chosen for the direct mapping: Ordinary Least Squares (OLS), Tobit, and Generalised Linear Models (GLM; Gaussian and Gamma, both with the log link). Despite its limitations, OLS remains the most used technique for mapping [ 22 ]. Therefore, the ReQoL-UI was regressed on all SWEMWBS items to derive a preliminary mapping function. Given the bounded distribution of the ReQoL-UI, we also considered Tobit. However, neither of these models can take into account the non-normal distribution of the ReQoL-UI, and therefore we also estimated GLM regressions. The GLM, an extension of OLS, allows for a non-normal distribution of the dependent variable and can account for skewed and bimodal data. For the indirect mapping, we used a seemingly unrelated ordered probit and calculated the margins after each regression. We considered the significance of the marginal effects [ 21 ]. Performance of mapping algorithms Following the guidelines in the literature, we considered a number of measures of model fit to compare results across models [ 23 ]: mean absolute error (MAE), root mean square error (RMSE), percentage of observations with absolute errors greater than 0.1 [ 22 ], Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), and visual representation of model fit. We plotted the means of the predicted and actual ReQoL-UI scores across the range of overall SWEMWBS scores. We also performed a simulation of patients (1000 repetitions) in order to add heterogeneity to the sample, rather than a single mean with no variation, for each of the mapping models. To visually display the results of these simulations, we plotted cumulative distribution functions (CDFs). The simulations allow us to assess how well the models predict not only at the mean (which we assess using traditional model fit statistics) but also at the extremes of the distribution. This is important for cost-effectiveness analysis, where patient populations are unlikely to be the ‘average’ person and often have values that are far from the mean [ 24 ]. Throughout the study and reporting, we followed the most recent set of ‘good practices’ for mapping to estimate utilities from non-preference-based measures [ 23 ]. All analyses were undertaken in STATA 17, and a mapping calculator was created in Excel 2016.
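A minimal sketch of the direct mapping models and fit statistics, assuming a Python/statsmodels implementation rather than the STATA code actually used (Tobit has no statsmodels implementation and is omitted; with a log link the fitted mean is strictly positive, so rescaling utilities before fitting is assumed here, not a step reported in the text):

```python
import numpy as np
import statsmodels.api as sm

# X_items: SWEMWBS item scores, their squared terms, age and sex;
# y: observed ReQoL-UI utilities (hypothetical arrays)
X = sm.add_constant(X_items)

ols = sm.OLS(y, X).fit()
glm = sm.GLM(y, X,
             family=sm.families.Gaussian(link=sm.families.links.Log())).fit()

def fit_stats(y_obs, y_pred):
    err = y_obs - y_pred
    return {"MAE": np.mean(np.abs(err)),
            "RMSE": np.sqrt(np.mean(err ** 2)),
            "share AE > 0.05": np.mean(np.abs(err) > 0.05)}

for name, model in [("OLS", ols), ("GLM", glm)]:
    print(name, fit_stats(y, model.predict(X)), "AIC:", model.aic)
```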
Results Data were collected from 2638 participants with mental health difficulties. Analyses were conducted on participants with complete data for the ReQoL-UI items, SWEMWBS items, age and sex, which led to the removal of 65 observations, leaving a sample of 2573 participants. The mean (sd) age was 42 (14) years. The participants’ characteristics for the whole sample are presented in Table 1 (Table S 1 presents these details for each study separately). Both ReQoL and SWEMWBS scores spanned the entire range of possible values (Table 2 ). We have included the seven ReQoL items that are used to calculate the ReQoL-UI. Figure 1 shows the distributions of the ReQoL-UI and the SWEMWBS. The ReQoL-UI distribution is not normal; instead, it is multimodal with a spike at full health. The SWEMWBS distribution is closer to normal, but with gaps at some scores. For the ReQoL-UI, there are 64 (2.5%) and 41 (1.6%) observations at the best and worst health states, respectively. For the SWEMWBS, there are 72 (2.8%) and 57 (2.2%) observations at the highest and lowest possible scores, respectively. The item endorsement frequencies for the ReQoL-UI and SWEMWBS are presented in Tables S 2 a-b (Supplementary materials). Correlation of items The Spearman rank correlation between the ReQoL-UI and each SWEMWBS item ranged between 0.498 and 0.599, which indicated that better predictions would be obtained if all items were used. The correlation between the SWEMWBS score and the ReQoL-UI score was 0.593 (Tables S 3 -S 4 , Supplementary materials). The correlations between SWEMWBS items and ReQoL mental health items ranged from 0.382 to 0.607, with the smallest correlations observed between SWEMWBS items and the physical item in the ReQoL-UI (coefficients ranging from 0.204 to 0.266). Therefore, all SWEMWBS items were included in the mapping regressions. Model performance The results by model type are presented in Table 3 below. Direct mapping The model fits for all three models were very similar. MAE (RMSE) was 0.147 (0.197) for both the OLS and Tobit models, and 0.149 (0.198) for the GLM specification. The proportion of observations with absolute errors (AE) greater than 0.05 ranged from 53 to 55%. From the graphical representations (Fig. 2 ), there is no systematic pattern of predictions above or below the observed values by SWEMWBS score. However, the results from the simulations, which present model performance across the spectrum of utility (Fig. 3 ), show a clear disparity between the observed and predicted data across the entire distribution of the SWEMWBS for the direct mapping methods. The GLM with the Gaussian log link had lower AIC and BIC than the Gamma log link; therefore, the Gamma log link results are not presented in this paper. The regression coefficients generated from the three model specifications can be found in Table S 6 in the Supplementary Materials. Indirect mapping The MAE and RMSE for the response mapping were 0.156 and 0.199, respectively, marginally higher than the errors produced by the direct mapping methods. However, Fig. 3 shows that there is much less bias, regardless of ReQoL status, when using the response mapping, which fits the data very closely across all SWEMWBS scores.
Discussion This study aimed to develop a mapping algorithm to predict ReQoL-UI scores from the widely used SWEMWBS. We mapped the SWEMWBS to the ReQoL using different regression techniques, from the simplest to more sophisticated ones. Given the previous inability to calculate utilities from the SWEMWBS, the mapping algorithms developed here will enable researchers to produce ReQoL-UI utilities from SWEMWBS data. We considered not only the model fit at the means of the distribution, but also used simulated data to consider heterogeneity, making the mapping algorithm more appropriate for use in cost-utility studies. The detailed results are presented in the Supplementary materials of this paper. An algorithm for the response mapping has been estimated to generate ReQoL-UI scores and is available in Excel in the Supplementary materials. Physical health was identified as an important theme in the life of people with mental health conditions during the early development of the ReQoL [ 17 , 18 , 25 ]. This theme is not captured by the SWEMWBS, hence the weak correlations observed between the SWEMWBS items and the physical item of the ReQoL-UI. While this is likely to make predictions less accurate, until preference weights are elicited for the SWEMWBS, the ReQoL-UI remains the most appropriate measure for generating utilities from the SWEMWBS, given that both measures capture mental wellbeing. For the direct mapping methods, we found very little difference among the three regression specifications in terms of model fit and visual inspection of modelled and actual utility values across the SWEMWBS score range. The response mapping showed the highest proportion (60%) of observations with AE > 0.05. However, the comparison of mapping techniques and model specifications in this paper illustrates the importance of looking at the uncertainty around model predictions and at the model outputs once patient variability is considered. All models estimated mean utility well, including when looking specifically at observations grouped by total SWEMWBS score. However, using simulated data, we showed that response mapping outperformed the other mapping techniques once patient variability was taken into account. This is particularly important if the mapping algorithm is to be used for cost-utility analysis. Mean errors do not always give a good representation of model fit if the majority of observations lie in a part of the distribution where a model fits well. Observations are more difficult to estimate in other parts of the distribution (for example, at the severe end of utilities) and may be underrepresented in the data, but it is important that they are also estimated accurately for cost-effectiveness analysis, in line with findings from other papers [ 24 , 26 – 29 ]. Therefore, we recommend the response mapping to generate ReQoL-UI scores from SWEMWBS responses if the estimates are going to be used for economic evaluation. The algorithms presented here are also a useful way of comparing SWEMWBS scores with scores from the ReQoL-10 and ReQoL-20. In the UK, mental health trusts and other charities have used either one of the ReQoL measures or the SWEMWBS. There may be reasons to compare SWEMWBS and ReQoL scores when only one of the measures has been administered. For this purpose, ideally, we would produce separate mapping functions between the two measures, because the correlation between the SWEMWBS and the ReQoL-10 is higher than that with the ReQoL-UI.
This difference can be explained by the fact that the ReQoL scores do not include the physical item, while the ReQoL-UI does. However, in the absence of mapping functions between the SWEMWBS and the ReQoL-10 and ReQoL-20, the algorithms presented here can be used to compare the two measures. This study has several limitations. First, the mapping was performed using data from a population experiencing a broad range of mental health difficulties. The mapping functions need to be tested in other populations to assess whether their use could be extended to the general population and other groups. Second, it is recognised that, while the algorithm is recommended for use in populations similar to the ones in this study, it may not be applicable in very different populations. Third, we have not explored more recently developed mapping techniques, such as mixture models. There is some evidence that mixture models can produce more accurate predictions because they better capture the unusual, non-normal and limited distributions common among health utility data [ 24 ]. Future research is needed into how mixture models predict ReQoL utilities. In this study, by using indirect mapping, we overcame some of the problems associated with the more commonly used traditional mapping methods. Using OLS can lead to predictions outside the feasible range of utility values. The Tobit model can handle the bounded nature of preference-based measures by capping predicted values at 1 (full health). The GLM models are limited in that they are unable to predict negative values. The OLS, Tobit and GLM models also fail to capture the multimodal nature of the ReQoL. The indirect mapping method used in this study allows for a more flexible approach while also predicting values within the feasible range, by estimating the probabilities of each ReQoL dimension score and then calculating the expected ReQoL utility value from the probability-weighted tariff values.
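The expected-utility step of the response mapping can be sketched as follows; the item structure and tariff values are illustrative placeholders, not the published UK ReQoL-UI value set:

```python
import numpy as np

def expected_utility(probs: dict, tariff: dict, base: float = 1.0) -> np.ndarray:
    """probs[item]: (n, 5) predicted probabilities for response levels 0-4
    of one ReQoL-UI item (e.g. margins from the ordered probit);
    tariff[item]: length-5 array of utility decrements per level."""
    n = next(iter(probs.values())).shape[0]
    eu = np.full(n, base)
    for item, p in probs.items():
        eu += p @ tariff[item]     # probability-weighted decrement per item
    return eu
```

By construction, every predicted value stays inside the feasible utility range implied by the tariff, which is the property emphasised above.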
Conclusions This is the first study to map from the SWEMWBS to any preference-based measure. The paper presents mapping functions to generate utility values from the SWEMWBS via the ReQoL-UI. When only point estimates are considered, there is little difference between the various mapping methods. However, when heterogeneity is considered, response mapping outperforms the direct mapping methods. The algorithm based on the indirect mapping technique is therefore recommended for generating utilities for use in cost-utility analyses. We have produced a tool in the form of a calculator to help researchers easily compute utilities from the SWEMWBS. Future research is needed to compare the values generated from the mapping algorithm with those generated directly from the new set of preference weights elicited using health states from the SWEMWBS.
Background The Short Warwick-Edinburgh Mental Well-being Scale (SWEMWBS) is a widely used non-preference-based measure of mental health in the UK. The primary aim of this paper is to construct an algorithm to translate SWEMWBS scores into utilities using the Recovering Quality of Life Utility Index (ReQoL-UI). Methods Service users experiencing mental health difficulties were recruited in two separate cross-sectional studies in the UK. The following direct mapping functions were used: Ordinary Least Squares, Tobit, and Generalised Linear Models. Indirect (response) mapping was performed using a seemingly unrelated ordered probit to predict responses to each of the ReQoL-UI items and subsequently to predict ReQoL-UI utilities from the SWEMWBS using the UK tariff. The performance of all models was assessed by the mean absolute errors and root mean square errors between the predicted and observed utilities, and by graphical representations across the SWEMWBS score range. Results Analyses were based on 2573 respondents who had complete data on the ReQoL-UI items, SWEMWBS items, age and sex. The direct mapping methods predicted ReQoL-UI scores across the range of SWEMWBS scores reasonably well. Very little difference was found among the three regression specifications in terms of model fit and visual inspection of modelled and actual utility values across the SWEMWBS score range. However, when running simulations to consider uncertainty, it is clear that response mapping is superior. Conclusions This study presents mapping algorithms from the SWEMWBS to the ReQoL as an alternative way to generate utilities from the SWEMWBS. The algorithm from the indirect mapping is recommended for predicting utilities from the SWEMWBS. Supplementary Information The online version contains supplementary material available at 10.1186/s12955-023-02220-z. Keywords
Supplementary Information
Abbreviations AE: Absolute error; AIC: Akaike information criterion; BIC: Bayesian information criterion; GLM: Generalised linear model; HRQoL: Health-related quality of life; MAE: Mean absolute error; NICE: National Institute for Health and Care Excellence; OLS: Ordinary least squares; QALYs: Quality adjusted life years; ReQoL: Recovering Quality of Life; ReQoL-UI: Recovering Quality of Life – Utility Index; RMSE: Root mean square error; SWEMWBS: Short Warwick-Edinburgh Mental Well-being Scale; UK: United Kingdom. Acknowledgements The authors would like to thank all the participants in the study, the healthcare professionals and other staff involved in the recruitment of participants and conduct of the study. Authors’ contributions ADK: Conceptualisation, funding acquisition, investigation, data curation, supervision, methodology, formal analysis, Writing – original draft, Writing – review and editing; LAG: Conceptualisation, supervision, methodology, formal analysis, visualisation, Writing – original draft, Writing – review and editing (LAG produced the Excel calculator in the supplementary materials); EM: Methodology, formal analysis, visualisation, Writing – review and editing; HW: Methodology, formal analysis, visualisation, Writing – review and editing; GOL: Methodology, formal analysis, visualisation, Writing – review and editing. All authors read and approved the final manuscript. Funding Financial support for this independent research was provided by the National Institute for Health Research (NIHR) Policy Research Programme, conducted through the Policy Research Unit in Economic Methods of Evaluation in Health and Social Care Interventions (EEPRU), PR-PRU-1217-20401. The data collection was part funded by The Health Foundation’s Efficiency Research Programme. LG is funded by a Medical Research Council (MRC) fellowship (Grant Number MR/S009868/1). EM is funded by a Wellcome Trust Doctoral Grant [224852/Z/21/Z]. HW is funded by the National Institute for Health Research (NIHR) Applied Research Collaboration East Midlands (ARC EM). The views and opinions expressed are those of the authors, and not necessarily those of the NHS, the NIHR, the Health Foundation, the Department of Health and Social Care, or the Wellcome Trust. Availability of data and materials The data that support the findings of this study are available from the corresponding author upon reasonable request. Declarations Ethics approval and consent to participate Ethics approval was obtained from the Edgbaston National Research Ethics Service Committee, West Midlands (14/WM/ 1062). Informed consent was obtained from all participants in the study. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:33
Health Qual Life Outcomes. 2024 Jan 15; 22:7
oa_package/eb/4a/PMC10789009.tar.gz
PMC10789010
0
Background Nearly all plants contain a small, but significant, fraction of their nuclear genomes composed of DNA sequences derived from their chloroplasts [ 1 ]; these nuclear integrants of plastid DNA are commonly known as nuclear plastid DNA sequences (NUPTs) [ 2 ]. The process of NUPTs' formation has been commonly associated with the process by which most genes present in the bacterial ancestor of plastids were transferred to the nuclear genome and their products eventually retargeted to their ancestral compartment after the endosymbiotic event that gave rise to the chloroplast organelle. However, whereas the latter entails the loss of vast amounts of DNA, with the subsequent reduction of its size, and the transfer of most of the genes originally present in the protoorganelle organism to the nuclear genome [ 3 , 4 ], the former involves the copying of stretches of DNA from the chloroplast genome. Even though most NUPTs are less than 1 kb in length, NUPTs of recent origin spanning the whole chloroplast chromosome have been detected in Oryza sativa (rice) and Populus trichocarpa [ 5 , 6 ], and their formation did not result in the shrinking of the plastid genome. Although the process of NUPTs' formation is still poorly understood, it is expected to involve the following sequence of events. First, the duplication of a stretch of DNA present in the chloroplast genome. Second, the lysis of chloroplast organelle membranes to allow the leakage of the duplicated plastid DNA. Third, the import of the leaked plastid DNA into the nucleus. Fourth, the integration of the plastid DNA into the nuclear genome. At present, no mechanism has been formally proposed to explain the recurrent duplication of stretches of plastid DNA of varying sizes that are at the origin of NUPTs. The biological mechanisms involved in the leakage of plastid DNA to the cytoplasm and its subsequent import by the nucleus are not yet completely elucidated either, although gametogenesis and cell stress (especially pollen development and mild heat stress, respectively) have been reported to induce the disruption of chloroplast organelle membranes [ 2 , 7 – 10 ]. It has also been suggested that certain kinds of stresses, such as ionizing radiation and pathogen infections, may not only trigger the leakage of plastid DNA to the nucleocytosolic compartment, but also favor its integration into the nuclear genome [ 11 ]. The molecular mechanisms of NUPTs' integration into the nuclear genome are not fully described either, but they are probably diverse, generally involve double-stranded breaks (DSBs) and DNA damage, and are thus potentially mutagenic. For example, it has been hypothesized that NUPTs' integration is mediated by non-homologous end joining (NHEJ) during DSB repair events [ 12 – 14 ]. Once integrated, most NUPTs are expected to be rapidly fragmented, shuffled away through transpositions and genome rearrangements and, eventually, purged from the nuclear genome [ 15 – 17 ]. As a consequence, the distribution of NUPTs by age should follow an exponential distribution, indicating a continuous rate of NUPTs' formation and decay throughout time [ 15 ]. Although such a pattern has been suggested for rice, Medicago truncatula , P. trichocarpa and Zea mays [ 15 , 17 , 18 ], different patterns have been observed in other species such as Arabidopsis, Carica papaya , Fragaria vesca , Moringa oleifera (moringa) and Vitis vinifera [ 17 – 19 ].
A second consequence is the expected positive correlation between NUPTs' size and age, an observation that has been suggested for several species, despite not having been explicitly tested statistically [ 7 , 16 , 17 , 20 ]. Indeed, the fraction of nuclear genomes occupied by NUPTs varies enormously among species and even among different populations of the same species [ 5 , 21 , 22 ]. Most species show around 0.1% of plastid DNA in their nuclear genome, with very few showing more than 1% [ 1 ]. These large variations in the fraction of nuclear genomes occupied by NUPTs raise the question of what evolutionary forces may lie behind the fixation of variable fractions of plastid DNA in plant nuclear genomes. However, previous studies on the mechanisms of origin and evolutionary fate of NUPTs were mostly focused on a limited number of species and involved a reduced number of NUPTs. A more detailed picture will certainly benefit from a larger number of NUPTs and a higher fraction of the nuclear genome occupied by plastid DNA. So far, the largest fraction of DNA of plastid origin found in any plant nuclear genome (4.71%) has been detected in the orphan crop moringa [ 19 ]. In the present study, we leveraged a recent chromosome-scale version of the moringa genome [ 23 ] to examine the spatial distribution and arrangement in clusters of NUPTs, to explicitly model and test the correlation between their age and size distributions, and to characterize their regions of origin within the chloroplast genome and their sites of insertion in the nuclear one. Our results reveal an unanticipated complexity of the mechanisms at the origin of NUPTs as well as of the evolutionary forces behind their fixation.
Methods Detection and analysis of plastid DNA in the nuclear genome NUPTs in the published versions of the moringa nuclear genome [ 25 – 27 ] were detected using the BLASTN local alignment tool from the BLAST+ program package v2.12.0+ [ 37 ]. The chloroplast genome sequence of moringa [ 24 ] (Table 1 ) was used as query and the published versions of its nuclear genome sequence (Table 1 ) as databases. The parameters were as follows: -evalue 1e-5 -word_size 9 -penalty -2 -show_gis -dust no -num_threads 8. In order to deal with low complexity regions putatively present in the chloroplast genome that might result in spurious alignments wrongly detected as homologous regions, the analyses were repeated with the -dust setting turned on (-dust yes). Results in terms of sequence identity and density of NUPTs were represented as circular plots, constructed using Circos version 0.69-8 [ 38 ]. In order to correct for redundancy of NUPTs resulting from the Inverted Repeat (IR) regions of the chloroplast genome, BLASTN hits involving IR regions were counted only once. In order to detect NUPTs showing 100% identity with the chloroplast genome, plus their 100 bp flanking regions, in the previously published versions of the moringa nuclear genome, BLASTN alignments were first performed using the whole set of 100% identity NUPTs as query and the genome sequence of each version as database. NUPTs and their best scoring hits detected in each version of the genome were then aligned using the MUSCLE algorithm [ 39 ] through the SeaView v5.0.5 program [ 40 ]. The resulting multiple sequence alignments were edited using GeneDoc v2.7 [ 41 ]. In order to examine whether NUPTs in clusters were arranged collinearly with the donor regions of the chloroplast genome or shuffled in some way, the corresponding BLASTN alignments were visualized with the R genoPlotR v0.8.11 package [ 42 ]. Gaussian mixture modeling of NUPTs' percent identity distribution In order to detect peaks in the distribution of percent identity values putatively corresponding to episodic events of NUPT integration in the nuclear genome, Gaussian mixture models were fitted to the corresponding distribution by employing the Expectation-Maximization (EM) algorithm for mixtures of normal distributions. We first determined the optimal number of Gaussian components (k) using the boot.comp() function from the R mixtools v1.2 package [ 43 ], which performs a parametric bootstrap by producing B bootstrap realizations (replicates) of the likelihood ratio statistic for testing the null hypothesis of a k-component fit versus the alternative hypothesis of a (k + 1)-component fit. For this step, we used 1000 replicates, a significance level of 0.01, and set the maximum number of components to nine. The number of components determined in the previous step was then used to fit a mixture of Gaussians to the distribution of percent identity values, utilizing the normalmixEM() function from the same package and the following parameters: maxit = 1e-30, maxrestarts = 1e-3, epsilon = 1e-10. Each peak was characterized by an age (expressed in percent identity values) that corresponded to the mean of the corresponding Gaussian mixture component. Several other parameters were estimated from each of the models, including the standard deviation of each component, as well as the posterior probabilities of each NUPT belonging to each retrieved peak.
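The mixture-modeling step can be sketched in R roughly as follows, using the mixtools functions named above. The vector pid is a synthetic stand-in for the per-NUPT percent-identity values, and the bootstrap step is slow, so it is shown commented out; this is an illustration, not the exact analysis script.

## Sketch of the Gaussian mixture workflow with the mixtools package.
library(mixtools)

set.seed(42)
pid <- c(rnorm(800, mean = 79, sd = 2),    # toy stand-in for percent identities
         rnorm(3900, mean = 93, sd = 2))

## Step 1: parametric bootstrap of the likelihood-ratio statistic to choose
## the number of components k (computationally heavy; commented out).
## k_sel <- boot.comp(pid, max.comp = 9, B = 1000, sig = 0.01,
##                    mix.type = "normalmix")

## Step 2: fit the k-component mixture by Expectation-Maximization.
fit <- normalmixEM(pid, k = 2)
fit$mu               # component means: relative 'ages' of the episodes
fit$sigma            # component standard deviations
fit$lambda           # mixing proportions
head(fit$posterior)  # per-NUPT posterior probability of each peak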
Results Widespread distribution of NUPTs in the moringa nuclear genome In order to detect NUPTs present in the moringa nuclear genome, a chromosome-scale assembly of the moringa genome, AOCCv2 [ 23 ], was scanned using BLASTN with the moringa chloroplast genome sequence (NCBI RefSeq number: NC_041432.1) [ 24 ] as query, resulting in 13,901 total alignments. We visually inspected the alignments and detected a significant fraction of them (8657; 62.28%) arising from two specific regions of the chloroplast genome. Those two regions were 200 bp and 350 bp in length and were essentially composed of As and Ts (Additional File 1 ), thus likely corresponding to low complexity regions, which are known to produce spurious alignments reflecting artifacts rather than true homology. Indeed, BLASTN searches on NCBI databases using those two regions as queries resulted in matches to seemingly unrelated genomes with high percent identity, indicating that they probably correspond to artifacts (results not shown). Therefore, we reran BLASTN with the -dust option turned on in order to mask alignments resulting from low complexity regions. This time, 5203 alignments were detected, which were confidently defined as NUPTs in our analysis (Supplemental Table S 1 ). Eleven out of the 14 chromosomes hosted more than 100 NUPTs (ranging from 118 to 1072), and seven chromosomes plus one scaffold contained NUPTs summing to more than 160,600 bp (i.e., the size of the moringa chloroplast genome) (Supplemental Table S 1 ). The total aligned region between the chloroplast genome and the nuclear genome, i.e., the total region of the nuclear genome occupied by NUPTs, summed to 9,781,275 bp, which represents 4.14% of the size of the nuclear genome assembly, close to estimates obtained with previous versions of the genome [ 25 – 27 ] (Table 1 ). After correcting for redundancy in BLASTN hits resulting from the Inverted Repeat (IR) regions of the moringa chloroplast genome (1272 hits), the fraction of the moringa nuclear genome corresponding to NUPTs was 3.29%, again very similar to estimates obtained with the three other versions of the moringa genome [ 25 – 27 ] (Table 1 ), further supporting that these results were not due to genome assembly errors. Most NUPTs in moringa originated through two distinct formation episodes separated in time In order to gain insights into the timing of plastid DNA acquisition by the moringa nuclear genome, we examined the relative age distribution of NUPTs using the percent identity of the corresponding BLASTN hits as a proxy of evolutionary time. Assuming that mutations accumulate in proportion to evolutionary time, i.e., that the molecular clock hypothesis holds, the lower the percent identity, the older the NUPT. The percent identity of BLASTN hits ranged between 72.37 and 100% and showed an apparently bimodal distribution (Fig. 1 A). Indeed, when Gaussian mixture models were fitted to the corresponding density curves, two clear peaks, centered around 79.05 and 93.1%, respectively, were detected (Fig. 1 A). According to the posterior probabilities of assigning a NUPT to either one or the other peak, using a threshold of 95%, 776 NUPTs (14.91% of the total) summing up to 253,096 bp (2.59% of the total) belonged to the older peak (from now on Episode I, or NUPTs-I), while 3855 NUPTs (74.09% of the total) summing up to 9,189,682 bp (93.95% of the total) belonged to the younger peak (from now on Episode II, or NUPTs-II).
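Continuing the mixture sketch above, the episode assignment with the 95% posterior-probability threshold can be expressed as follows. This is illustrative only; note that the component order returned by normalmixEM is not fixed, so the component means must be checked first.

## Assign each NUPT to an episode only when its posterior probability
## exceeds 0.95; otherwise leave it unassigned, as described in the text.
post  <- fit$posterior
old   <- which.min(fit$mu)   # column of the older (lower-identity) peak
young <- which.max(fit$mu)   # column of the younger peak
episode <- ifelse(post[, old]   > 0.95, "NUPTs-I",
           ifelse(post[, young] > 0.95, "NUPTs-II", "unassigned"))
table(episode)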
The rest of the NUPTs (572, summing up to 338,497 bp, i.e., 3.46% of the total) could not be confidently assigned to either peak. Taken as a whole, these results support two main episodic formation events at the origin of most NUPTs. Next, we examined the size distribution of NUPTs, partitioned by each of the retrieved episodes. While NUPTs-I ranged in size from 69 to 3591 bp, NUPTs-II ranged from 33 to 71,935 bp (Fig. 1 B). Both followed a non-normal, right-skewed, unimodal distribution (Fig. 1 B), with a mean and a median size of 326.2 and 127 bp or 2384 and 778 bp for NUPTs-I and NUPTs-II, respectively. From studies in rice and other plant species, an apparent positive correlation between size and sequence identity of NUPTs had been suggested, i.e., larger NUPTs tend to be more conserved at the sequence level. This observation can be interpreted as young, larger, conserved NUPTs decaying and fragmenting over time, and eventually being purged from the genome [ 7 , 15 – 17 , 20 , 28 ]. To test whether this observation also applied to moringa NUPTs, we studied the correlation between size and sequence identity by means of two different tests appropriate for non-normally distributed data, again partitioned by each episode detected (Fig. 1 C and Table 2 ). Interestingly, while for the younger NUPTs from Episode II size negatively correlated with sequence identity in both tests (Table 2 ), no significant correlation was found for NUPTs-I (Table 2 ), suggesting that different mechanisms might have been at the origin of NUPTs from each episode and/or that, once integrated, they might have followed different evolutionary trajectories. To provide further support for the accuracy of the obtained results and to rule out genome assembly errors as their origin, we repeated all the analyses using the three previously published versions of the moringa nuclear genome assembly [ 25 – 27 ]. In each case, when fitting Gaussian mixture models to each distribution of percent identities, the two main peaks could be similarly retrieved (Supplemental Fig. S 1 and Supplemental Table S 2 ). Negative correlations between size and sequence identity were also similarly retrieved for NUPTs-II (Supplemental Table 3 ), while no significant or only marginally significant positive correlations were found for NUPTs-I. We found 61 NUPTs, 51 of them non-redundant, spanning a total of 14,177 bp, showing 100% identity with the chloroplast genome. These NUPTs might not represent a real biological phenomenon but rather result from a misassembly that erroneously incorporated plastid regions into the nuclear genome sequence. In order to rule out this possibility, we sampled the sequences of six representative NUPTs showing 100% identity and various sizes, plus 100 bp of their flanking regions in the nuclear genome, and scanned for their occurrence in the three additional versions of the moringa nuclear genome available. As revealed by the corresponding multiple sequence alignments, the six selected NUPTs plus flanking regions could be identically retrieved in at least one of the remaining three genome versions (Additional files 2 , 3 , 4 , 5 , 6 and 7 ), further validating our findings.
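The size-versus-identity correlation tests mentioned above can be sketched as follows on synthetic data. Spearman and Kendall rank correlations are assumed here as the two tests for non-normally distributed data; the tests actually used are named in Table 2.

## Rank-based correlation between NUPT size and percent identity,
## run separately for each episode in the real analysis.
set.seed(7)
size <- rlnorm(500, meanlog = 6, sdlog = 1)      # toy NUPT lengths (bp)
pid2 <- 95 - 0.002 * size + rnorm(500, 0, 1.5)   # toy percent identities

cor.test(size, pid2, method = "spearman")
cor.test(size, pid2, method = "kendall")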
Characterization of the differential distribution of NUPTs' insertion sites in the moringa nuclear genome The distribution and frequency of NUPTs across the 14 chromosomes comprising the moringa nuclear genome was represented in a Circos plot as independent density plots for each episode (Fig. 2 ). In contrast to NUPTs-I, which showed an apparently homogeneous distribution throughout the moringa nuclear genome, most NUPTs-II appeared to be highly concentrated in specific regions of chromosomes 1, 4, 5, 6 and 10, which showed prominent peaks in the density plots, likely corresponding to hotspots where NUPT integration and/or subsequent fixation is favored (Fig. 2 ). A recent survey in African and Asian rice reported a compositional bias at the flanking regions of NUPTs' insertion sites [ 22 ]. Similarly, we examined whether the 100 bp regions flanking NUPTs in moringa also showed any compositional bias. While the 100 bp flanking regions of NUPTs-I featured a greater GC content on average (36.4%) than the rest of the genome after excluding NUPT sequences (35.72%), the opposite trend was observed for NUPTs-II, which displayed a lower GC content on average (32.3%), with differences being significant in both cases according to Mann-Whitney U-tests ( P = 2.07 × 10 −14 ; P = 2.99 × 10 −103 , respectively). Moreover, previous analyses of NUPTs from Arabidopsis and rice identified their tendency to group in clusters, defined as groups of two or more non-overlapping NUPTs in which the distance between two consecutive integrants is less than 5 kb [ 7 ]. We therefore determined whether NUPTs in moringa also form clusters. A total of 880 NUPTs (16.91% of the total), summing up to 1,232,888 bp (12.6% of the total), were found to group into 282 clusters, which were detected in the 14 chromosomes plus nine scaffolds, and whose sizes ranged from 122 to 46,929 bp (Supplemental Table S 4 ). We then examined separately the clusters grouping NUPTs from each episode. Fifty-six NUPTs-I (i.e., 7.22%), summing up to 18,145 bp (i.e., 7.17%), were found to form 24 clusters hosting up to five integrants each (Fig. 3 ) (Supplemental Table S 4 ), whereas 476 NUPTs-II (i.e., 12.35%), summing up to 976,761 bp (i.e., 10.63%), were found inside 150 clusters hosting up to 11 integrants (Fig. 3 ) (Supplemental Table S 4 ). The rest of the clusters (108) hosted 380 NUPTs from either one or both episodes and/or unclassified NUPTs (Supplemental Table S 4 ). We further checked whether NUPTs within individual clusters were arranged collinearly with respect to the chloroplast genome or were rather shuffled in some way. For this purpose, we graphically represented the ten largest clusters, in terms of number of integrants, from each episode, together with the corresponding donor regions in the chloroplast genome (Fig. 4 ). While clusters formed by NUPTs-I showed a tendency to be arranged collinearly with the chloroplast genome (Fig. 4 A), no such collinearity could be observed for clusters of NUPTs-II (Fig. 4 B). The grouping of NUPTs into clusters at specific positions might reflect either large NUPTs fragmenting over time after their integration into the nuclear genome or chromosomal hotspots. If the former were the case, the sequence identity of NUPTs should correlate with their tendency to group into clusters. To test this hypothesis, we examined the correlation between the average sequence identity of the NUPTs in each cluster and the number of integrants. The tests were performed independently on clusters formed exclusively by NUPTs-I and NUPTs-II. No significant correlation was found for NUPTs from either episode (Supplemental Table S 5 ).
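The cluster definition used above (two or more non-overlapping NUPTs with less than 5 kb between consecutive integrants) can be operationalized in a few lines of R. The coordinate table below is a toy stand-in, not the real NUPT annotation.

## Group NUPTs into clusters based on the < 5 kb inter-integrant rule.
nupts <- data.frame(
  chrom = c("chr1", "chr1", "chr1", "chr2", "chr2"),
  start = c(1000, 4200, 60000,  500, 3000),
  end   = c(1800, 5000, 61000, 1200, 3900)
)
nupts <- nupts[order(nupts$chrom, nupts$start), ]

## Gap to the previous NUPT on the same chromosome; a gap >= 5 kb or a
## chromosome change opens a new cluster.
gap       <- c(Inf, nupts$start[-1] - nupts$end[-nrow(nupts)])
new_chrom <- c(TRUE, nupts$chrom[-1] != nupts$chrom[-nrow(nupts)])
nupts$cluster <- cumsum(gap >= 5000 | new_chrom)

## Keep only groups with at least two members, i.e., true clusters.
in_cluster <- ave(nupts$cluster, nupts$cluster, FUN = length) >= 2
nupts[in_cluster, ]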
Biased distribution of NUPTs-I in the moringa chloroplast genome Finally, we studied the distribution of NUPTs across the moringa chloroplast genome. For this purpose, we divided the corresponding DNA sequence into 100 bp regions and represented the frequency of occurrence of NUPTs as density plots (Fig. 2 ). We performed the analysis considering NUPTs-I and NUPTs-II separately. From the density plots of NUPTs-I, four peaks were apparent, which accounted for 354 NUPTs-I, i.e., 45.61% of the total. Two of the peaks, designated 1 and 2 (Fig. 2 ), spanned 200 bp each and were located in almost consecutive regions of the Large Single Copy (LSC) region of the chloroplast genome. The remaining two, designated 3 and 4 (Fig. 2 ), were 3800–3900 bp in size and corresponded to redundant sequences from the IR regions of the chloroplast genome. In contrast, NUPTs-II were found to be almost uniformly distributed across the chloroplast genome, except for the IR regions, where, as expected, around twice the number of NUPTs-II could be observed (Fig. 2 ).
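The 100 bp window counts underlying these density plots can be sketched as follows; the hit coordinates here are toy values, whereas the real analysis would iterate over the full BLASTN hit table.

## Count NUPT donor-region coverage in 100-bp windows along the
## chloroplast genome (~160,600 bp).
cp_len <- 160600
hits   <- data.frame(start = c(120, 150, 4300),   # toy chloroplast-side
                     end   = c(400, 500, 8100))   # coordinates of hits

win_start <- seq(1, cp_len, by = 100)
win_end   <- win_start + 99
counts    <- integer(length(win_start))
for (i in seq_len(nrow(hits))) {
  hit_wins <- which(win_start <= hits$end[i] & win_end >= hits$start[i])
  counts[hit_wins] <- counts[hit_wins] + 1
}
plot(win_start, counts, type = "h",
     xlab = "Chloroplast position (bp)",
     ylab = "NUPT hits per 100-bp window")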
Discussion By leveraging a recently obtained high-quality, long-read, chromosome-scale assembly of the nuclear genome of moringa (i.e., AOCCv2) [ 23 ], we gained a finer characterization of the rich fraction of plastid DNA originally detected in an older, less contiguous version (i.e., AOCCv1) [ 26 ], the highest reported for any plant species so far [ 19 ]. While the total fraction of plastid DNA was similar using both versions of the genome, differences were observed regarding the events underlying such enrichment. Our previous report [ 19 ], using the distribution of synonymous substitution rates as a proxy of evolutionary time, attributed such enrichment in plastid DNA to a recent single burst of plastid gene duplicates relocating to the moringa nuclear genome. Here, in turn, by fitting Gaussian mixture models to the distributions of sequence identity of NUPTs (taken instead as a proxy of evolutionary time), two distinct main episodic events of NUPTs' formation could be detected, namely NUPTs-I and NUPTs-II. The reason for this discrepancy likely resides in errors in the annotation of the AOCCv1 moringa nuclear genome, which featured an overrepresentation of small genes annotated with chloroplast and photosynthetic functions. While 656 and 114 genes were annotated with the terms "chloroplast" or "photosynthesis", respectively, in the AOCCv1 moringa genome, only 378 and 51 genes were annotated with such terms in AOCCv2 [ 23 ]. For example, while 45 fragmented nuclear genes were annotated as encoding the plastid-encoded large subunit of ribulose-1,5-bisphosphate carboxylase/oxygenase (RBCL) in AOCCv1, only three were annotated as such in AOCCv2, although all of them could be mapped to specific genomic regions in AOCCv2. Altogether, this suggests that the enrichment in chloroplast-related functions previously observed among nuclear genes was likely due to fragmented DNA of plastid origin, i.e., NUPTs, encompassing coding regions wrongly annotated as gene models. Hitherto, the relative ages of NUPTs in different plant species had been reported to follow either exponentially decreasing or uniformly constant distributions [ 15 , 17 , 18 ], which fit, respectively, into two different modes of NUPTs' formation, i.e., single events and hotspots [ 7 , 28 ]. The single event mode commonly results in long, continuous NUPTs collinear with specific regions of the chloroplast genome, which are concentrated in specific regions of the nuclear genome, e.g., (peri)centromeric regions [ 7 , 15 , 16 , 28 ], and are expected to decay into smaller fragments and relocate as a consequence of chromosomal rearrangements and reshuffling involving transposable element activity [ 16 ]. In contrast, hotspots result in the concomitant integration of multiple short NUPTs from different origins arranged as a mosaic in specific loci of the nuclear genome [ 28 , 29 ]. To the best of our knowledge, no previous studies have reported a bimodal distribution of NUPT relative ages such as the one observed here for moringa. The observed bimodal distribution implies that NUPTs in moringa were formed through two events separated in time. Furthermore, NUPTs from each event showed markedly distinctive features, suggesting they originated through distinct mechanisms.
For example, the younger NUPTs from Episode II showed seemingly random origins throughout the chloroplast genome and were characterized by a wide range of sizes, a preferential location in hotspots across the nuclear genome, and a negative correlation between sequence identity and size. However, although some NUPTs-II may have originated as long fragments subsequently breaking into smaller pieces arranged collinearly as clusters throughout the nuclear genome, in accordance with the single event mode [ 28 ], no correlation was observed between the number of NUPTs-II grouping in clusters and sequence identity. This lack of correlation suggests that at least some NUPTs-II may have also originated as smaller fragments landing in specific landmarks of the nuclear genome, i.e., chromosomal hotspots, eventually dispersing further through different kinds of genome rearrangements. This is also in agreement with the observation that NUPTs-II grouped in clusters tended to be shuffled in some way rather than arranged collinearly with the chloroplast genome. Altogether, this supports the origin of NUPTs-II through both the single event and the hotspot modes. In turn, the older NUPTs from Episode I, featuring a narrower distribution of sizes, no correlation between sequence identity and size, and a tendency to be arranged collinearly with the chloroplast genome when found grouped in clusters, do not seem to fit either of the two modes of NUPTs' formation previously described. Moreover, almost half of the NUPTs from Episode I originated from four specific regions of the chloroplast genome, an observation previously reported only for Asparagus officinalis [ 20 ] and in contrast to previous studies in Arabidopsis, rice and other species, which showed a homogeneous distribution of NUPTs throughout the chloroplast genome [ 15 , 17 ]. We therefore propose here a third mode of NUPTs' formation through small-scale recurrent events. Once individual NUPTs are formed, two scenarios are plausible: i) multiple copies of NUPTs first forming in the chloroplast and later relocating to the nucleus, or ii) individual NUPTs recurrently duplicating once integrated into the nuclear genome. With respect to the possible evolutionary forces underlying the leakage and subsequent fixation of variable amounts of plastid DNA in plant nuclear genomes, these might be related to the different stressful conditions to which each species has been subjected throughout its recent evolutionary history; different stresses have been shown to promote DNA migration from chloroplasts to the nucleus [ 10 , 30 ]. The massive amounts of plastid DNA found in the moringa nuclear genome may well be related to exposure to stressful conditions during its recent evolutionary history [ 31 , 32 ]. Indeed, the domestication of moringa from the sub-Himalayan lowlands of NW India, its putative location of origin, where mean annual precipitation exceeds 1100 mm, to the tropical and sub-tropical areas around the world where its cultivation has spread [ 31 ] likely involved the selection of varieties better adapted to drier and hotter environments [ 32 , 33 ]. Furthermore, moringa shows a great adaptive potential to successfully cope with multiple stresses, particularly water deficit and UVB radiation [ 34 ]. In this respect, it has been noted that the 11 giant NUPTs found in Asian rice tended to occur in natural populations from higher-latitude regions featuring lower temperatures and light intensities [ 22 ].
This observation led the authors to attribute to NUPTs a potential role in enhancing environmental adaptation by increasing the number of chloroplast-derived genes, which might, in turn, improve photosynthesis [ 22 ]. However, we believe this adaptive-to-stress hypothesis seems unlikely, given that "recent" plastid-to-nuclear gene transfers are exceedingly rare, especially for photosynthetic genes, with the genes most frequently transferred in extant lineages being those encoding ribosomal proteins [ 35 ]. Whatever the specific forces at the origin of the fixation of the massive amounts of plastid DNA found in the moringa nuclear genome, they appear to be of a different nature for each independent event of NUPT formation detected here.
Conclusions The results presented here reveal an unanticipated complexity of the mechanisms at the origin of NUPTs and of the evolutionary forces behind their fixation. Comparative genomics of domesticated moringa, together with that of the 12 wild Moringa species that make up the taxonomic family Moringaceae within the Brassicales order [ 36 ], emerges as an excellent approach for reconstructing the mechanisms of origin and evolutionary fixation of plastid DNA in the nuclear genome.
Background Beyond the massive amounts of DNA and genes transferred from the protoorganelle genome to the nucleus during the endosymbiotic event that gave rise to the plastids, stretches of plastid DNA of varying size are still being copied and relocated to the nuclear genome in an ongoing process that does not result in the concomitant shrinking of the plastid genome. As a result, plant nuclear genomes feature a small, but variable, fraction of plastid origin, the so-called nuclear plastid DNA sequences (NUPTs). However, the mechanisms underlying the origin and fixation of NUPTs are not yet fully elucidated, and research on the topic has mostly focused on a limited number of species and small amounts of plastid DNA. Results Here, we leveraged a chromosome-scale version of the genome of the orphan crop Moringa oleifera , which features the largest fraction of plastid DNA in any plant nuclear genome known so far, to gain insights into the mechanisms of origin of NUPTs. For this purpose, we examined the chromosomal distribution and arrangement of NUPTs, explicitly modeled and tested the correlation between their age and size distributions, characterized their sites of origin in the chloroplast genome and their sites of insertion in the nuclear one, and investigated their arrangement in clusters. We found a bimodal distribution of NUPT relative ages, which implies that NUPTs in moringa were formed through two separate events. Furthermore, NUPTs from each event showed markedly distinctive features, suggesting they originated through distinct mechanisms. Conclusions Our results reveal an unanticipated complexity of the mechanisms at the origin of NUPTs and of the evolutionary forces behind their fixation, and highlight moringa as an exceptional model to assess the impact of plastid DNA on the evolution of the architecture and function of plant nuclear genomes. Supplementary Information The online version contains supplementary material available at 10.1186/s12864-024-09979-5.
Abbreviations NUPT: Nuclear plastid DNA sequence; DSB: Double-stranded break; NHEJ: Non-homologous end joining; SSA: Single-strand annealing. Acknowledgements Not applicable. Authors’ contributions LC-P conceived and designed the project and all research activities. JPM-R performed all the analyses, with contributions from AMA-S. AS contributed to the statistical analysis implemented in the paper. VI and AA contributed to coding scripts used in the paper and provided computational support. All authors contributed to data analysis and interpretation. LC-P wrote and edited the manuscript with substantial contributions from JPM-R. All authors reviewed the manuscript. Funding This work was supported by a “Proyectos I+D Generación de Conocimiento” grant from the Spanish Ministry of Science and Innovation (grant code: PID2020-113277GB-I00) to LC-P, and by funds received by the “Sistema de Información Científica de Andalucía” Research Group id BIO359 to LC-P. Partially funded by grant PID2019-106758GB-C32 from MCIN/AEI/10.13039/501100011033, FEDER “Una manera de hacer Europa” funds, and Junta de Andalucía grant P20-00091 to AS. Availability of data and materials All data generated or analyzed during this study are included in this article and its Supplemental information files. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:33
BMC Genomics. 2024 Jan 15; 25:60
oa_package/c9/db/PMC10789010.tar.gz
PMC10789011
0
Correction: Cell Commun Signal 21, 112 (2023) https://doi.org/10.1186/s12964-023-01132-1 Following publication of the original article [ 1 ], the authors identified an error in the author name of Markus Kranzler. The incorrect author name is: Markus Kanzler The correct author name is: Markus Kranzler The author group has been updated above and the original article [ 1 ] has been corrected.
CC BY
no
2024-01-16 23:45:33
Cell Commun Signal. 2024 Jan 15; 22:37
oa_package/17/f9/PMC10789011.tar.gz
PMC10789012
38221611
Background Domestication is an evolutionary process that usually results in a well-described set of phenotypic traits [ 1 ] separating domesticated species from their wild relatives. In the case of the dog ( Canis familiaris ), the oldest domesticated animal (e.g., [ 2 ]), the most typical species-specific features are those socio-cognitive and behavioral phenotypes that enable the dog to coexist with humans in an intricately complex system of dependency [ 3 ]. Among these, preference for humans over conspecifics [ 4 ], attachment to the owner [ 5 ], human-directed referencing [ 6 ], and cooperation with humans rather than with other dogs [ 7 ] all manifest in dogs much more readily than in tame specimens of their closest relative, the gray wolf ( Canis lupus ). Dogs also show a wide array of communicative skills that seem to be honed towards understanding various human signals (visual [ 8 ], acoustic [ 9 ], and olfactory [ 10 , 11 ]). Dogs themselves have particular communicative features that could show the effect of domestication, one of the more notable being their most abundant type of vocalization: barking [ 12 ]. Dog barks show unique features compared to the barks of wolves. Beyond the obvious quantitative difference (dogs bark more, [ 12 ]), dog barks have, most remarkably, become acoustically diverse, covering a contextual diversity that in wolves is covered by other sorts of vocalizations (such as growling or howling, [ 12 ]). According to one of the explanatory theories [ 13 – 14 ], barks became the main acoustic signal type that dogs ‘use’ towards a new ‘audience’: humans. It was found that barks convey reliable information about the inner state of the dog [ 15 ], as well as contextual information [ 16 ], to human listeners. Compared to the predominantly low-pitched and noisy (atonal) barks of wolves [ 17 ], dog barks show a more variable acoustic nature. It was found that, for humans, the combination of fundamental frequency, harmonic-to-noise ratio (tonality) and the pulse of the barking (the length of inter-bark intervals) all carry emotional information. Deep, noisy and fast-pulsing barks are attributed to an aggressive dog, while high-pitched, tonal and slow barks convey fear and despair. Other combinations of these three parameters may convey playfulness and happiness [ 15 ]. These effects are rather robust, and they work similarly in children and adults [ 18 ], independent of dog-related experience [ 16 ] or cultural background [ 19 ]. Vocalizations across a wide selection of mammalian (and avian) species are conservative and highly similar in how they encode the inner state of the signaler. The explanation for this is two-fold. From the aspect of signal evolution, according to the structural-functional ‘rule’ of Morton [ 20 ], particular acoustic parameters became typical for the emitter of a given signal because the anatomical features of the individual (including its vocal apparatus) were highly predictive of its likely intentions. For example, larger individuals were likely to be aggressive, and their larger body and vocal apparatus were likely to produce deep and noisy vocalizations. Small individuals were likely to signal submission and lack of aggression in case of conflict, and their smaller vocal organs were more likely to emit higher-pitched, cleaner vocalizations.
Besides Morton’s theory, the so-called ‘source-filter’ theory of acoustic production provides a mechanistic explanation for the similarity with which the vocalizations of various species encode indexical and emotional content [ 21 ]. The source-filter theory states that vocal signals result from a two-stage production, with the glottal wave generated in the larynx (the source) being subsequently filtered in the supralaryngeal vocal tract (the filter). Physiological fluctuations in emotional or motivational state have been found to influence the acoustic characteristics of signals in a reliable and predictable manner. As the innervation of the production and filtering components of the vocal tract shows high similarity across mammals [ 22 ], this explains how particular affective states are expressed in acoustically similar ways in various species. Although the theory of domestication-related changes in the dog’s vocal output [ 13 – 14 ] suggests that the qualitative and quantitative proliferation of dog barks serves a more effective communication of the dogs’ inner states towards humans with basically the same type of vocalization, there are increasing parallel concerns about the negative effect of dog barks on the coexistence of the two species. Nuisance barking represents a worldwide concern [ 23 ], resulting in anti-dog-keeping legislation at the community level [ 24 ], relinquishment of dogs [ 25 ], and anti-barking interventions that can range from training [ 26 ] to punitive devices [ 27 ] and the controversial process of surgically de-barking the offending dogs [ 28 ]. It would be easy to consider dog barks as just one more component of noise pollution, annoying because they are too loud, emitted too abundantly, or at the wrong time [ 29 ], but that is only part of the reason we find them annoying. Related to the previously detailed communicative function of barking towards humans, we recently proposed a new theory that focuses on the particular acoustic components that could make particular bark types more annoying than others [ 30 ]. While we acknowledge that to a certain extent every dog bark type represents a rather unpleasant acoustic experience to human listeners (which fits with the assumption that dog barks can serve as mobbing signals [ 31 ]), according to our ‘communicative relevance of nuisance barks’ theory [ 30 ], humans become more annoyed by those barks that have a high attention-eliciting effect because of their particular communicative content. We found that barks conveying negative emotions (aggression, fear, despair) cause stronger nuisance for humans than barks with ‘positive’ emotional content. In a follow-up study, it was also shown that there is a specific combination of fundamental frequency and tonality that is especially annoying to human listeners [ 32 ]. As these high-pitched and noisy dog vocalizations are acoustically similar to babies’ crying, and young (reproductive-age) adults reacted the most strongly to them, we hypothesized a past selective emergence of an especially effective attention-eliciting type of bark. According to this theory, barks mostly become annoying if the human listener cannot intervene, which eventually leads to frustration [ 32 ]. It is important to see that we do not propose that dog barks evolved to be ‘annoying’ for humans.
Dog barks convey vital information about the dynamically changing inner state of the signaler to human listeners, and the acoustic variability of dog barks that enables this function can be regarded as a new feature related to domestication [ 14 ]. The barking of the wolf conveys only agonistic content, and other inner states are expressed with other types of vocalizations [ 12 ]. In contrast, dogs can express a wide array of inner states with barking alone (from happiness to fear), and the acoustic characteristics of dog barking changed to a much more variable phenotype [ 33 ] compared to the generally low-pitched and noisy wolf barks. This leads to two parallel theories that could explain why dog barks elicit annoyance in humans. One of these theories (i) focuses on affective communication [ 30 ], whereby humans can read the inner-state-related information in dog barks [ 15 ], and it is mostly the perceived negative emotions that elicit nuisance in the receivers. This mechanism could be explained on the basis of inter-specific empathy [ 34 ], which has an important role in dog-human interactions [ 19 ]. The other explanation (ii) is that specific barks have a strong attention-eliciting effect [ 32 ] and, just like baby cries, in case of prolonged exposure they may elicit stress and eventually frustration and anger in the listeners [ 35 ]. As the attention-eliciting and affective content of dog barks would be hard to disentangle acoustically, or with behavioral tests alone, in this study we opted to apply intranasal oxytocin treatment to the human participants with the aim of getting a clearer picture of the mechanisms through which particular dog barks affect humans. The neuropeptide hormone oxytocin has complex and widespread effects in the body [ 36 ], and in this study we focus only on its mediating effect on emotional understanding (as a ‘central’ effect, influencing affective empathy, [ 37 ]) and its attenuating effect on psycho-social stress reactions (as a ‘peripheral’ effect, e.g., [ 38 ]). There are many indications that oxytocin has a positive effect on trusting others [ 39 ] and on recognizing other humans’ emotions (e.g., facial expressions, [ 40 ]). In that double-blind study [ 40 ], with the use of fMRI technology, it was found that intranasal oxytocin suppressed the right hemisphere’s amygdala activation, thereby reducing the participants’ fear reactions towards angry and frightened human faces. Kosfeld and colleagues [ 41 ] found that oxytocin plays a role in the formation of positive, prosocial behavioral patterns, which they considered of fundamental importance in the formation of ‘trust’ (“an individual’s willingness to accept social risks arising through interpersonal interactions”). Although Singer and colleagues [ 42 – 43 ] did not find a direct association between intranasally administered oxytocin and the neural mechanisms responsible for emotional distress, it was later found that polymorphism of the OXTR gene is associated with emotional empathy [ 44 ]. There are numerous studies either showing supporting evidence for the connection between oxytocin and participants’ affective empathy performance (e.g., [ 45 ]) or the lack of such an association (e.g., [ 46 ]). Regarding stress attenuation, oxytocin reduces cortisol levels in the case of physical exercise [ 47 ] and social stress [ 48 ].
Interestingly, it was found that emotional support and oxytocin together had the strongest stress-attenuating effect, probably because positive human interactions themselves enhance oxytocin production [ 49 ]. It is worth mentioning, at the same time, that some authors recently found no connection between particular stress-reducing mental training methods and the modulation of stress-induced acute plasma oxytocin release, and they emphasize the need for further investigation [ 50 ]. Goals, hypotheses, predictions In this study we wanted to find out whether dog barks affect human listeners predominantly through their emotional content (affective inner-state communication), or because they evoke attention from the listeners (‘alarm calls’). Because we previously found that male participants were more annoyed by barking dogs [ 30 ], and young adults responded most intensely to nuisance barks [ 32 ], we tested only young men, in a double-blind, placebo-controlled experiment with intranasal oxytocin administration. As mentioned previously, researchers have so far not arrived at an unambiguous consensus regarding the exact effect of intranasally administered oxytocin on affective empathy and on mediating social stress. In the framework of the present study, we assumed an overall positive effect of oxytocin on the participants’ affective empathy, and an attenuating effect of oxytocin on the participants’ reactions to attention-grabbing (‘alarm’) vocalizations. According to our first hypothesis, nuisance barks [ 30 , 51 ] cause stress through their unique acoustic structure [ 32 , 52 ]; thus we predicted that intranasal oxytocin, through its stress-attenuating effect [ 53 ], would lessen the elicited nuisance in the listeners. Our second hypothesis considered that oxytocin would affect how participants react to the emotional content of dog barks. As there are indications that oxytocin has an effect on affective empathy (emotional understanding) [ 54 – 55 ], here we predicted that intranasal oxytocin would modify the participants’ reactions to particular (especially negative-valence) dog barks in the playback study.
Materials and methods Overall description and participants Altogether, 40 men between 18 and 35 years of age participated in our test. The control and the oxytocin-treated groups included 20 participants each. We conducted a playback test in which the participants were asked to listen to recordings of dog barks, which they had to assess one by one with the help of scoring sheets. Before the test we included a two-phase pre-treatment. In the first phase, in a double-blind procedure, we administered either oxytocin or a placebo (NaCl solution) via intranasal spray. Following this treatment, we kept the participants isolated throughout the 40-min incubation period (phase 2), to eliminate the chance that the results would be influenced by any external social effect. After the 40 min elapsed, they listened to the dog bark sequences. Based on the bark samples, the participants had to evaluate the apparent inner states of the barking dogs and, additionally, they had to rate each bark sequence according to the level of annoyance it triggered in them. All ratings were done with the help of a scoring sheet (Fig. 8 ), on 7-point Likert scales. Procedure The experiment was conducted in the laboratory of the Department of Ethology (Fig. 9 shows the arrangement of the testing room). The whole test took 70 min, and each participant was tested only once. First, the participants received the informed consent form and the handout, which explained the purpose of the study and provided a short description of the experiment. After completing the consent form, the experimenter informed the participant about the procedure. Next, we administered, via intranasal spray, the oxytocin hormone or, in the case of the control group, the placebo solution. Independently of their group assignment, the participants were uniformly told that they were given the oxytocin treatment. The nasal-spray bottles (10 ml) were identical in appearance between the oxytocin and control treatments and could be identified only by their colored labels. Based on these labels, we created two groups, red and blue, that (unknown to the experimenter and participants) designated the oxytocin and placebo during the double-blind procedure. The bottle with the red label contained the oxytocin hormone, and the placebo was therefore in the blue one. Filling of the bottles (with oxytocin (Syntocinon, producer: Defiante Farmaceutica S.A., Germany) and with physiological NaCl solution used as placebo) was done by a third party who did not participate in the study, so neither the experimenter nor the participants knew what the color codes meant. According to the storage instructions, the oxytocin (Syntocinon nasal spray) should be kept cool (between + 2 and + 8 °C), so we kept both the control and oxytocin bottles refrigerated at the recommended temperature. In all cases, participants were requested to perform the intranasal treatment (three shots into each nostril) themselves. One spray shot contains approximately 4 IU (International Units) of oxytocin; therefore we used 24 IU per person. We chose this amount as it is the most widely documented dosage in the scientific literature [ 66 – 73 ]. The amount of oxytocin used in these studies is shown in Table 2 . Following the pre-treatment with the intranasal sprays, subjects had to wait 40 min in social isolation.
Based on previous results, this time frame of 40–45 min is necessary for the intranasal oxytocin treatment to take effect [ 37 , 40 , 55 , 66 , 69 , 71 , 73 , 74 ], with maximum efficiency reached at 40 min [ 75 ]. During this time interval, the participants could not be affected by any external social influences. Before the oxytocin pre-treatment, we asked them to turn off and put away every communication device, so that no social impact or interaction could happen on their end. During this isolation period they could not communicate with the experimenter either. Also, as shown in Fig. 9 , the experimenter was sitting behind the participant, so no social interaction could happen between them. During the 40-minute isolation period the participants had to solve a 500-piece Ravensburger puzzle depicting a Mediterranean cityscape. There were no people depicted on the puzzle. The participants were told beforehand that they should solve this task at their own pace. They were told that the puzzle was a part of the study, during which we would not evaluate the number of solved puzzle pieces (thereby decreasing the stress arising from the possibility of a competitive situation); only the strategy and colors they used were examined. We decided upon this task because we wanted to occupy the participants with an activity that had no social influence on them. After the 40-minute isolation we took a photo of the puzzle for documentation, maintaining the impression that it was a part of the study. Then the second main part began, which was the playback test. The playback test During the playback test the participants listened to the bark recordings through noise-filtering Sennheiser headphones and a media player program (Winamp) in a closed, quiet room, where external noises did not disturb the test. During the tests only the experimenter and the participant were present. All participants listened to a playlist including 12 bark sequences. The individual sequences were at least 3 and at most 8 s long, depending on the interval between the bark units. The experimenter stopped the recording after each sequence, so that the participant could rate them one by one using the Likert scale (Fig. 8 ). The task was to evaluate the barks according to the assumed inner state of the dog (three separate scales for ‘happiness’, ‘fear’ and ‘anger’) and the level of annoyance the barking triggered in the participant. Each sequence was typically played only once, but upon request by the participant, the experimenter could play it one more time. The scoring sheet We asked the participants to rate each bark sequence individually. Besides the annoyance ratings, they also had to assess the inner state of the dog. As these recordings were created artificially, the participants did not, in reality, evaluate the inner state of a particular dog/sequence. Instead, they rated the perceived emotional state based on the acoustic parameters of the assembled bark sequences. We provided the participants with individual scoring sheets for each bark sequence (Fig. 8 ) with three questions regarding the dog’s inner state (‘Happy, playful’, ‘Scared, desperate’, ‘Aggressive, angry’) and one question regarding the degree of annoyance triggered by the barking (”How annoying was this dog barking to you?”). Next to each of the questions was a scoring scale, a modified Likert scale with stylized dog and human faces that represented the values of the scale.
Increasing linearly in size, the smallest picture represented the ‘weakest’ and the largest indicated the ‘strongest’ value on the given scale. The scoring scales for the dog’s emotional state were illustrated by stylized dog faces showing different emotions, while the scale indicating the participant’s annoyance elicited by the given bark sequence was pictured with annoyed human faces. This scoring sheet was based on a scoring system created for children in previous research [ 18 ]. In this study we used the same sheet as in our previous study [ 32 ], because we wanted to use a standardized method for our new experiment, which was based on our previous findings. The participants’ responses were entered from the scoring sheets into a database in digitalized form for further analysis. The sound samples We used artificially assembled sequences of dog barking events, which were created from original recordings taken during field work [ 16 ] in different contexts. The bark recordings were first segmented into individual bark units, and then the new artificial sequences [ 30 ] were created with a computer-based algorithm. We used artificially assembled sequences because this way we could control the acoustic characteristics that would impact what we intended to investigate. We were also able to exclude the individual characteristics of the barking dogs and the well-recognizable acoustic characteristics of context-specific barks [ 15 , 30 ]. In the pool of bark units, we had recordings from 26 different dogs, all of them from the Mudi breed. This herding breed is strongly vocal as a result of its original function. The original bark sequences were recorded in six different social contexts during a previous study (the methodological description of the recording process is accessible in [ 16 ]). The situations in which the recordings were taken are: “Stranger at the fence”, “Schutzhund/Fight training”, “(left) Alone”, “Before walk”, “(asking for) Ball”, “Play with owner” (for a detailed description of the situations see: [ 16 ]). For selecting the bark units of the artificially created bark sequences, we based our choice on two categories (low and high) of tonality and of fundamental frequency for each unit. From the original recordings, 1452 bark units were selected, based on their tonality (harmonic-to-noise ratio (HNR): low: -2.1–4.6, high: 11.6–35.4) and pitch (fundamental frequency: low: 401–531 Hz, high: 732–1883 Hz). Based on previous research results [ 30 ], we excluded the ’medium’ values of tonality and pitch, because in the original study these did not have a significant effect on annoyance ratings. This allowed retention of the most and the least annoying bark sequences based on the study of Jégh-Czinege et al. [ 32 ]. From the selected bark units we created artificial bark sequences (with 10 individual bark units in each). For the assembly of the bark sequences we used three categories of between-bark time intervals (short: 0.1 s, medium: 0.3 s, long: 0.5 s). As a result, we ended up with 12 types (2 × 2 × 3) of bark sequences (Table 6 ); the sketch below enumerates this design. We created playlists from these, each containing exactly one sequence of each of the 12 types in random order. Each playlist was used only once, so every participant listened to a different playlist; this way we could avoid pseudoreplication in our study.
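As a compact illustration of this 2 × 2 × 3 design, the following R snippet enumerates the 12 sequence types using the category boundaries given above; the labels are taken from the text, and the code itself is illustrative only.

## Enumerate the 12 artificial bark-sequence types (2 pitch x 2 tonality
## x 3 inter-bark interval categories).
seq_types <- expand.grid(
  pitch    = c("low (401-531 Hz)",     "high (732-1883 Hz)"),
  tonality = c("noisy (HNR -2.1-4.6)", "tonal (HNR 11.6-35.4)"),
  interval = c("short (0.1 s)", "medium (0.3 s)", "long (0.5 s)")
)
nrow(seq_types)   # 12 sequence types, one of each per playlist
seq_types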
Statistical analysis All statistical analyses were performed with RStudio (R Core Team, 2017). We used Cumulative Link Mixed Models fitted with the Laplace approximation (ordinal package, clmm function) to investigate which factors influenced the scoring of the dogs’ inner states and the annoyance scores elicited by the dog barks in the participants. Treatment (oxytocin or placebo) and the acoustic parameters (the high and low levels of fundamental frequency and tonality, plus three levels of inter-bark interval: short, medium, long) were used as independent factors. Two-way interactions of treatment and the acoustic categories were also included in the initial models. Participant ID was used as a random factor. On the initial models we ran AIC-based backward model selection (drop1 function), during which the effects contributing least to the model fit were eliminated one by one until the simplest, yet best-fitting, model was obtained. We ran Tukey’s post-hoc tests for pairwise comparisons (emmeans package, emmeans function).
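The modelling pipeline described above can be sketched in R as follows. The data frame scores is a synthetic stand-in with randomly perturbed ratings, not the study data, and the exact model formula used in the paper may differ in detail; the functions (clmm, drop1, emmeans) are those named in the methods.

## Minimal sketch of the cumulative link mixed model analysis.
library(ordinal)   # clmm: cumulative link mixed models (Laplace approximation)
library(emmeans)   # Tukey-adjusted pairwise comparisons

set.seed(3)
scores <- expand.grid(
  id       = factor(1:40),
  pitch    = c("low", "high"),
  tonality = c("noisy", "tonal"),
  interval = c("short", "medium", "long")
)
scores$treatment <- ifelse(as.integer(scores$id) <= 20, "oxytocin", "placebo")
## Toy 7-point ratings with a small pitch effect, clamped to 1..7
ann_raw <- round(4 + (scores$pitch == "high") + rnorm(nrow(scores)))
scores$annoyance <- factor(pmin(7, pmax(1, ann_raw)), levels = 1:7,
                           ordered = TRUE)

## Initial model: treatment x acoustic-parameter interactions plus a
## random intercept per participant
m0 <- clmm(annoyance ~ treatment * (pitch + tonality + interval) + (1 | id),
           data = scores)

drop1(m0, test = "Chisq")                      # backward-elimination step
emmeans(m0, pairwise ~ tonality | treatment)   # post-hoc contrasts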
Results ‘Annoyance’ scores showed a significant association with the fundamental frequency of barks: participants considered high-pitched barks more annoying than dogs barking at a low pitch (see Table 1 ; Fig. 1 ). Furthermore, we found a significant interaction between the tonality of barks and the treatment of subjects (see Table 2 ; Fig. 2 ). With Tukey’s post-hoc test we found that participants who received the placebo treatment rated the low-tonality (noisy) barks as more annoying than high-tonality (clear) barks (cum. prob ± SE = 0.056 ± 0.001; z = 47.542; p < 0.001; Fig. 1 ). In the case of participants receiving the oxytocin treatment we did not find a significant difference between the annoyance scores of noisy and tonal barks (cum. prob ± SE = -0.039 ± 0.033; z = -1.177; p = 0.239). We found a significant association between the fundamental frequency and the perceived emotional content of dog barks (see Figs. 3 , 4 and 5 ; Table 3 ). Participants gave higher happiness scores to the low fundamental frequency barks than to the high fundamental frequency barks (Fig. 3 ; Table 3 ). However, the intranasal oxytocin treatment had no significant effect on the assessment of either the ‘happy, playful’ or the ‘desperate, fearful’ inner state. Fear scores were also affected by fundamental frequency and tonality (Table 4 ). Tonal barks with high fundamental frequency were considered more fearful (Fig. 6 ) than low-pitched and noisy, i.e., atonal, barks (Fig. 4 ). Tonality had a significant main effect on ‘angry, aggressive’ ratings (see Fig. 7 ; Table 5 ). Both treatment groups found noisy, atonal barks significantly angrier than the high-tonality ones. In addition, there was a two-way interaction between the treatments and the fundamental frequency (see Table 5 ). Tukey’s post-hoc tests showed that participants treated with intranasal oxytocin perceived low-pitched barking as more aggressive than high-pitched barks (cum. prob ± SE = 0.152 ± 0.036; z = 4.172; p < 0.001; Fig. 5 ). In contrast, in the case of the participants who were treated with the placebo, we did not find a significant association between the fundamental frequency of barks and the assessment of aggression (cum. prob ± SE = 0.016 ± 0.035; z = 0.464; p = 0.642). In summary, the intranasally administered oxytocin reduced the stress that the barks may have elicited in the participants. The subjects who received the placebo treatment found the noisy barks more annoying, while in the case of the oxytocin-treated participants we found no difference in the annoyance scores based on the tonality of barks. However, oxytocin also had a sensitizing effect on the perceived inner state of the dogs, reflected in the association between the fundamental frequency of the barks and the aggression scores: the two treatment groups showed a notable difference in their assessment of the dogs’ aggressiveness based on the fundamental frequency values of the barks.
Discussion In this study, we confirmed that particular acoustic features of dog barks not only convey emotional information about the signaler [ 15 ], but also affect the level of annoyance these barks elicit [ 30 , 32 ]. Barks with high fundamental frequency values received higher fear scores than low-pitched barks. Tonality also played an important role in the emotional evaluation of barks: tonal (clear) barks were more likely to be rated as fearful than atonal (noisy) ones. Thus, for the listeners, the negative states conveyed by the barks were influenced by both tonality and fundamental frequency. The anger/aggression scores were negatively correlated with the fundamental frequency and the tonality of dog barks. The positive association between high fundamental frequency and ‘happy, playful’ scores described in previous research [ 16 , 32 ] was not confirmed by our current results. In contrast to our previous study, high-pitched barks received very high ‘fear’ scores, probably because, when evaluating ‘happiness’, participants were reluctant to give high positive scores to barks with high fundamental frequency values. Fundamental frequency had a strong effect on the elicited annoyance: barks with high fundamental frequency were scored as more annoying than low fundamental frequency barks in both treatment groups. Similar to the findings of Pongrácz et al. [ 30 ], barks with predominantly strong negative emotional state scores received higher annoyance scores. Based on a double-blind, placebo-controlled intranasal oxytocin treatment of young adult male participants, we found support for both of our hypotheses. One of these hypotheses was that particular dog barks have a strong nuisance effect on human listeners through the (negative) emotions they convey. This hypothesis was supported by the result that participants who received the oxytocin treatment evaluated low fundamental frequency barks with higher aggression scores than the placebo-treated participants did. Thus, in the case of the connection between low-pitched barks and perceived aggression, oxytocin had a sensitizing effect on the emotional understanding of the listeners. Based on our other hypothesis, we predicted that oxytocin would reduce the stress that may be caused by the alarming/attention-calling function of particular barks. This prediction was confirmed by the results showing that participants who received the placebo treatment found the atonal barks more annoying than the tonal barks (’tonality effect on annoyance’), whereas we found no association between annoyance scores and tonality in the participants who received intranasal oxytocin. Previous research has shown that oxytocin increases the ability to recognize the inner state of another person. In the research of Guastella and colleagues [ 56 ], oxytocin-treated participants better remembered faces they had seen before, and they more easily recognized happy faces than angry or neutral ones. Evans and colleagues [ 57 ] found that oxytocin reduced the aversion to anger seen on another person's face. According to the results of Domes et al. [ 39 ], intranasal oxytocin treatment made people more successful in recognizing another person's mood based on a photo of the facial area around the eyes. We provided the first indications of a dual effector system in the interspecific acoustic communication between dogs and humans.
In our study, oxytocin increased the ability of humans to assess the dog's perceived inner state, especially emotions with negative valence, based on artificially assembled bark sequences. The listener's ability to recognize and take into account the assumed emotional state conveyed by dog barks plays a major role in the development of annoyance. At the same time, oxytocin attenuated the stress effect caused by (alarm) dog barking. According to the two-way interaction between tonality and oxytocin treatment, placebo-treated participants found noisy (atonal) barks more annoying than tonal barks, whereas in the oxytocin-treated group there was no such difference between tonal and atonal barks. It can be assumed that the oxytocin treatment detectably reduced the stress-enhancing and intervention-inducing effect [ 52 – 53 ] of nuisance barks [ 30 , 51 ]. On the other hand, it is important to note that oxytocin had an annoyance-reducing effect only for certain acoustic parameters: it modified the effect of tonality, but had no influence on the effect induced by fundamental frequency. In the assessment of inner states, participants who received oxytocin treatment gave higher anger scores to particular barks as a function of fundamental frequency. As a result of the oxytocin treatment, participants found low-pitched barks angrier than high frequency barks; therefore, perceived anger is determined by tonality (noisier barks were rated as angrier) and frequency together. This result parallels the finding that vocal communication signals with an attention-eliciting function, e.g., baby cries [ 58 ], and low tonality (noisy) voices are more likely to cause stress in people than tonal (clear) voices. Our current research confirmed that the most annoying dog barks also have a strong attention-grabbing role for humans. We can assume that these barks may signal such changes in the environment (i.e., a threat) that people would normally react to [ 16 ]. This could be one of the reasons why barks became the most variable and ubiquitous type of dog vocalization during domestication [ 12 ], when interspecific communication with humans became the new driving force behind the evolution of vocal signaling [ 12 – 14 , 33 ]. There is an acoustic similarity between a baby's cry and dog barks [ 32 , 59 ], which is why such barks were coined attention-grabbing barks. We conducted our research on the age group (potential parents) and sex (men) for whom barks with the specific attention-grabbing acoustic parameters were found to be the most annoying in earlier studies [ 30 , 32 ]. Our research provides first-hand evidence that in most cases it is not the bark itself that bothers the listeners, but the dual components of stress and emotional reaction that particular barks may provoke. Under the influence of such barks, listeners are urged to intervene and preferably change the situation that triggered the barking. A similar effect was described with baby cries [ 60 ]. If intervention is not possible (or unsuccessful), frustration is induced, which causes stress and increases annoyance in the listener, in the case of baby cries [ 61 ] and dog barks (present study) alike.
The administration of intranasal oxytocin proved to be effective against the development of higher levels of annoyance (i.e., frustration stress) in our participants; furthermore, it helped them to perceive the (negative) emotional content of dog barks. Our study has the limitation of using a male-only sample, which we opted for because earlier results showed a stronger nuisance effect of dog barks in young men than in other age cohorts and in women in general. Therefore, the results are not necessarily fully representative of other age classes and of women. As dog owners are more likely to be female than male [ 62 ] and women show stronger emotional understanding towards animals as well [ 63 ], similar investigations would be worth conducting on a more representative sample in the future.
Conclusions The coexistence of dogs and humans in crowded neighborhoods is often compromised by debates over nuisance barking. Our results emphasize that dogs can cause acoustic disturbance not only because their barks are excessively abundant or loud, but also because evolution has given dog barks especially effective, dual-purpose, attention-eliciting attributes (informative-alarm and inner state-emotion) that make humans automatically want to investigate the reasons for the barking event. When a complaint is issued about an excessively barking dog [ 51 , 64 ], an ethological approach would be required to evaluate the situation, in addition to registering the duration, volume and timing of the barking events. During the assessment, the situations in which the dog most often barks (eliciting factors) should also be analyzed, as well as the acoustic parameters and unique characteristics of the barking events. In this way, humane corrective measures could be implemented, possibly avoiding drastic solutions that involve negative consequences for the dog and its owner [ 65 ].
Background Barks play an important role in interspecific communication between dogs and humans by allowing human listeners a reliable perception of the inner state of dogs. However, there is growing concern in society regarding the nuisance that barking dogs cause to the surrounding inhabitants. We assumed that, at least in part, this nuisance effect can be explained by particular communicative functions of dog barks. In this study we experimentally tested two separate hypotheses concerning how the content of dog barks could affect human listeners. According to the first hypothesis, barks that convey negative inner states would especially cause stress in human listeners through the process of interspecific empathy. According to the second hypothesis, alarm-type dog barks cause particularly strong stress in the listener by capitalizing on their specific acoustic makeup (high pitch, low tonality) that resembles the parameters of a baby's cry. We tested 40 healthy, young adult males in a double-blind, placebo-controlled experiment, in which participants received either intranasal oxytocin or placebo treatment. After an incubation period, they had to (1) evaluate the perceived emotions (happiness, fear and aggression) that specifically created dog bark sequences conveyed to them; and (2) score the annoyance level these dog barks elicited in them. Results We found that oxytocin treatment had a sensitizing effect on the participants' reactions to negative valence emotions conveyed by dog barks, as they evaluated low fundamental frequency barks with higher aggression scores than the placebo-treated participants did. On the other hand, oxytocin treatment attenuated the annoyance that noisy (atonal) barks elicited from the participants. Conclusions Based on these results, we provide first-hand evidence that dog barks carry information for humans (which may also cause stress) in a dual way: through specific attention-grabbing functions and through emotional understanding. Supplementary Information The online version contains supplementary material available at 10.1186/s12862-024-02198-2. Keywords
Electronic supplementary material Below is the link to the electronic supplementary material.
Acknowledgements Attila Jégh provided his help in conducting the double-blind treatment. Celeste R. Pongrácz kindly helped with proofreading the manuscript. Author contributions Conceptualization: PP and NJC; Methodology: PP and TF; Validation: TF and NJC; Formal Analysis: TF; Investigation: GA and NJC; Resources: PP and TF; Data Curation: TF and NJC; Writing – Original Draft Preparation: PP, CL, LS, GA; Writing – Review & Editing: PP, NJC, TF; Visualization: TF, CL; Supervision: PP; Project Administration: NJC. Funding Péter Pongrácz was supported by the Hungarian National Research, Development and Innovation Office (NKFIH, Grant # K143077). Tamás Faragó was supported by the Hungarian Academy of Sciences via the János Bolyai Research Scholarship (BO/751/20), ÚNKP-22-5 New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund (ÚNKP-22-5-ELTE-475) and the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (950159). Open access funding provided by Eötvös Loránd University. Data availability All data generated or analyzed during this study are included in this published article and its supplementary information files. Table S1 shows the raw data used for the analyses. Declarations Ethics approval and consent to participate All methods of this research were carried out in accordance with relevant guidelines and regulations in the Declaration of Helsinki. All participants attended voluntarily and signed the informed consent form. The participants’ data were used anonymously and for the purpose of this study only, and the participants were fully informed about this in the consent form. The protocol has been reviewed and accepted by the EPKEB (United Ethical Review Committee for Research in Psychology). The case number of the ethical permission is: 2016/022. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:33
BMC Ecol Evol. 2024 Jan 14; 24:8
oa_package/5c/ad/PMC10789012.tar.gz
PMC10789013
0
Introduction Coronavirus Disease-2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has been a major public health threat leading to a significant socio-economic burden worldwide [ 1 ]. The Omicron variant is the most recent variant and accounted for 48% of all prevalent virus strains by 15 December 2022 [ 2 ]. While remaining highly transmissible, the Omicron variant has been reported to cause significantly more severe infections in the unvaccinated population, especially in older adults [ 3 – 5 ]. Severe COVID-19 may induce respiratory failure, septic shock, or organ dysfunction, leading to high mortality rates [ 6 ]. Hence, identifying patients at risk of disease progression to provide timely medical intervention is key to managing Omicron, and particular concern should be given to the most vulnerable population of geriatric patients. In geriatric patients, suboptimal nutritional status is common on admission to hospital, which may adversely affect their clinical outcomes [ 7 – 9 ]. Nutritional status has been reported to impact clinical outcomes in respiratory diseases including chronic obstructive pulmonary disease (COPD) [ 10 ] and asthma [ 11 ]. As the immune system and multiple organ functions are regulated by nutritional status, nutritional deficiencies and inadequate nutrient intake may lead to latent systemic inflammation and secondary organ dysfunction, resulting in susceptibility and vulnerability to infectious diseases [ 12 , 13 ]. Previous studies have identified risk factors for Omicron infection severity, including old age, male gender, hypertension, obesity, and malignancies [ 14 , 15 ]. Although poor nutritional status is common in geriatric Omicron-infected patients, limited evidence is available on the relationship between baseline nutritional status and disease severity in this population [ 16 ]. Therefore, the present study was conducted to describe the baseline nutritional status of older Omicron-infected patients and explore the association between baseline nutritional status and disease severity in a designated hospital for COVID-19 treatment.
Method Study design and participants We conducted this retrospective cross-sectional study in the hospital between April 2022 and June 2022. All participants were selected based on electronic medical records. Inclusion criteria were as follows: (1) age ≥ 65; (2) a COVID-19 diagnosis confirmed by SARS-CoV-2 real-time polymerase chain reaction (RT-PCR) tests and (3) a complete medical history. Exclusion criteria were as follows: (1) a second Omicron infection within a month and (2) inability to cooperate with the nutrition assessment. Informed consent was obtained from all subjects and/or their legal guardian(s). The protocol was approved by the hospital ethics review board (No. 2,022,373). COVID-19 was clinically classified into mild disease (non-pneumonia), moderate disease (pneumonia), severe disease (dyspnoea, respiratory frequency over 30/min, oxygen saturation less than 93%, PaO2/FiO2 ratio less than 300 and/or lung infiltrates in more than 50% of the lung field within 24–48 h) and critical disease (respiratory failure, septic shock and/or multi-organ dysfunction/failure). Patients were divided into two groups: a mild group and a moderate to severe group. Participants received nasopharyngeal swab SARS-CoV-2 RT-PCR tests using the SARS-CoV-2 Z-RR-0479-02-200 kit (Liferiver, Shanghai, China) during hospitalization, and were considered to have achieved virus clearance when two consecutive negative SARS-CoV-2 RT-PCR tests were reported (cycle threshold values greater than 35 in both ORF1ab and N genes), taken at intervals of at least 24 h. Nutritional assessment The Mini Nutritional Assessment short-form (MNA-SF), a validated nutritional assessment tool [ 17 ], was routinely administered on admission to assess the nutritional status of older patients. It categorizes patients as malnourished (score 0–7), at risk of malnutrition (8–11), or normal (12–14). An MNA-SF score of 0–11 was defined as poor nutritional status in this study. Albumin and body mass index (BMI) were obtained from the electronic medical record system, and patients' food intake, weight change, and psychological and functional capability were obtained from medical records or assessed by trained dietitians. Clinical data collection Clinical data were retrieved from electronic medical records. We recorded demographic information, comorbidities, laboratory examinations, cycle threshold values in both ORF1ab and N genes (OR CT value and N CT value), hospitalization days and the virus shedding duration. Viral shedding time was defined as the interval from the first day of the positive nucleic acid test to the date of the first negative test of the consecutive negative results. Nine types of comorbidities were recorded in this study: diabetes, hypertension, coronary heart disease, cerebral infarct, tumor, chronic kidney disease (CKD), dementia, Parkinson disease and gout. CKD included chronic kidney failure, chronic nephritis and diabetic nephropathy. All laboratory measurements were performed within 24 h after admission according to physicians' instructions, including white blood cell count (WBC), lymphocyte count (LC), C-reactive protein (CRP), interleukin-6 (IL-6) and albumin (ALB). All blood samples were analyzed in the hospital clinical laboratory, and the cut-off values were determined by the clinical laboratory. Systemic inflammation indexes were calculated as follows: systemic inflammatory index (SII) = absolute platelet (PLT) count × absolute neutrophil count / absolute lymphocyte count.
Neutrophil lymphocyte ratio (NLR) = absolute neutrophil count / absolute lymphocyte count. Platelet lymphocyte ratio (PLR) = absolute PLT count / absolute lymphocyte count [ 18 ]. Statistical analysis Continuous variables were described as mean and standard deviation (SD) if they were normally distributed according to the Kolmogorov-Smirnov test; otherwise, they were expressed as median and interquartile range (IQR). Student's t-test or the Wilcoxon rank-sum test was used to compare differences. Categorical variables were presented as proportions and compared using the Chi-square test. Univariate logistic regression was performed to identify factors related to moderate to severe Omicron infection. A multivariate logistic model was used to assess the association between nutritional status and moderate to severe Omicron infection; factors with a P-value of less than 0.05 were included in the model, and several factors were excluded due to collinearity. Adjusted odds ratios (aOR) and 95% confidence intervals (CI) were reported. All statistical analyses were performed using SPSS (IBM SPSS Statistics 26). Statistical significance was defined as P < 0.05.
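For illustration, the derived indexes, cut-offs and regression step above can be expressed in a few lines of code. The sketch below is in R rather than SPSS (the software actually used), and the data frame `dat` and all of its column names are assumptions.

```r
# Hedged R sketch of the derived variables and regression step described
# above; `dat` and its column names are illustrative (analysis was in SPSS).
dat$NLR <- dat$neut / dat$lymph            # neutrophil lymphocyte ratio
dat$PLR <- dat$plt  / dat$lymph            # platelet lymphocyte ratio
dat$SII <- dat$plt * dat$neut / dat$lymph  # systemic inflammatory index

# MNA-SF categories: 0-7 malnourished, 8-11 at risk, 12-14 normal;
# scores of 0-11 are pooled as "poor nutritional status" in this study.
dat$mna_cat <- cut(dat$mna_sf, breaks = c(-Inf, 7, 11, 14),
                   labels = c("malnourished", "at risk", "normal"))
dat$poor_nutrition <- dat$mna_sf <= 11

# Multivariate logistic model for moderate to severe infection;
# exponentiated coefficients give adjusted odds ratios (aOR) with 95% CI.
fit <- glm(moderate_severe ~ mna_sf + sex + vaccination + or_ct,
           family = binomial, data = dat)
exp(cbind(aOR = coef(fit), confint(fit)))
```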
Results Clinical characteristics of enrolled patients at admission Baseline characteristics are shown in Table 1 . A total of 324 patients were included in the analysis. Among all the patients, 47.2% (153/324) were male, the median (IQR) age was 73 (17) years, and the proportions of patients aged 65–70, 71–80 and over 80 years were 36.1%, 30.6% and 33.3%, respectively. Severe disease accounted for 2.5% (8/324), moderate disease for 23.1% (75/324) and mild disease for 74.4% (241/324). Most patients were unvaccinated or partially vaccinated (69.1%, 224/324), followed by patients who had received the third booster dose (13.6%, 44/324) and those with two doses (13%, 42/324). The median (IQR) hospitalization duration was 9.5 (12) days and the median (IQR) virus shedding duration was 16 (10) days. Hypertension was the most common comorbidity (50.9%), followed by diabetes (20.4%), cerebral infarct (14.2%) and coronary heart disease (11.4%). We analyzed the baseline laboratory examinations at admission. The median WBC, LC and ALB were 5.5 (3) ×10^9/L, 1.38 (0.73) ×10^9/L and 38 (6) g/L, respectively. Additionally, 35.8% and 51.9% of patients had raised CRP and IL-6 levels, respectively. The medians of the combined blood cell count indexes of inflammation were as follows: NLR (median: 2.38, IQR: 2.15), PLR (median: 148.28, IQR: 86.73) and SII (median: 493.23, IQR: 505.18). Characteristics based on nutritional status The incidence of poor nutritional status was 54.3% (40.7% were at risk of malnutrition and 13.6% were malnourished) (Table 1 ). The characteristics of the different levels of nutritional status are presented in Table 2 . Patients with poor nutritional status were found to be older (49.7% > 80 years old) (P < 0.001), less vaccinated (P < 0.001), and had more moderate to severe disease (P < 0.001), a longer virus shedding duration (P = 0.022) and more comorbidities (≥ 2) (P = 0.004). When comparing laboratory biochemical indicators at admission, lower levels of WBC (P = 0.010), LC (P < 0.001), ALB (P < 0.001), OR CT value (P = 0.005) and N CT value (P < 0.001) were found in the poor nutritional status group. Inflammation parameters were also compared: patients with poor nutritional status more often had elevated CRP (> 10 mg/L) (P < 0.001) and elevated IL-6, and had higher values of NLR (P < 0.001), PLR (P < 0.001) and SII (P = 0.012). Correlation between nutritional status and moderate to severe infection We analyzed factors influencing the occurrence of moderate to severe Omicron infection (n = 83). The results of the multivariate logistic regression are reported in Table 3 . We found that males were more likely to develop higher disease severity than females [aOR, 2.566 (95%CI: 1.313–5.017); P = 0.006]. Compared to unvaccinated/partially vaccinated patients, fully vaccinated/booster patients had an aOR of 0.295 (95%CI: 0.103–0.848; P = 0.023). A higher OR CT value was a protective factor against moderate to severe Omicron infection [aOR, 0.949 (95%CI: 0.908–0.992); P = 0.021]. In addition, for every 1 score increase in the MNA-SF score, the odds of moderate to severe Omicron infection decreased by 14.8% (aOR, 0.852 [95%CI: 0.734–0.988]; P = 0.034). After adjusting for gender, MNA-SF score, vaccination and comorbidities, the inflammatory parameters were no longer statistically significant (aOR, 1.000 [95%CI: 0.999-1.000]; P = 0.843).
Discussion The Omicron wave of the COVID-19 pandemic had a substantial impact between April and June 2022. This retrospective observational single-center study was conducted among hospitalized older patients infected with the Omicron variant in Shanghai. With the evolution of COVID-19 and the availability of treatments, the rates of severe illness and mortality have dropped significantly. However, older adults are a frail group and are more likely to experience severe disease progression [ 15 ]. In our study, 54.3% of older patients were identified as having poor nutritional status, which was close to the prevalence reported in a previous study (52.7%) [ 8 ]. In addition, we found several risk factors for moderate to severe Omicron infection, including male gender, being unvaccinated/partially vaccinated, a low MNA-SF score and a low OR CT value. These findings might help clinicians identify higher-risk patients among older adults and provide more comprehensive clinical treatment. Nutritional status plays a crucial role in the function of the immune system, supporting innate and adaptive immunity and influencing the proliferation and activity of immune cells [ 19 ]. We found that patients with poor nutritional status had higher levels of inflammation, and immunity is the cornerstone of host-pathogen interactions in any infectious disease [ 20 ]. Nutritional status also reflects the general condition of a patient, including physical condition, protein turnover, and immune competence. Inflammatory cytokine release caused by the virus might lead to a highly catabolic state and a reduction in protein synthesis [ 21 ]. In addition, a set of symptoms caused by COVID-19, such as nausea, diarrhea, vomiting and loss of taste, can result in decreased food intake [ 22 ]. One study identified that 28.6% of adult patients hospitalized for COVID-19 were malnourished 30 days after discharge, a proportion that might be higher in older patients due to poor oral health such as poor dentition, diminished strength of the masticatory muscles and tongue, and poor salivation [ 23 , 24 ]. In the study by Busra et al., the NRS-2002 tool showed an association with in-hospital mortality in older patients with COVID-19 in multivariate analysis, but the Geriatric 8 tool, which is very similar to the MNA-SF, did not [ 25 ]. The difference between the two tools might be due to the severity of disease, which is assessed by the NRS-2002 but not by the MNA-SF or Geriatric 8. Furthermore, the inclusion of a ‘malnourished’ category in the MNA-SF makes it applicable to older adults in clinical practice. In line with our study, their findings underline the importance of early assessment of nutritional status in the setting of COVID-19. As nutritional risk was highly prevalent among older adults with COVID-19 regardless of the nutritional screening tool applied, nutrition risk screening is necessary for every hospitalized older patient, and personalized nutritional support therapy should be incorporated into treatment [ 26 ]. Vaccination has been proven to be a protective factor against disease severity [ 27 ]. In the fifth COVID-19 wave in Hong Kong, the relative risk for death among people aged ≥ 60 who were unvaccinated was 21.3 times the risk among those who had received ≥ 2 doses and 2.3 times the risk among those who had received 1 vaccine dose [ 28 ].
Our study also found that older patients had a low vaccination rate (26.9%) and that unvaccinated or partially vaccinated patients were more likely to develop moderate to severe infection, regardless of nutritional status. Nevertheless, our study has some limitations. Firstly, the sample size was determined only by the number of consecutive inpatient admissions during the sample collection period; therefore, selection bias cannot be ruled out, given the small sample size. Secondly, we observed a correlation between nutritional status and disease severity in a cross-sectional study, but we cannot conclude that this is a causal relationship, and further verification is needed in prospective cohort studies. Piotrowicz et al. pointed out that malnutrition was one of the risk factors for post-COVID-19 acute sarcopenia in older adults [ 29 ]. Attention should be paid to ‘long COVID’ in older people, which can last more than 12 weeks from the start of the infection, as it might lead to involuntary weight loss and nutritional deficiencies [ 19 ]. Future research is needed to investigate the relationship between malnutrition and ‘long COVID’. Population aging is currently a worldwide concern. In China, the proportion of people aged 65 years and older was 13.5% in the 2021 report [ 30 ]. According to the United Nations’ reports, this proportion was approximately 10% globally in 2022, and it will continue to increase over the next few decades [ 31 ]. With the policy changing, protecting vulnerable older people from the effects of COVID-19 is a top priority [ 27 ]. In conclusion, this study demonstrated that older patients with poor nutritional status were more likely to develop more severe Omicron infections. It further confirmed that the MNA-SF is effective in identifying malnutrition in older adults and helps in early intervention to prevent infection progression.
Background The Omicron variant of Coronavirus disease 2019 (COVID-19) remains the dominant strain worldwide. Studies of nutritional status in geriatric people infected with the COVID-19 Omicron variant are limited. Thus, the aim of this study was to investigate the incidence of poor nutritional status among Omicron-infected older patients, and to explore the correlation between nutritional status and the severity of Omicron infection in older patients. Methods This is a retrospective cross-sectional study. According to the clinical symptoms, patients were divided into two groups: mild and moderate to severe. The Mini Nutritional Assessment short-form (MNA-SF) was conducted on admission, and poor nutritional status was defined as an MNA-SF score of 0–11. The inflammatory markers including the neutrophil lymphocyte ratio (NLR), platelet lymphocyte ratio (PLR) and systemic inflammatory index (SII) were calculated and compared between the two groups. Results A total of 324 patients were enrolled, with a median [interquartile range (IQR)] age of 73 (17) years. Overall, 241 cases were mild and 83 cases were moderate to severe at the time of diagnosis, and 54.3% of patients had poor nutritional status. Patients with poor nutritional status were found to be older (P < 0.001) and less vaccinated (P < 0.001), with a longer virus shedding duration (P = 0.022), more comorbidities (≥ 2) (P = 0.004) and higher values of NLR (P < 0.001), PLR (P < 0.001) and SII (P = 0.012). Vaccination, a higher cycle threshold value in the ORF1ab gene (OR CT value), female gender and a higher MNA-SF score were negatively associated with the probability of moderate to severe infection. For every 1 score increase in MNA-SF, the odds ratio of moderate to severe infection decreased by 14.8% [adjusted odds ratio (aOR), 0.852; 95% confidence interval (CI): 0.734–0.988; P = 0.034]. Conclusions Older patients with poor nutritional status are more likely to develop moderate to severe Omicron infection. Keywords
Acknowledgements The authors would like to express their sincere gratitude to the patients who participated in this study and shared their experiences during the COVID-19 pandemic. The authors would also like to thank the healthcare professionals who supported this study and provided care to the patients. Their dedication and expertise have been invaluable in conducting this research. This work would not have been possible without the collaboration and support of all those involved. Author contributions Yongmei Shi contributed to the conception of the research and review of the manuscript; Xiaohan Gu and Yongchao Guo equally contributed to the design of the research; Yaxiong Lu and Shihan Yang contributed to the acquisition of the data; Yongmei Jiang and Qianwen Jin contributed to literature search and collection. Qing Yun Li contributed to the interpretation of the data and review of the manuscript; and Xiaohan Gu drafted the article. All authors critically revised the article, agree to be fully accountable for ensuring the integrity and accuracy of the work, and read and approved the final article. Funding This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. Data availability The datasets used and analysed during the current study are available from the corresponding author on reasonable request. Declarations Conflict of interest All the authors have no conflicts of interest to disclose. Ethics approval and consent to participate This research protocol was approved by the Ethic Committee of Ruijin Hospital (2,022,373). Informed consent was obtained from all subjects and/or their legal guardian(s). Consent to participate All participants consented to take part in the study. Consent for publication Not applicable.
CC BY
no
2024-01-16 23:45:33
BMC Infect Dis. 2024 Jan 15; 24:88
oa_package/c9/e2/PMC10789013.tar.gz
PMC10789014
0
Background In August 2020, the first case of SARS-CoV-2 reinfection was described in Hong Kong, followed by numerous cases worldwide [ 1 , 2 ]. Reinfections, defined as a person being infected with an agent, recovering, and then being infected again at a later time, can be caused by either the same variant or a new variant of the same agent [ 3 ]. The reinfection rate of SARS-CoV-2 has been reported to range from less than 0.5% to more than 5%, depending on the dominant variant at the time of investigation, the duration of the study, as well as the country, population studied, vaccination coverage and background immunity [ 2 , 4 – 7 ]. The European Centre for Disease Prevention and Control (ECDC) currently defines a suspected COVID-19 reinfection as a positive PCR or rapid antigen test ≥60 days following a previous positive PCR, rapid antigen test or serology [ 8 ]. In contrast, the WHO case definition proposes at least 90 days between the episodes or, alternatively, genomic evidence of different lineages in the two episodes regardless of time interval, for an episode to be considered a reinfection [ 9 ]. Countries have also used other intervals and criteria for reporting suspected reinfections [ 8 ]. In a survey conducted by ECDC in 2021, 13 European countries reported having a case definition; however, the time interval between episodes ranged from 45 to 90 days among the countries, with the majority using 90 days. Five countries had also included symptom-free periods in their case definitions. Although the case definitions were similar, they were not standardized. Thus, there is a need for an assessment of the intervals used to identify reinfections to enable comparisons across different countries and regions. The first case of SARS-CoV-2 in Norway was detected on 26 February 2020 [ 10 ]. Testing criteria and recommendations changed throughout the pandemic in Norway. Up to May 2020, the availability of SARS-CoV-2 testing was limited and only selected groups were tested. Following this period, test capacity was strengthened, and all individuals with any respiratory symptom were recommended to get tested for SARS-CoV-2 [ 11 ]. By the end of 2021, testing was further scaled up with the introduction of rapid antigen tests, including self-administered antigen tests. Test activity remained high until recommendations were eased and restrictions lifted at the end of January 2022 [ 12 ]. The emergence of the Alpha variant raised concerns about its potential to be more transmissible or to escape previously acquired immunity, resulting in increased variant surveillance. In Norway, the Alpha variant was first detected in December 2020, followed by the Delta [ 13 ] and Omicron [ 14 ] waves (Fig. 1 ). In Norway, cases were counted as reinfections if there was a positive PCR result 90 days after a previous positive PCR test from 24.03.2020, and 180 days between episodes from 01.07.2021. However, as new variants emerged, the interval was changed to 60 days from 21.01.2022, in accordance with the ECDC definition. Thus, there is a need to describe the ability of the different intervals to identify reinfections and the impact of implementing these intervals in national surveillance systems, as well as to assess potential factors that could affect the risk of reinfection.
Methods Aim and data sources The aim of this study was to describe the frequency of SARS-CoV-2 reinfections in Norway during 2020-2022 using different time intervals between infections, as well as to assess the risk of SARS-CoV-2 reinfection in terms of variants, vaccination status, demographic characteristics, and underlying comorbidities. The data were retrieved from the Norwegian national preparedness registry for COVID-19 (BeredtC19), which covers the entire Norwegian population and contains individual-level data on demographics, results of laboratory testing, vaccinations, and diagnoses from primary and specialist health services. The data are reported from central health registries, national clinical registries, and national administrative registries [ 15 ] and are linkable by a unique national identity number for all Norwegian citizens, as well as individuals born or permanently residing in Norway. We included data on positive SARS-CoV-2 tests from the MSIS laboratory database, which receives SARS-CoV-2 test results from all Norwegian microbiology laboratories and testing stations (PCR and antigen tests only; self-administered antigen tests are not registered). It is mandatory for all Norwegian microbiology laboratories to report all laboratory results, both positive and negative, to the MSIS laboratory database. The laboratory results are electronically reported using the National Laboratory Classification System. The MSIS laboratory database also provided data on SARS-CoV-2 variants, as reported by the Norwegian microbiology laboratories. Further details regarding the variant surveillance in Norway are described on the Norwegian Institute of Public Health’s webpage [ 16 ]. Data on comorbidities were based on the Norwegian Patient Registry (NPR) and ICPC-2 codes from the Norwegian Control and Payment of Health Reimbursements Database (KUHR) as outlined previously [ 17 ], while COVID-19 vaccinations were retrieved from the Norwegian Immunisation Registry (SYSVAK), and demographic variables (sex, age, county and country of birth) were taken from the National Population Register. Study design We conducted a register-based study using individual-level data for the period 26 February 2020 to 31 January 2022. In these analyses, we included all cases with a positive PCR or antigen test for SARS-CoV-2 among individuals with an available national identity number. In order to assess the risk of reinfection during periods when different variants were dominant, we conducted separate cohort studies per variant wave. The definitions and methods used are clarified below. SARS-CoV-2 infections and reinfections A SARS-CoV-2 infection was defined as a person having a positive SARS-CoV-2 PCR test or antigen test registered in the MSIS laboratory database. For the primary infection, the date of the first positive test was used as the time of infection. Potential reinfections were defined as a positive PCR or rapid antigen test at a minimum interval of 30, 60, 90 or 180 days after a previous positive test. If there were several positive tests within the given time interval, these were considered as belonging to the first infection.
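As an illustration of this rule, the sketch below (written in R; the published analysis was performed in Stata) collapses one person's positive test dates into infection episodes. The function name and the choice of measuring the interval from the first positive test of the current episode are our assumptions based on the wording above.

```r
# Hedged sketch: group a person's positive test dates into infection
# episodes. A positive test opens a new episode (a potential reinfection)
# only if at least `gap` days have passed since the first positive test of
# the current episode; otherwise it is assigned to the ongoing episode.
assign_episodes <- function(test_dates, gap = 60) {
  test_dates <- sort(test_dates)
  episode <- integer(length(test_dates))
  episode[1] <- 1
  episode_start <- test_dates[1]
  for (i in seq_along(test_dates)[-1]) {
    if (as.numeric(test_dates[i] - episode_start) >= gap) {
      episode[i] <- episode[i - 1] + 1  # new episode: potential reinfection
      episode_start <- test_dates[i]
    } else {
      episode[i] <- episode[i - 1]      # belongs to the ongoing episode
    }
  }
  episode
}

# Example: positive tests on days 0, 20 and 75 yield episodes 1, 1, 2
# with a 60-day interval, but 1, 1, 1 with a 90-day interval.
assign_episodes(as.Date("2021-01-01") + c(0, 20, 75), gap = 60)
assign_episodes(as.Date("2021-01-01") + c(0, 20, 75), gap = 90)
```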
Variant waves Using virus variant data from the MSIS laboratory database and the date of the positive test, we defined variant waves as periods when one variant accounted for ≥ 90% of the tests that had been PCR screened or whole genome sequenced, allowing for temporary fluctuations (maximum 2 weeks) during which the percentage of the dominant variant could drop to ≥ 88%. We defined three dominant variant waves: the Alpha wave during weeks 11-22/2021, the Delta wave during weeks 30-49/2021 and the early Omicron wave (mainly BA.1) from week 2/2022 to the end of the study period (31 January 2022). From week 53/2020 until the beginning of the Delta wave, more than 10% of screened/whole genome sequenced cases were the Alpha variant. Prior to this, all cases that were not ascertained to be the Alpha variant were assigned to the group “Pre-Alpha variants” (Fig. 1 ). Vaccination status In Norway, for the duration of this study, the mRNA vaccines Comirnaty (BNT162b2, BioNTech-Pfizer) and Spikevax (mRNA-1273, Moderna) were primarily used. The adenoviral vector-based Vaxzevria (AstraZeneca) and Jcovden (Janssen-Cilag International NV) were used to a limited extent during 2021, until these vaccines were suspended in Norway when concerns were raised about an increased risk of cerebral venous sinus thrombosis after vaccination [ 18 ]. The details of the vaccination program in Norway for adults and adolescents are given in previously published work [ 19 , 20 ]. Following the official definitions used for counting the number of doses [ 21 ], we defined vaccination status using data on the number of doses and dates of vaccination recorded in the Norwegian Immunisation Registry (SYSVAK) as:
- Unvaccinated: have not received any COVID-19 vaccine
- Vaccinated with one dose of a COVID-19 vaccine <21 days prior
- Vaccinated with one dose of a COVID-19 vaccine ≥21 days prior
- Completed primary vaccination series: two doses of an mRNA COVID-19 vaccine 7-179 days after the second vaccine dose, or 21 days after an initial dose of Jcovden
- Completed primary vaccination series: two doses of an mRNA COVID-19 vaccine ≥180 days after the second vaccine dose, or ≥180 days after an initial dose of Jcovden
- Vaccinated with three doses of a COVID-19 vaccine; individuals were considered as vaccinated with three doses if the third dose was received at least 7 days prior.
Underlying comorbidities with increased risk of severe COVID-19 Individuals with underlying comorbidities that confer an increased risk of severe COVID-19 were prioritized for vaccination [ 22 ]. We categorized cases into three groups: i) no underlying comorbidities, ii) medium-risk comorbidity and iii) high-risk comorbidity, as described elsewhere [ 17 ]. Statistical analysis Description of reinfections We described numbers and proportions of cases reported to MSIS by different characteristics during the study period, distinguishing primary infections and reinfections using different time intervals (30, 60, 90 or 180 days between positive tests). Risk of reinfection by variant wave In order to assess the risk of reinfection, we conducted separate cohort studies for each variant wave. At the start of each cohort study/wave, individuals previously infected once were included and followed up (being at risk of reinfection) until the end of the wave. The outcome of interest was being reinfected (once), using a time interval of ≥60 days since a previous positive test. Data were censored at the end of follow-up, reinfection or death.
The variables considered as exposures and taken into account in our analyses were sex, age, county of residence, country of birth, underlying comorbidities, vaccination status (determined daily throughout the waves) and time period of the previous SARS-CoV-2 infection. To assess the association between covariates and the risk of reinfection during the different variant waves, we calculated hazard ratios (HR) with 95% confidence intervals (CI) using a Cox proportional hazards model on a calendar time scale, and adjusted hazard ratios (aHR) using a stratified multivariable Cox proportional hazards model. The underlying time scale was calendar time based on sampling date, with entry at the start of each wave. We adjusted the analysis for sex, age, underlying comorbidities, vaccination status, and previous SARS-CoV-2 infection, and stratified by county of residence (11 levels) and country of birth (3 levels). We chose to adjust for vaccination rather than conduct separate analyses, in order to assess the impact of vaccination without complicating the analyses. Some individuals could have been vaccinated before the start of the wave (before or after being infected once), and some could be vaccinated at different points of the wave (with one or more doses) before or after their reinfection. Different analyses, with a different set-up, would need to be planned to explore exposures with two distinct outcomes: reinfection (only infected before) or breakthrough infection (infection and vaccination before). In our study, our estimates are for these outcomes combined. Proportionality was assessed using log-log plots of survival (not shown) and found to be satisfactory. Participants were followed until endpoint, death, emigration, or the end of the respective wave. We should note that for the analyses assessing the risk of reinfection by different characteristics, we excluded 67 individuals who had received more than three vaccine doses by January 2022, as the numbers were too small to allow any comparisons [ 23 ]. Moreover, we excluded individuals with unknown county of residence as well as individuals with no reported infection prior to the variant wave of interest. Sensitivity and exploratory analyses As testing for SARS-CoV-2 was not readily available without a physician’s referral until 12 August 2020 [ 11 ], a sensitivity analysis was conducted excluding all individuals with a first infection before this date if they did not have a second infection before the start of the wave studied. As part of an exploratory analysis, an additional Cox proportional hazards model was constructed in which each variable in turn was included in a multivariate model together with the previous episode of SARS-CoV-2 infection, stratifying for all other variables. For the variable previous episode of SARS-CoV-2 infection, sex was included in the multivariate model, stratifying for all other variables. Furthermore, a multivariate random-effects logit model was constructed, including sex, age group, risk group, vaccine status and previous episode of SARS-CoV-2 infection as independent variables (Additional files 1 , 2 and 3 ). Statistical analysis was performed in Stata version 17 (Stata Corporation, College Station, Texas, US).
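To clarify the structure of this model, a hedged sketch in R is shown below (the actual analysis was run in Stata 17); the data frame `w`, split into counting-process intervals so that vaccination status can vary over time, and all column names are our assumptions.

```r
# Hedged R sketch of the wave-specific stratified Cox model described above.
# `w` holds one row per person-interval within a wave: (tstart, tstop]
# intervals on the calendar time scale, with entry at the start of the wave
# and exit at reinfection, death, emigration or end of wave; `reinfected`
# is the event indicator and `vacc_status` the time-varying covariate.
library(survival)

fit <- coxph(Surv(tstart, tstop, reinfected) ~
               sex + age_group + comorbidity + vacc_status + prev_period +
               strata(county) + strata(country_birth),
             data = w)
summary(fit)   # adjusted hazard ratios, exp(coef), with 95% CI

# The proportional hazards assumption can be checked graphically with
# log-log survival plots, or numerically with cox.zph(fit).
```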
Results From 26 February 2020 to 31 January 2022, 768 755 individuals were reported to have tested positive for SARS-CoV-2 in Norway. Among these, 683 121 (89%) had tested positive for SARS-CoV-2 once, 85 420 (11%) were registered with 2–5 positive tests, whereas 214 (0.03%) were reported to have more than 5 positive tests. We defined potential reinfections using time intervals of 30, 60, 90 or 180 days between positive tests, and the distributions of reinfections are presented in Table 1 . As expected, the number of potential reinfections identified decreased as longer time intervals were applied, ranging from 23 879 (30 days) to 13 960 (180 days). This corresponds to an overall reinfection rate ranging from 1.8% to 3.1% of all infections (Table 1 ). Similarly, the number of reinfections where both infections occurred within the same variant wave decreased with increasing time intervals, with numbers being generally low, ranging from 333 to 2 (Table 2 ). In accordance with the ECDC surveillance case definition for suspected reinfection, the 60-day interval was used in the remainder of the study. Screening or sequencing results for both the first infection and the subsequent reinfection were available for 7.1% ( n =1544) of all suspected reinfections (Table 3 ). When using the 60-day interval to identify reinfections, allowing for shorter intervals if information about the strain was available for both the first infection and the reinfection, 18% ( n =3892) of the 21 649 potential reinfections had information from variant screening or sequencing. The median time between the first infection and the first reinfection was 39 weeks (interquartile range: 32 [50-20]). Omicron, primarily BA.1 with some BA.2, accounted for 85.4% ( n =3324) of the reinfections confirmed by sequencing or screening (Table 3 ). Among all suspected reinfections, 86% ( n =18 576) were assigned to a variant wave, 80% ( n =17 340) occurred during the Omicron wave, 5% ( n =1 018) occurred during the Delta wave and 14% ( n =3 073) were in between waves (Fig. 1 and Table 3 ). Considering individuals with <60 days since a previous infection as not at risk of reinfection, a total of 75 986 individuals were at risk of SARS-CoV-2 reinfection during the Alpha wave, 130 048 individuals during the Delta wave and 258 107 during the early Omicron wave (10 January–31 January 2022). Among individuals at risk of reinfection, 5.9% ( n =15 151) were registered with a reinfection during the early Omicron wave, compared to only 0.6% during the Delta wave and 0.2% during the Alpha wave. The proportions of men and women were similar (47.8%–48.9% women). Women had a slightly increased risk of reinfection with Omicron (6.2% in women vs. 5.6% in men, aHR = 1.15; 95% CI 1.11–1.18; p <0.01), which was not observed for reinfections with other variants. Younger age groups had a higher risk of reinfection during the early Omicron wave compared to the reference group of 30–44 year-olds, with the highest risk among 12–17 year-olds (aHR = 1.67; 95% CI 1.58–1.76; p <0.01). The lowest risk of reinfection was in the age group 75 years and older (aHR = 0.11; 95% CI 0.08–0.16; p <0.01). The reduced risk of reinfection among the older age groups was also found during the Delta wave. Having medium- or high-risk comorbidities seemed to confer a reduced risk of reinfection with Omicron compared to those without comorbidities in univariate analysis.
However, in the multivariate analysis, only the high-risk comorbidity group had a borderline significant reduced risk of reinfection with Omicron, while the medium-risk group had a slightly increased risk (Table 4 ); this was not readily observed for reinfections with other variants. The risk of reinfection during the early Omicron wave varied between the counties. The highest percentage of reinfections was found in Oslo and the neighboring county of Viken (8.2% and 6.8%, respectively, for the Omicron wave). For the Omicron and Delta waves, we did not find any differences in the risk of reinfection between individuals born in Norway and people born abroad. Only 135 (0.2%) reinfections were identified among the 75 986 individuals infected before the end of the Alpha wave, making it difficult to draw conclusions about this wave. However, during the Alpha wave, those born in Norway had a reduced risk of reinfection compared to those born outside of Norway (HR 0.58, 95% CI 0.41-0.82, p <0.01) (Additional file 4 ). Vaccinated individuals had a reduced risk of reinfection compared to unvaccinated individuals. There seemed to be little difference between those vaccinated with two and those with three doses, while those vaccinated with only one dose had intermediate protection against reinfection. Having had a previous infection during the more recent waves was associated with a lower risk of reinfection than having had a previous infection during the earlier stages of the pandemic (Table 4 ). Neither excluding infections from before 12 August 2020 (not shown) nor stratifying on the individual number of SARS-CoV-2 test events (Additional file 5 ) had an impact on the results.
Discussion In this study, we examined the rate of potential SARS-CoV-2 reinfections in Norway from 2020 to early 2022, during the Alpha, Delta and early Omicron waves, while exploring the use of different detection time interval criteria. Notably, during the early Omicron period, both infections and reinfections surged in Norway (Fig. 1 B, Table 3 ), aligning with reports from other countries [ 5 , 6 , 24 – 26 ]. This increase in infections and reinfections could be attributed to enhanced infectivity [ 27 ], the immune-escaping features of the Omicron variant [ 28 ], breakthrough infections among previously vaccinated people [ 29 ] and waning post-infection immunity among people with previous infection [ 30 ]. These factors are widely known and have been discussed in previously published reports [ 31 ]. Our study assessed the risk of SARS-CoV-2 reinfection and identified variations among population sub-groups. Previous studies have generally used one specific time interval to define potential reinfections, with most using a 90-day minimum interval between episodes [ 6 , 24 – 26 , 32 , 33 ]. In our analysis, we compared reinfections using time intervals of 30, 60, 90 or 180 days between positive tests, and we observed that the distribution of reinfection frequency did not differ substantially for intervals ≤ 90 days (Table 1 ). We should note that since the variant waves were defined as periods when the dominant variant was found in a minimum of 90% of the screened or sequenced infections, some of these reinfections within the same variant wave might involve different variants. On the other hand, studies have also shown that some individuals have persistent infection or viral shedding up to three months after an infection [ 9 , 34 ]. To mitigate the possibility of misclassifying persistent infections as reinfections, we chose the 60-day cut-off to define potential reinfections, consistent with the ECDC definition [ 8 ]. Using a 60-day cut-off to define reinfections, the reinfection rate ranged from 0.2% during the Alpha wave to 0.6% during the Delta wave, peaking at 5.9% during the early Omicron wave (Table 4 , Table 5 , Additional file 4 ). Previous studies have reported SARS-CoV-2 reinfection rates ranging from less than 0.5% to above 5% [ 2 , 4 – 7 ]. The diverse reinfection rates reported globally could be due to differences in case definitions for reinfections, study timing and duration in relation to different variants and vaccine availability, as well as differences in testing activity and infection pressure. It is important to acknowledge that this study did not directly account for changes in infection pressure and testing, which is a limitation. To adjust for test activity, a secondary analysis stratifying on individual test activity was performed (Additional file 5 ); stratifying for test activity did not affect the results of the study. The considerably higher infection pressure during the early Omicron wave is in itself expected to have increased the likelihood of reinfection. Infection pressure is thought to be correlated with age and county. While our study adjusted for age and county in a multivariable Cox regression model, accounting for potential biases, the complexity of infection pressure and testing nuances during the Omicron wave demands cautious interpretation. We should also note that the rates of reinfection may have been underestimated in our study and in other studies, as the existing surveillance systems could not detect all asymptomatic infections.
As the Omicron variants have been reported to be less severe and more often asymptomatic than Alpha and Delta [ 35 ], the underestimation of reinfections due to asymptomatic cases could be higher during the early Omicron wave than during preceding waves. Also, the reinfection rate for Omicron could be further underestimated, as the Omicron period in our study was restricted to the early phase of its occurrence. Lastly, self-administered antigen tests are not reported to the national surveillance system in Norway, which could cause further underestimation of the number of reinfections. Therefore, our results should be interpreted with caution and restricted to the period included. Regarding the association between reinfections and sex, we found that during the Omicron wave, women had a slightly increased risk of reinfection compared to men, whereas no significant difference was observed during the Alpha and Delta waves. This finding is consistent with studies from France [ 4 ] and Serbia [ 6 ], but not Iceland [ 5 ]. However, the number of reinfections during the Alpha and Delta waves was small, resulting in a lack of power to detect potential differences. Furthermore, the higher risk of reinfection in women during the early Omicron wave was only slight and of limited practical significance. In general, the reinfection rates for Omicron largely followed infection numbers for January 2022, when there was no difference in the number of infections between men and women [ 36 ]. Throughout the Alpha, Delta, and Omicron waves, the infection rate among individuals aged 60 years and above remained low compared to teenagers and young adults [ 21 , 37 – 39 ]. A similar pattern was observed for reinfection risk, with a reduced risk of reinfection during the Omicron and Delta waves among age groups of 44 years or older, compared to the 30–44 year-olds. The 30–44 age group was chosen as the reference, as the older age groups and risk groups were prioritized for vaccination, which could affect the risk of infection and subsequent reinfection. Likewise, younger age groups were vaccinated later, with no strong vaccine recommendation for healthy adolescents <16 years old [ 22 ]. The highest risk of reinfection during the early Omicron wave was found among the 12–17 year-olds, which corresponds to the overall infection risk across age groups at the time [ 21 ]. We should note that changes in testing requirements, such as mass testing in schools, were introduced and maintained in several counties during the Alpha, Delta and Omicron waves. This could have influenced the detection of cases and reinfections, including more asymptomatic cases, among children and teenagers aged 6-18 years. However, stratifying on individual test activity did not impact the conclusions (Additional file 5 ). It is possible that the introduction of rapid antigen tests affected the children subjected to mass testing differently than the general population, and “testing fatigue” could cause a greater reduction in the proportion of positive tests subsequently confirmed by PCR. Studies assessing reinfections during the early Omicron wave in Iceland and France similarly found a decreased reinfection risk among the older age groups [ 4 , 5 ]. The risk of developing severe disease from SARS-CoV-2 increases with age [ 40 ], which might lead to behavioral changes in older individuals, resulting in fewer social contacts than among younger individuals.
Additionally, older individuals and those with high-risk comorbidities were among the first to be offered vaccines and subsequent booster doses against COVID-19 [ 22 ]. Although the model is adjusted for vaccination status, residual confounding may remain and could explain the decreased reinfection risk among these groups during the Omicron wave. Throughout the pandemic, the proportions of SARS-CoV-2 cases and hospitalizations have been higher among individuals born abroad compared with those born in Norway, with variations observed between countries of origin [ 12 , 41 ]. Therefore, a reduced risk of reinfection among Norwegian-born individuals could be anticipated. However, this difference was only observed during the Alpha wave, and the slight reduction in risk among Norwegian-born individuals during the early Omicron wave, as seen in the exploratory analysis (Additional file 1), holds limited practical significance. Previous reports attribute the higher infection risk among individuals born abroad to socioeconomic disparities, densely populated areas, cramped living conditions and increased contact with individuals traveling between countries [ 41 ]. The impact of these factors may have been more pronounced in the early stages of the pandemic but could have been partially mitigated by public health interventions to increase awareness in these groups and by the emergence of more infectious variants later on. The time since a previous infection has been shown to correlate with the risk of reinfection [ 42 ]. The lower protection against Omicron among individuals previously infected during earlier waves should be interpreted as an effect of the time since the previous infection, rather than as different protection against Omicron conferred by the different variants. Although differences in protection against Omicron conferred by the various variants cannot be entirely excluded, the sequential nature of the waves in this study makes it unsuitable for exploring such differences. In addition to previous infections, vaccines could also contribute to the population’s resistance towards the different variants. Vaccines have previously been shown to be effective against infections and severe outcomes of COVID-19 [ 19 , 31 ]. There was a clear protective effect of the vaccines against reinfection; however, there did not seem to be a large difference in risk between those receiving two doses and those who received an additional booster. Despite our efforts, the study has some additional limitations. Using a fixed interval to define reinfections does not consider full recovery or persistent infection. This is especially challenging for surveillance systems during a pandemic with a high case load. Defining reinfections based on sequencing and variant typing in surveillance systems with this magnitude of cases is daunting, emphasizing the need for a globally agreed-upon definition. Another limitation is that the results were not adjusted for test activity. Age and county adjustments in the multivariable model probably reduced this bias, and a secondary analysis stratifying for test activity did not impact the conclusions (Additional file 5). The introduction of rapid antigen tests, especially towards the end of 2021, could have caused an underestimation of reinfections. However, all individuals with a positive self-administered antigen test were recommended to take a free-of-charge PCR test, and test activity remained high until the end of January 2022 [ 12 ].
We therefore believe that the introduction of self-administered antigen tests had little impact on the probability of reporting a positive test, although limited underreporting cannot be completely ruled out towards the end of 2021 and in January 2022. Variations in health behaviors among groups, unaccounted for in this study, could be potential confounders, and we did not have information on differences in behavior patterns to assess how these could impact our findings. The results of this study are not only relevant to Norway but can be generalized with consideration of each country’s test capacity, restrictions and control measures. However, identifying reinfections using a 60-day interval requires a surveillance system registering all tests.
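To make the interval-based case definition discussed above concrete, the following is a minimal sketch — not the study's actual pipeline — of counting potential reinfections from per-person positive test dates under different minimum-interval criteria. The table layout and column names are hypothetical, and the comparison to the most recent positive test is one plausible reading of "interval between positive tests".

```python
import pandas as pd

MIN_INTERVALS = [30, 60, 90, 180]  # candidate minimum days between positive tests

def count_reinfections(tests: pd.DataFrame, min_days: int) -> int:
    """Count positive tests occurring at least `min_days` after the person's
    previous positive test; earlier positives within the window are treated
    as the same episode (e.g., persistent shedding)."""
    n = 0
    for _, person in tests.sort_values("test_date").groupby("person_id"):
        last_positive = None
        for date in person["test_date"]:
            if last_positive is not None and (date - last_positive).days >= min_days:
                n += 1  # new episode: a potential reinfection
            last_positive = date
    return n

# Toy data: person 1 is reinfected under any criterion; person 2 only if the
# minimum interval is 30 days.
tests = pd.DataFrame({
    "person_id": [1, 1, 2, 2],
    "test_date": pd.to_datetime(
        ["2021-01-10", "2021-12-15", "2021-11-01", "2021-12-20"]),
})
for d in MIN_INTERVALS:
    print(f"{d}-day criterion: {count_reinfections(tests, d)} reinfection(s)")
```

Under a 60-day criterion, person 2's second positive (49 days after the first) is treated as the same episode — exactly the persistent-infection misclassification the cut-off is meant to avoid.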
Conclusion The emergence of the Omicron variant led to a substantial increase in infections and reinfections in Norway, with the highest risk of detected reinfection observed among teenagers and young adults. The risk of reinfection seemed to follow patterns similar to the risk of first infection. Individuals with first infections during waves at the start of the pandemic had a higher risk of reinfection than those infected during one of the more recent waves, indicating that post-infection waning immunity is an important factor. Vaccination against SARS-CoV-2 was associated with protection against reinfection. Our findings could assist in evaluating vaccination policies for people previously infected, but further studies are needed to evaluate the impact of multiple vaccine doses and waning immunity.
Background SARS-CoV-2 reinfection rates have been shown to vary depending on the circulating variant, vaccination status and background immunity, as well as the time interval used to identify reinfections. This study describes the frequency of SARS-CoV-2 reinfections in Norway using different time intervals and assesses potential factors that could impact the risk of reinfection during the different variant waves. Methods We used linked individual-level data from national registries to conduct a retrospective cohort study including all cases with a positive test for SARS-CoV-2 from February 2020 to January 2022. Time intervals of 30, 60, 90 or 180 days between positive tests were used to define potential reinfections. A multivariable Cox regression model was used to assess the risk of reinfection in terms of variants, adjusting for vaccination status, demographic factors, and underlying comorbidities. Results The reinfection rate was 0.2%, 0.6% and 5.9% during the Alpha, Delta and early Omicron waves, respectively. In the multivariable model, younger age groups were associated with a higher risk of reinfection compared to older age groups, whereas vaccination was associated with protection against reinfection. Moreover, the risk of reinfection followed a pattern similar to the risk of first infection. Individuals infected early in the pandemic had a higher risk of reinfection than individuals infected in more recent waves. Conclusions Reinfections increased markedly during the Omicron wave. Younger age and primary infection during earlier waves were associated with an increased reinfection risk compared with primary infection during more recent waves, whereas vaccination was a protective factor. Our results highlight the importance of age and post-infection waning immunity and are relevant when evaluating vaccination policies. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-024-17695-8. Keywords
Supplementary Information
Abbreviations ECDC: The European Centre for Disease Prevention and Control; Beredt C19: Norwegian national preparedness registry for COVID-19; NPR: The Norwegian Patient Registry; KUHR: Norwegian Control and Payment of Health Reimbursements Database; CI: Confidence interval; aHR: Adjusted hazard ratio. Acknowledgements We wish to thank all those who have helped to collect and report data to the national emergency preparedness registry at the Norwegian Institute of Public Health (NIPH) throughout the pandemic. We are grateful to all health professionals who performed millions of laboratory tests for COVID-19 and to those who contributed to vaccinating the Norwegian population. Additionally, we thank all laboratory personnel at the regional laboratories and the Virology and Bacteriology departments at NIPH who were involved in the analyses of samples, national variant identification, and whole genome analysis of SARS-CoV-2 viruses. We would also like to acknowledge our colleagues at the NIPH who have contributed to the data cleaning from different registries throughout the pandemic. Authors’ contributions All co-authors were involved in the conceptualization of the study. HB, MS and LV drafted the study protocol and coordinated the study. HB, MS, LV, GT, KB and OH contributed directly to the acquisition of data. HB, MS, LV and GT contributed to data cleaning, verification and preparation. HB and LV had access to the final linked dataset. HB conducted the statistical analysis with support from MS, LV, GT and ABK. All co-authors contributed to the interpretation of the results. HB, MS and LV drafted the manuscript. All co-authors contributed to the revision of the manuscript and approved the final version for submission. Funding Open access funding provided by Norwegian Institute of Public Health (FHI). This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Availability of data and materials The datasets analyzed during the current study come from the national emergency preparedness registry for COVID-19, housed at the Norwegian Institute of Public Health. The preparedness registry comprises data from a variety of central health registries, national clinical registries and other national administrative registries. Further information on the preparedness registry, including access to data from each data source, is available at https://www.fhi.no/en/id/infectious-diseases/coronavirus/emergency-preparedness-register-for-covid-19/ [15]. Declarations Ethics approval and consent to participate Ethical approval for this study was granted by the Regional Committees for Medical and Health Research Ethics – Southeast Norway, reference number 249509. The need for informed consent was waived by the Regional Committees for Medical and Health Research Ethics – Southeast Norway. Consent for publication Not applicable. Competing interests The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
CC BY
no
2024-01-16 23:45:33
BMC Public Health. 2024 Jan 15; 24:181
oa_package/9b/01/PMC10789014.tar.gz
PMC10789015
0
Background Since the 1978 Alma-Ata Declaration on primary health care, there has been strong advocacy for community empowerment and involvement in designing, implementing and evaluating health activities [ 1 , 2 ]. Empowering communities typically involves building their capacities and providing them with appropriate information to prioritize their healthcare needs and implement actions that seek to improve the healthcare system [ 3 ]. One way to empower communities is through community health management committees (CHMCs) and community health volunteers (CHVs) [ 4 – 6 ]. The CHMCs are voluntary, informal advisory groups, typically composed of 9–13 members, formed to create a clear link between the community and the formal healthcare system. They act in the interests of their community, especially in decision-making, and also supervise the activities of the CHVs [ 7 ]. CHVs are lay community health workers empowered to provide voluntary, non-specialist basic healthcare support for their communities without receiving a regular salary or holding a ‘confirmed’ position within the formal health system [ 7 , 8 ]. Past studies [ 4 , 6 , 8 ] have found that CHMCs and CHVs are pivotal in improving health outcomes. In Ghana, the Ghana Health Service and Teaching Hospitals’ Act (Act 525) of 1996 provides for health committees at various levels of the health system [ 9 ]. The introduction of the community-based health planning and services (CHPS) programme further provided greater recognition of CHMC and CHV roles in the Ghanaian health system [ 10 ]. The CHPS programme was introduced nationally in 1999 to increase healthcare access and empower local communities to have greater control over their healthcare activities [ 10 ]. It is a national primary health care (PHC) strategy aimed at mobilizing grassroots community resources and leadership to reduce health inequalities and remove geographical and physical barriers to healthcare access in Ghana, particularly for lower-income populations. At the centre of the CHPS implementation were the significant roles of the CHMCs and the CHVs [ 11 ]. Since implementation of the CHPS policy, it appears that no studies have specifically investigated the effect of CHMCs and CHVs on the Ghanaian healthcare system. Although some studies [ 12 , 13 ] have investigated CHVs and CHMCs in Ghana, these have mainly focussed on their roles in relation to particular targeted health programmes. For instance, one study [ 12 ] focussed on understanding the specific role of the CHMCs and CHVs in the distribution of azithromycin for the control of trachoma in Ghana; another [ 13 ] evaluated the impact of CHVs in the control of diarrhoea and fever among children. Whilst these studies provide useful findings about the roles of the CHMCs and CHVs within specific health programmes, there is a need to consider their role in improving the overall health system, particularly given the intent of the national health policies and legislation promoting patient–public engagement (PPE) activities for improved health in Ghana. This research, therefore, aims to examine the role of the CHVs and the CHMCs in improving the Ghanaian health system.
Methods Study setting The study was conducted in the Afigya-Kwabre South, Sekyere South and Asante-Akim North Districts in the Ashanti region of Ghana. The three districts have a combined population of about 440 000 distributed over a geographical area of approximately 1991 km² [ 14 – 16 ]. Most residents in the districts are peasant farmers growing food crops such as maize, cassava, plantains, yams, citrus and vegetables on a subsistence basis [ 15 ]. The majority of the residents belong to the Ashanti ethnic group, and Twi is the most commonly spoken language in the districts. Regarding healthcare delivery, the Ashanti Regional Directorate of Health Services (RDHS) is administratively responsible for supervising healthcare services in the region. The RDHS also supervises the District Directorate of Health Services (DDHS), which implements health policy initiatives at the local level and supervises healthcare activities across all communities within the district [ 17 , 18 ]. Design and sampling This study used a qualitative case study research design [ 19 ]. It was part of a broader qualitative study investigating the role of PPE in health system improvement in Ghana. Three districts with recognized good PPE structures, including CHMCs and CHVs, were identified. Overall, six communities (two from each district) were selected. In identifying potential districts and communities for this study, an informal survey via WhatsApp was conducted among health service administrators in the Ashanti region, Ghana, asking for recommendations of districts and communities with good PPE practices. The survey responses were complemented by suggestions from other key stakeholders in the Regional and District Health Directorate Offices whose roles were directly linked to supervising health service activities in the various districts and communities in the region. Participants were purposively selected using maximum variation sampling from the national (macro/policy), district and community health system levels. The national-level participants included representatives from the Ministry of Health. The district-level participants were District Directors of Health Services and Health Service Administrators. Various cadres of health professionals, such as midwives, community health nurses, CHPS coordinators and public health nurses from the district and the communities, were also selected, as were participants in CHMC and CHV roles. The rest were community-based participants such as assemblymen/assemblywomen, traditional leaders (chiefs/queen mothers), and residents. This mix of participants was intended to provide additional perspectives, breadth and depth on the experiences and roles of the CHMCs and CHVs in health system improvement. Data collection Prior to data collection, the chiefs and elders of each community were visited by the lead author to seek their consent for the study, in accordance with local Ghanaian culture. The community-level participants (CHMC members, CHVs, assembly members, residents and traditional leaders) were contacted through personal visits. The health professionals (public health nurses, midwives, community health nurses, hospital administrators, district directors of health services, and the representative from the Ministry of Health) were contacted through official letters, subsequently followed up with telephone calls. The lead author then conducted semi-structured interviews between December 2020 and August 2021.
An interview guide developed specifically for this study was used to obtain the views, experiences and opinions of the participants on the role of CHMCs and CHVs in improving the health system. The interview guide broadly focussed on understanding how CHMCs and CHVs were selected and their specific roles in improving health and the overall health system. None of the selected participants withdrew from the study. Interviews lasted 45–60 min on average. Before each semi-structured individual interview, the participant information sheet was read out to each participant. A local-language translation of the participant information was also read to members who could not communicate fluently in English. Written consent was obtained from each participant before the commencement of the interview. Data analysis The first author transcribed all 35 interviews. Those conducted in the Twi language were directly transcribed into English by the first author with assistance from a native Twi speaker. To maintain the anonymity of respondents, community sites are not named, all transcripts were de-identified, and participants were assigned unique identifiers and identified by their general roles [ 20 ]. Conduct of this study was approved by the Ashanti Regional Health Directorate of the Ghana Health Service following ethical approval from the University of Otago Ethics Review Committee, with clearance number 20/002. Interviews were analysed and grouped into themes using Braun and Clarke’s six-phase guide to thematic analysis [ 21 , 22 ]. Transcripts were coded in NVivo Version 12 Plus [ 23 ]. In developing the key themes for analysis and interpretation, codes were first assigned to sub-themes and then grouped into major themes. The process also involved looking for consistent, co-occurring and overarching themes, including identifying different themes and sub-themes on the role of CHMCs and CHVs in improving the Ghanaian health system [ 24 ]. We duly documented the systematic process of collecting and analysing these research data and the processes and procedures involved in managing the data [ 20 ]. The second and third authors were involved in verifying the quality of data coding, reviewing interview transcripts to ensure data consistency, and questioning the analysis process to assess for bias in relation to any preconceived ideas that may have influenced the analysis. A detailed research diary was kept, noting all reflections, particularly after each interview or site visit.
Results Overall, across the six communities, 35 participants were interviewed. None of the invited participants declined to take part in the study. The participants included: traditional leaders ( n = 3); people with dual CHMC–CHV roles ( n = 11); residents/opinion leaders ( n = 4); assemblymen/assemblywomen ( n = 3); CHVs ( n = 3); community health nurses/public health officers/midwives ( n = 5); health administrators ( n = 2); district directors of health services ( n = 3); and a representative from the MOH ( n = 1) (Table 1 ). The gender distribution of participants was 70% male and 30% female. The largest group of participants (46%) had tertiary education, 11% had secondary education, 40% had junior high school education and 3% had no formal education. The average age of participants was 50 years (range 30–70 years). Of the 14 participants working as CHMC members and/or CHVs, 50% were aged 70 years or more. Except in one community, most CHMC members or CHVs were older than 50 years. The duration CHMC members or CHVs had served in their roles ranged from 2 to 29 years. The findings of the study were categorized into two sections. The first section focusses on how CHMC and CHV members are selected. Nominations came from community durbars; community-based organizations, social and religious groups; district assembly elected members; traditional leadership recommendations; direct contact; and imposition by people in authority. The second part of the paper presents findings about the roles of CHMCs and CHVs in health system improvement. These include resource mobilization; accountability; support to healthcare workers; development of community health action plans; and supporting health education and other community health activities. Selection of CHMCs and CHVs Despite the CHPS policy giving prominence to the role of CHMCs and CHVs in the Ghanaian health system, we found a lack of formal guidelines about how people should be selected for these roles. Unlike the regional and district health committees, whose memberships were provided for in Act 525 of 1996, the CHMCs/CHVs did not have such provisions. Interviews with participants from the District Directorate of Health Services (DDHS), which also has the responsibility of supervising healthcare activities in the communities, indicated there was no official document regulating the formation of the CHMCs or the selection of CHVs. As a result, the communities mostly managed their own selection of members for CHMC or CHV roles, as a district director of health services explained in an interview. The approaches used to select members of the CHMC and the CHVs also seemed critical to their successful functioning. Thus, the absence of selection guidelines for the CHMCs and CHVs created some inconsistencies and challenges that affected their operations. This study identified six ways the communities used to select CHMC members and CHVs, as presented below. Nominations at community durbars Community durbars were regularly used for selecting CHMC members and CHVs. Durbars provided an opportunity for nominated CHMC members and CHVs to be confirmed and supported by the entire community and for screening the nominees.
A participant shared their previous experience of selecting a CHV and CHMC member without the approval of the entire community. Nominations from community-based organizations, religious and social groups Participants also noted that members of the CHMC and CHVs were often drawn from community-based organizations (CBOs), religious organizations and other social groups. Although no documented criteria on the composition of the CHMCs and the CHVs were found, nominations from these groups were key to the formation of CHMCs, which required a mix of community groups to be represented. Participants described nominations from Christians, Muslims, CBOs and other community social groups, such as the market women, youth and farmers, as significant. These nominations, however, are not final, as some are contested by the public when subjected to broader community approval during a community durbar, as one CHV who also serves as a CHMC member explained. District assembly elected members Assembly members are elected representatives from the community or electoral area serving on the district assembly (local council). Each community mostly has one assembly member. We found that most assembly members were co-opted into the CHMC. Some participants noted that co-opting the assembly members into the CHMC was critical in ensuring they had a better appreciation of the health issues in the community. Whilst some participants were positive about this in their communities, others mentioned that assembly members on the CHMC had a less positive effect on PPE, as they were generally passive members. Nominations from traditional leadership The traditional leadership was also found to be key in the formation of CHMCs and the selection of CHVs. Although there were differences in practice among the six case sites, all CHMCs had at least one traditional council representative on the committee. The traditional representative was mostly a sub-chief who provided periodic feedback on the work of the CHMC and the CHVs to the traditional council, as one CHMC member who represented the traditional council elaborated. Direct contact Many participants also mentioned that some CHMC members and CHVs were nominated through direct contact. We found that most community members who were considered suitable for the work of the CHMC or CHVs did not usually volunteer to join. As a result, some were approached directly and persuaded to join. Participants explained that such persons were mostly people who already had the respect of the entire community and would be easily approved when their names were presented to the larger community, as one CHV and member of the CHMC commented. In addition, apart from identifying and contacting such persons directly, others were recommended by the existing members of the CHMC and CHVs. Imposition by people in authority Many participants from across all six study sites noted how the establishment of the CHMCs was initially characterized by members nominated by people of high authority, particularly local politicians. This was due to some initial expectation of financial rewards associated with being a CHMC member or CHV.
This type of nomination was identified as damaging the morale of many people who genuinely wanted to join the CHMC to improve the health system in the community. Some participants also alluded to how the formation of CHMCs was influenced by some powerful individuals in authority whose sole aim was to benefit from the CHPS donor project funding intended to strengthen PPE activities in Ghanaian communities. A participant stated that these CHMCs existed in ‘name only’ and collapsed soon after the funding ran out. Roles of CHMCs and CHVs in health system improvement We noted from this study that most engagement activities, whether at the regional, district or community level, were supposed to be implemented through health committees or volunteers. However, as indicated earlier, these committees were found to function mainly at the community level, despite statutory provisions for effective patient–public engagement (PPE) across all health system levels in Ghana. For instance, at the regional and district health system levels, the Regional Health Management Committee and the District Health Management Committee are expected, by law, to function adequately to improve the health system. However, results from our study revealed that the CHMCs were the only health committees that were functional, as explained in interviews with a district director of health services and a representative from the Ministry of Health. We found the CHMCs and CHVs played important roles in improving the health system across five key themes: (1) resource mobilization; (2) accountability; (3) support to health workers; (4) development of community health action plans (CHAPs); and (5) support for health education and other health activities. Resource mobilization Mobilizing resources for community health activities was mentioned by many participants as a core function of the CHMCs. Although making resources available for community health activities is primarily the duty of the government, CHMCs had taken this up to complement the government’s efforts. For instance, in one of the communities, a CHMC member who was also a traditional leader donated a piece of land for the construction of the community clinic. In addition, members of that CHMC manually dug the ground to lay pipes for treated water to reach the community’s clinic, and also provided 24-h security for the clinic, as one CHMC member described. In another site, community representatives on the hospital board were instrumental in lobbying renowned community members to donate an X-ray machine and other laboratory equipment to the hospital. An interview with the health administrator of the facility highlighted additional support received from the community representatives and their key roles in mobilizing resources for the hospital. Similar examples were found in other communities, where the CHMC led the community to mobilize funds to renovate existing buildings into new community clinics. In addition, the CHMC members organized durbars and embarked on door-to-door campaigns to mobilize funds from the community to purchase medicines for the clinic. This was given a further boost by another CHMC member who appealed to pharmaceutical companies to support the community’s clinic with various medicines for its operations. Similar examples were found across all the case sites.
A district director of health services, in an interview, summarized the importance of CHMCs, particularly in mobilizing resources to support community health activities. Accountability Accountability was one of the major roles of both the CHMCs and the CHVs. Among the key things mentioned was the implementation of a scorecard system, which empowered the CHMCs to provide quarterly feedback to the DDHS about services rendered to the community. Scorecards, although not implemented across all case sites, were found helpful in communities using them. Again, in some case sites, CHMCs and CHVs had a special day each week on which they met patients and the larger community for feedback on the quality of care received from the clinic, including the health workers’ attitude towards them. The CHMCs and the CHVs discussed this feedback with the health workers, and measures were instituted to improve the gaps in service delivery. However, despite these key accountability roles, many participants, especially the CHMC members and the CHVs, expressed much disappointment in the reluctance or refusal of the health workers to be financially accountable to the community representatives. Participants noted that the health workers had continuously resisted any form of financial accountability, despite the community being a major stakeholder in resourcing the operations of the clinic: We have been supporting this clinic all this while. As a matter of fact, this building was a [defunct company], and we lobbied for it and converted it into a clinic. Following that, we had to mobilize money in renovating it and the entire community was involved in offering communal labour to support this. So, I am not afraid to say it is our clinic. However, we want to know the financial situation of the clinic. It helps us to understand how the little revenues they make there are utilized. As it is now, we are kept in the dark on these issues, and I must say it is not the best. Maybe you can talk to them about it. We are not coming to take the money, but we need to know how the little they generate there is spent. It helps us to appreciate how we can financially support them, but the health workers will not agree to let us know (Queen Mother, CS 3). Support to healthcare workers The CHMCs were crucial in providing various forms of support to health workers. For instance, we found that CHMCs supported newly posted health workers in their communities with accommodation and food supplies, which was key to their initial integration and settlement into the community. As a result of this support, there was improved health worker retention in these communities, which were considered remote. While newly posted health workers may experience difficulty integrating into the new environment’s culture, the CHMC was reported to be significant in easing their stay: they helped the health workers understand the community’s way of life and the best way to live in peace and harmony. In addition, many participants acknowledged that the CHMCs and CHVs helped resolve conflicts between health workers and community members. The participants noted that the CHMCs and CHVs, through their roles, work closely with both the general community and healthcare workers.
This enabled them to play an intermediary role in resolving most conflicts and misunderstandings between community members and health workers. Development of community health action plans (CHAPs) The CHMCs were involved in the development of community health action plans (CHAPs), which offered communities an opportunity to be part of planning healthcare services and decisions. Participants opined that CHAPs mostly provided annual direction for community health activities. Despite the development of CHAPs being considered a major role for the CHMCs, only a few communities had a CHAP in place. Some participants were of the view that health workers, primarily from the DDHS offices, mostly led the development of CHAPs instead of community representatives. As a result, the final plans did not incorporate most community inputs. Support for health education and other community health activities Participants also reported various activities of the CHVs that contributed to supporting health education and community health. These included assisting community health nurses or public health nurses with home visits, outreach services and general health education, as well as following up on all non-attendees for their medical appointments. Additionally, a few participants indicated that the CHVs supported disease control officers in carrying out community disease surveillance and in reporting on diseases and other health issues in the community or a particular locality. Another important role of the CHVs was assisting communities in compiling and regularly updating community health registers and profiles. The register and profile made it easy to compile reports on the health status of the communities and to implement community-based health programmes. Lastly, with many village communities located in the hard-to-reach areas of the districts, CHVs were mostly trained to provide first aid for minor illnesses and injuries and to refer cases quickly to the appropriate community health nurse or clinic.
Discussion This study provides a detailed understanding of the significant role of CHMCs and CHVs in improving health and the overall health system in Ghana. Whilst we note that CHMCs and CHVs play an important role in improving the health system, their selection process is also crucial for effective functioning. We found that the processes for selecting CHMC members and CHVs were not provided in the CHPS policy, which formally introduced these roles to the Ghanaian healthcare system. The findings of this study, however, have highlighted a range of strategies that were key to the selection of CHMC members and CHVs. Firstly, we noted that community durbars were an important strategy for selecting (or objecting to) community health representatives. This study established that community health representatives selected through durbars had wider community support. This finding was consistent with earlier studies [ 25 – 27 ] conducted in other Sub-Saharan African countries, which similarly argued that community advisory board members selected through durbars had greater community support than those selected using different approaches. However, a study [ 28 , 29 ] conducted in Nepal, in which durbars were found to be a key social accountability tool, did not offer community members the opportunity to select or nominate their health committee members. Nominations of CHMC members/CHVs from various CBOs, religious and other social groups were also considered significant, as they helped achieve the right mix of representation on the committee and ensured a range of community groups was represented. Significantly, this helped provide equal opportunity for minority groups to be part of decision-making to improve the health system. In a scoping review [ 8 ] of PPE strategies in Sub-Saharan Africa, it was found that nominations from CBOs and religious groups were key to improving the quality of membership in the CHMCs and the CHVs. However, we found that such nominations also required larger community acceptance at a durbar. Other strategies employed in selecting members for the CHMC and CHV roles included nominations from the traditional leaders, co-opting elected members of the district assembly and directly approaching key individuals. Nomination by traditional leaders has been widely reported in the literature [ 11 , 30 ] as an important route into the CHMC. This study noted that incorporating traditional leadership in the CHMC increases its acceptability in the community and improves its ability to influence decision-making. Regarding the roles of the CHMCs and the CHVs, one of their recognized roles was mobilizing resources to support community healthcare activities. This study found that many communities have, through the leading efforts of the CHMCs and the CHVs, mobilized resources to construct new community health facilities and acquired new medical equipment to support healthcare delivery without government support. These efforts have significantly improved health and the healthcare system and are particularly crucial for a resource-poor country such as Ghana, which spends more than 80% of its healthcare budget allocation on personnel emoluments and less than 20% on equipment and infrastructure [ 31 ]. As a result, most district health directorates are not adequately resourced to support healthcare activities in their sub-districts and communities [ 32 ].
Therefore, the roles played by the voluntary CHMCs and the CHVs in mobilizing resources to support healthcare activities in the communities are considered significant. Although other studies [ 28 , 33 , 34 ] found that many countries restricted the roles of these health committees to resource mobilization, we found that CHMCs and CHVs in Ghana contributed in other ways towards improving the health system. In addition to resource mobilization, the CHVs and some CHMC members across all case sites were found to have also supported the health workers in delivering direct healthcare services to their community. With an unequal distribution of healthcare personnel in Ghana, particularly between rural and urban areas, most rural Ghanaian communities lack adequate health workers, especially community health nurses [ 31 , 35 ]. Therefore, the supportive role of the CHVs, which could be likened to China’s barefoot doctor scheme [ 36 ], was significant in compensating for the shortage of health workers in rural areas. This study found that in the difficult and hard-to-reach communities where there were few (or no) health workers, the CHVs were trained to adequately support the delivery of healthcare services. The CHMCs also provided other forms of support and incentives for health workers posted to their communities, mainly in the form of accommodation and food supplies. For example, in four communities, it was found that clinical nursing and medical students posted for community internships were mostly hosted by the communities through the CHMCs and the CHVs. This was reported to make the communities attractive to newly qualified health workers and contributed to their high retention. Studies conducted in Malawi and Tanzania had similar findings [ 33 , 37 ], although the support reported there did not extend to providing accommodation or foodstuffs, as found in our study. Significantly, this community-level support improved rural health worker retention in the communities, complementing the Government of Ghana’s effort to improve rural health worker retention in the country [ 38 ]. Lastly, the CHMCs and the CHVs played crucial roles in resolving conflicts and misunderstandings between community members and health workers. For example, in two communities, there were key incidents in which the communities felt dissatisfied because health workers had disrespected their traditional systems of providing healthcare, and consequently the communities boycotted the services of the clinic. As noted by Halian et al., occasional reports of conflict between health workers and community members are not unusual [ 39 ]. However, the ability to resolve these conflicts promptly and effectively is significant. Therefore, as noted in this study, the timely role of the CHMCs and the CHVs in resolving these conflicts effectively sustained and improved health worker–community relationships. Other studies conducted in Ghana and Malawi reached similar conclusions [ 11 , 40 ].
Conclusions Overall, the findings of this study have highlighted the critical roles played by CHMCs and CHVs in improving the Ghanaian health system. While we recognize the significant role played by these community health representatives in health system improvement, we also note that their effectiveness can hinge on how members are selected. Thus, we recommend a workable guideline for the selection of CHMC members and CHVs, particularly to ensure the required mix of community groups is represented. Again, despite legislation and policies existing for the effective functioning of health committees across all levels of the Ghanaian health system, implementation has so far occurred at only the community level. This seems to have limited the overall effect of the health committees to only a few communities with well-functioning CHMCs or CHVs. We therefore recommend enforcing the existing legislation and policies that allow for lay involvement in decision-making across all levels of the health system to improve the wider health system. Finally, the findings of this study have shown that working through the CHMCs and CHVs is critical to improving resource availability for community health services, especially in resource-constrained countries such as Ghana. We found that using the CHMCs and CHVs in frontline community health activities also improved health worker retention and overall health outcomes. Thus, we suggest there is a need to actively strengthen PPE activities across all levels of the health system to deliver an improved health system.
Background In Ghana, the community-based health planning and services (CHPS) policy highlights the significance of both community health management committees (CHMCs) and community health volunteers (CHVs) in the Ghanaian health system. However, research into their specific effects on health system improvement is scarce. Some research has focussed on the roles of the CHMCs/CHVs in implementing specific targeted health interventions but not on improving the overall health system. Therefore, this research aims to examine the role of the CHMCs and CHVs in improving the Ghanaian health system. Methods The study was conducted in three districts in the Ashanti region of Ghana. A total of 35 participants, mainly health service users and health professionals, participated in the study. Data were collected using semi-structured individual in-depth interviews. Participants were selected according to their patient–public engagement or community health activity roles. Data were transcribed and analysed descriptively using NVivo 12 Plus. Results We found that the effectiveness of CHMCs and CHVs in health system improvement depends largely on how members are selected. Additionally, working through CHMCs and CHVs improves resource availability for community health services, and using them in frontline community health activities improves health outcomes. Conclusions Overall, we recommend that, for countries with limited healthcare resources such as Ghana, leveraging the significant role of the CHMCs and CHVs is key to complementing the government’s efforts to improve resource availability for healthcare services. Community health management committees and CHVs are key in providing basic support to communities with limited healthcare personnel. Thus, there is a need to strengthen their capacities to improve the overall health system. Keywords
Acknowledgements The authors acknowledge the University of Otago Doctoral Scholarship granted to the first author to undertake a PhD degree programme. The authors acknowledge all the research participants who took part in the study. Author contributions SEA conceived the study under the supervision of SD and AF. SEA, AF and SD developed and refined the methods for conducting the study. SEA collected the primary data. SEA conducted the initial data analysis. AF and SD supervised the data analysis process. SEA prepared the first draft. AF and SD reviewed the first draft. SEA, AF and SD interpreted the results and revised the manuscript. All authors read and approved the submission. Funding The study received no funding. However, the first author was supported by the University of Otago Doctoral Scholarship for his PhD studies. Availability of data and materials The data supporting the findings of this study are available from the corresponding author upon reasonable request. Declarations Ethics approval and consent to participate This study was approved by the University of Otago Ethics Review Committee (20/002). In addition, approval was also obtained from the Ashanti Regional Health Directorate of the Ghana Health Service. Written consent was received from each participant prior to data collection. Data collected were not linked to individual participants. Consent for publication Not applicable. Competing interests The authors declare no conflict of interest.
CC BY
no
2024-01-16 23:45:33
Health Res Policy Syst. 2024 Jan 15; 22:10
oa_package/3a/2d/PMC10789015.tar.gz
PMC10789017
38221612
Background Psychiatric disorders comprise a wide range of conditions and rank among the most important contributors to the global burden of disease. Depressive disorder, anxiety disorder, and schizophrenia are the top three specific psychiatric disorders in terms of disability-adjusted life-years (DALYs) [ 1 ]. In 2019, the estimated age-standardized prevalence of any psychiatric disorder was 11.7 cases per 100 individuals in men and 12.8 in women [ 1 ]. Women also have a higher age-standardized DALY rate for psychiatric disorders than men (1.7 vs. 1.4 per 100 individuals). Psychiatric disorders have steadily risen in the ranking of leading causes of DALYs, from 13th in 1990 to 7th in 2019 [ 1 ]. Receiving a diagnosis of a psychiatric condition is associated with a greater propensity towards suicide and self-harm [ 2 ]. In recent years, there has been increasing awareness of the impact of the environment on an individual’s health. Green space, blue space, and the natural environment typically refer to open space for greening or leisure; rivers, lakes, or seas; and residential non-built space, respectively. There is accumulating evidence from cross-sectional and prospective studies that exposure to green space, blue space, and the natural environment (GBN) has beneficial effects on health, especially for individuals living with certain chronic diseases such as cardiorespiratory diseases [ 3 ], type 2 diabetes [ 4 ], chronic kidney disease [ 5 ], and inflammatory bowel disease [ 6 ]. By contrast, the association between GBN and psychiatric disorders remains less well defined. Previous studies have reported a protective effect of green space on the risk of dementia [ 7 , 8 ], depression [ 9 – 13 ], anxiety [ 11 , 14 ], and other mental health issues [ 15 ], while other studies reported null associations [ 16 – 18 ]. Inconsistent results have also been shown for the associations of blue space with depression [ 12 , 13 , 18 ] and anxiety [ 12 , 14 , 19 ], respectively. In addition, a series of meta-analyses suggested that exposure to the natural environment could decrease the risk of depression [ 20 ] and anxiety [ 21 ]. However, few studies have examined the effect of GBN on specific psychiatric disorders, especially psychotic disorders. Moreover, most of the aforementioned studies have been cross-sectional and have only been able to explore the association between a single GBN component and risk of certain psychiatric disorders. To date, there is limited prospective evidence on the relationship between exposure to GBN and incident psychiatric events. In the current study, we aimed to explore the association between exposure to residential GBN and any or specific psychiatric disorders among middle-aged and older adults in the UK Biobank (UKB), a prospective cohort study of more than half a million adults. We also sought to examine whether there are subgroups of the population who might derive particular benefit from exposure to GBN.
Methods Study design and participants Data were derived from the UKB, an ongoing prospective cohort study [ 22 ]. More than 500,000 participants (aged 37–73 years) were recruited at baseline (2006–2010) from 22 research centers across the UK (England, Wales and Scotland). More details about the locations are available at https://biobank.ndph.ox.ac.uk/showcase/exinfo.cgi?src=UKB_centres_map . After providing electronic consent for the use of de-identified data, every participant completed a self-administered touch-screen questionnaire and a computer-assisted interview. Participants also consented to a range of physical measures as well as sampling assays and genotyping [ 22 ]. For this study, we excluded participants with a recorded history at baseline (based on the date of diagnosis) of any psychiatric disorder, as well as individuals with missing data for GBN at study baseline. Follow-up of health-related outcomes was achieved by linkage to national hospital, primary care, death register, and other record systems. A total of 363,047 participants were included in the analysis (Additional file 1 : Fig. S1).
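This excerpt does not detail how residential GBN exposure was quantified; the results refer to percentage coverage within 300 m and 1000 m buffers. As a generic illustration only — not the UKB pipeline — the following sketch computes percentage green space coverage within circular residential buffers using geopandas. The toy geometries and column names are invented for the example.

```python
import geopandas as gpd
from shapely.geometry import Point, box

# Toy data in a metric CRS (EPSG:27700, British National Grid): two residences
# and one rectangular green space. A real analysis would read national
# land-cover polygons instead of constructing geometries inline.
homes = gpd.GeoDataFrame(
    {"id": [1, 2]},
    geometry=[Point(0, 0), Point(2000, 0)],
    crs="EPSG:27700",
)
green = gpd.GeoDataFrame(geometry=[box(-500, -500, 150, 500)], crs="EPSG:27700")

green_union = green.geometry.unary_union  # merge all green polygons
for radius in (300, 1000):
    buffers = homes.geometry.buffer(radius)  # circular buffer in metres
    # Percentage of each buffer's area covered by green space.
    homes[f"green_pct_{radius}m"] = (
        buffers.intersection(green_union).area / buffers.area * 100
    )
print(homes[["green_pct_300m", "green_pct_1000m"]])
```

Working in a projected, metre-based CRS matters here: buffering in geographic coordinates (degrees) would distort the 300 m and 1000 m radii.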
Results At baseline, 53.4% of participants identified as women, and the mean age of all participants was 56.7 (± 8.1) years (Table 1 ). During the average follow-up of 11.5 (± 2.8) years, the incidence rate of any psychiatric disorder was 11.48 per 1000 person-years in women and 12.45 in men. Individuals diagnosed with any psychiatric disorder were more likely to be men, to have chronic health conditions and suboptimal lifestyle behaviors, and to have a lower SES compared with other groups in the cohort. As shown in Table 2 , there were protective associations of blue space (the third tertile) [300 m buffer, HR: 0.973, 95% CI: 0.952–0.994] and the natural environment (the third tertile) [300 m buffer, HR: 0.970, 95% CI: 0.948–0.992; 1000 m buffer, HR: 0.975, 95% CI: 0.952–0.999] with any psychiatric disorder. Similar associations were also found when the exposures were included in the respective models on an ordinal scale. However, no statistically significant associations with any psychiatric disorder were observed for green space at 300 m or 1000 m buffer or for blue space at 1000 m buffer (the third tertile). The strength of the associations varied across psychiatric disorders (Fig. 1 and Additional file 1 : Table S3), with the strongest protective effect of GBN exposure observed for psychotic disorders: the third tertile of green space at both 300 m (HR: 0.700, 95% CI: 0.555–0.884) and 1000 m buffer (HR: 0.682, 95% CI: 0.532–0.874) was associated with an approximately 30% lower risk of psychotic disorders, and the natural environment at 300 m (HR: 0.783, 95% CI: 0.620–0.988) and 1000 m buffer (HR: 0.697, 95% CI: 0.542–0.896) with approximately 20% and 30% lower risk, respectively. Compared with the first tertile of exposure, the risk of incident dementia decreased with exposure to the third tertile of green space at 300 m (HR: 0.905, 95% CI: 0.840–0.976) and 1000 m buffer (HR: 0.901, 95% CI: 0.834–0.973) and to the natural environment at 1000 m buffer (HR: 0.922, 95% CI: 0.853–0.997). The natural environment at 300 m (HR: 0.939, 95% CI: 0.906–0.974) and 1000 m buffer (HR: 0.952, 95% CI: 0.917–0.989) was statistically associated with a decreased risk of substance abuse. There was a reduction in the risk of incident anxiety among those exposed to the third tertile of green space (HR: 0.951, 95% CI: 0.910–0.994) and the natural environment (HR: 0.955, 95% CI: 0.913–0.999) at 1000 m buffer, and to the second and third tertiles of blue space at 1000 m buffer. We did not observe a significant effect of GBN on incident depression. Using an ordinal rather than a categorical (tertile) scale of GBN did not materially influence the results. Stratified analyses indicated that the protective associations of green space at 300 m and 1000 m buffer, blue space at 300 m buffer, and the natural environment at 300 m and 1000 m buffer with psychiatric disorders were stronger among those aged ≥ 65 years than among younger individuals (Figs. 2 and 3 ). There was also some evidence that the effects of green space and the natural environment were stronger in men than in women. Similarly, stronger effects of GBN on incident psychiatric disorders were observed among individuals with a history of cigarette smoking and among those with hypertension or type 2 diabetes.
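To illustrate how per-tertile hazard ratios like those reported above are typically estimated, here is a minimal sketch using synthetic data and the lifelines library. It is not the study's actual code; all variable names and the toy data-generating process are invented, and only two of the study's many covariates are included.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "green_pct_300m": rng.uniform(0, 100, n),  # % green space in 300 m buffer
    "age": rng.integers(37, 74, n),
    "sex": rng.integers(0, 2, n),
})
# Toy outcome: higher coverage gives slightly longer time to diagnosis.
df["years"] = rng.exponential(10 + 0.02 * df["green_pct_300m"])
df["event"] = rng.integers(0, 2, n)            # incident psychiatric disorder

# Crude incidence rate per 1000 person-years, the scale used in the text.
print(df["event"].sum() / df["years"].sum() * 1000)

# Cut the exposure into tertiles and code indicators so each upper tertile
# is compared with the lowest (reference) tertile, mirroring the reported HRs.
df["tertile"] = pd.qcut(df["green_pct_300m"], 3, labels=False)  # 0, 1, 2
dummies = pd.get_dummies(df["tertile"], prefix="T", drop_first=True).astype(float)
model_df = pd.concat([df[["years", "event", "age", "sex"]], dummies], axis=1)

cph = CoxPHFitter()
# A stratified sensitivity analysis could instead pass, e.g.,
# strata=["assessment_center"] to cph.fit() to stratify the baseline hazard.
cph.fit(model_df, duration_col="years", event_col="event")
print(cph.summary["exp(coef)"])  # hazard ratios for T_1 and T_2 vs the first tertile
```

Fitting the same model with the raw tertile index (0, 1, 2) as a single covariate corresponds to the ordinal-scale analysis mentioned in the results.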
Results of the sensitivity analyses indicated that the associations between GBN and any psychiatric disorder did not materially change after adjusting for PM10, noise pollution, time spent outdoors, and other variables (Additional file 1 : Tables S4–S6), or when using different cut-offs for GBN (Additional file 1 : Table S7).
Discussion This is the largest longitudinal study to explore the prospective associations of GBN with any or specific psychiatric disorders in middle-aged and older adults. Overall, there was evidence of a weak, independent protective effect of exposure to GBN on the risk of psychiatric disorders, with the strongest association observed for psychotic disorders. The relationship was robust after adjusting for the potential confounding effects of, among other factors, noise and air pollution. In contrast, there was no evidence that exposure to GBN was associated with incident depression. The independent protective effect of GBN on the risk of incident psychiatric disorders was stronger in specific population subgroups, namely those aged ≥ 65 years, men, and those with pre-existing comorbidities. A previous cross-sectional study from China indicated that living near green space was negatively associated with psychiatric symptoms [ 42 ], whereas studies from Europe and the USA reported no significant association with questionnaire-based psychiatric symptoms [ 43 – 45 ] or diagnoses of psychiatric disorders [ 16 , 46 ]. Consistent with the latter studies, our analysis showed no beneficial effect of higher green space coverage on the risk of any psychiatric disorder. Specifically, previous observational studies reported that green space was associated with a lower risk of depression or depressive symptoms [ 9 – 13 , 47 ]. However, the findings from our current study, as well as others [ 17 , 18 ], did not support a relationship between green space and risk of depression. These disparities may be due to differences in study design and in how a diagnosis of a psychiatric condition was made [ 10 – 13 ]. Between-study differences in the buffer size or index used for green space may also have contributed to the lack of consistency in findings [ 9 , 11 , 47 ]. In line with most previous studies, this study showed a beneficial effect of green space on incident anxiety [ 11 , 14 ] and dementia [ 7 , 8 ]. To our knowledge, this is the first study to report a possible protective effect of greater green space coverage on the subsequent risk of psychotic disorders, for example, schizophrenia, schizotypal disorders, or schizoaffective disorders. Further studies are warranted to validate these findings in different populations and to understand the potential mechanistic pathways that may underpin the association. In line with previous evidence from observational studies [ 16 , 19 , 48 , 49 ], this study also showed that blue space coverage was statistically associated with a decreased risk of any psychiatric disorder, consistent with findings from a systematic review [ 50 ]. Although a few studies have reported benefits of blue space coverage for specific psychiatric disorders, for example depression [ 12 , 13 , 18 ] and anxiety [ 12 , 14 , 19 ], the findings were inconsistent and showed considerable heterogeneity. We examined the associations of blue space with five specific psychiatric disorders and only detected significant associations of blue space at 1000 m buffer with incident anxiety. Nonetheless, more experimental studies are needed to confirm these associations. Evidence from longitudinal studies with long-term follow-up regarding the potential relationship between the total natural environment and psychiatric disorders is limited.
Findings from our study suggested that the natural environment within a 1000-m buffer could lower an individual's susceptibility to developing a psychiatric disorder. A recent systematic review and meta-analysis involving 33 studies reported that short-term exposure to the natural environment had a small protective effect on depressive mood [ 20 ]. Findings from our study were in agreement with this review, in that exposure to the natural environment at the 1000 m buffer was mildly protective against incident depression (HR = 0.955 for the third tertile). Furthermore, Zhang et al. performed a systematic review and found that exposure to the natural environment could alleviate anxiety [ 21 ], which is also in agreement with our results. Epidemiological data have consistently demonstrated that GBN is related to a decreased risk of chronic disease [ 3 – 6 , 51 ]. The physiological and behavioral mechanisms underpinning this relationship may also have a mediating role in the association between GBN and risk of psychiatric disorders: greater exposure to GBN might promote less sedentary behavior and more physical activity, which could subsequently improve mental health [ 52 ]. Additionally, a study from China reported that lower green space was associated with lower sleep quality, which is itself a risk factor for psychiatric disorders [ 53 ]. Our current study found that the protective effect of GBN on psychiatric disorders was stronger among those aged ≥ 65 years than among younger individuals. This age effect may reflect the retirement status of older individuals (especially men), who as a result would spend more time at home than those still in the paid workforce. Alternatively, older individuals are likely to have more complex comorbid conditions (e.g., hypertension and diabetes), which may mediate the link between exposure to GBN and risk of psychiatric disorders. Our findings that men, in particular, may benefit more from exposure to GBN than women are congruent with previous studies from the UKB. In those studies, the beneficial effect of exposure to green or/and blue space on inflammatory bowel diseases [ 6 ] and on cardiovascular disease and respiratory disease mortality rates [ 54 ] was stronger in men than in women. These sex differences may reflect the higher prevalence of suboptimal lifestyle behaviors in men compared with women, e.g., current smoking, alcohol consumption, poor diet, and low levels of physical activity [ 55 ]. Although we adjusted for these and other potential confounders, residual confounding is likely to have remained. It should be noted, however, that when we stratified by level of physical activity, the results were not materially different. Rather than relying on self-report measures of physical activity, future studies that include device-measured physical activity are likely to be more informative. In particular, using wearable devices to track activity levels would enable adjustment for the amount and intensity of physical activity while exposed to GBN, something we were unable to address in the current study. Strengths and limitations Several strengths of this study should be mentioned. First, the UKB is a large-scale, well-characterized, population-based cohort study with information on many variables. This enabled us to adjust for potential confounders of the relationship between exposure to GBN and psychiatric disorders.
We were also able to undertake sensitivity analyses, for example, those involving noise and air pollution data. Although the effect sizes were relatively small, the findings of the current study are robust. Second, the long duration of follow-up in the UKB is unique and enabled us to examine the long-term effects of GBN on psychiatric disorders. Finally, the current study not only reported on the prospective associations with any psychiatric disorder but also examined important specific psychiatric disorders. Some limitations should also be mentioned when interpreting the results of this study. First, we only captured data on GBN at the 300 m and 1000 m buffers; more detailed data at a 100 m buffer were not available. Moreover, urbanicity, a potential confounder of the association between GBN and psychiatric disorders, had not been updated to the study period in the UKB and hence could not be adjusted for in the analysis. Furthermore, although sensitivity analyses that adjusted for outdoor time showed no material impact on the main findings, there may have been information bias, namely misclassification bias, when dividing the participants into different groups of exposure to GBN, especially given the uncertain physical and/or visual accessibility of GBN and the actual amount of exposure to GBN individuals received. Additionally, exposures were only assessed at baseline (2006–2010), which may have led to misclassification if an individual changed residential location or if factors relating to socioeconomic status or lifestyle behaviors changed (e.g., changing physical activity levels). Although we included a wide range of potential confounders, residual confounding by unmeasured factors such as social support, job-related stress, or other neighborhood-level variables is likely to have remained. Finally, the UKB had a low recruitment rate (5%) and a restricted age range of 37 to 73 years [ 22 ], which may have introduced some selection bias and therefore limits the generalizability of the conclusions to the general UK population.
Conclusions In summary, greater exposure to GBN was associated with a decreased risk of any or specific psychiatric disorders in middle-aged and older adults. There was evidence that the effects may be greater among older individuals, men, and those with pre-existing conditions. Further studies are warranted to more fully investigate the social, biological, and physiological interplay between the environment and an individual's mental health.
Background There is increasing evidence for the influence of environmental factors and exposure to the natural environment on a wide range of health outcomes. Whether exposure to green space, blue space, and the natural environment (GBN) is associated with risk of psychiatric disorders in middle-aged and older adults has not been prospectively examined. Methods Longitudinal data from the UK Biobank were used. At the study baseline (2006–2010), 363,047 participants (women: 53.4%; mean age 56.7 ± 8.1 years) who had not been previously diagnosed with any psychiatric disorder were included. Follow-up was achieved by collecting records from hospitals and death registers. Measurements of green and blue space modeled from land use data and of the natural environment from the Land Cover Map were assigned to the residential address of each participant. Cox proportional hazard models with adjustment for potential confounders were used to explore the longitudinal associations of GBN with any psychiatric disorder and with specific psychiatric disorders (dementia, substance abuse, psychotic disorder, depression, and anxiety) in middle-aged and older adults. Results During an average follow-up of 11.5 ± 2.8 years, 49,865 individuals were diagnosed with psychiatric disorders. Compared with the first tertile (lowest) of exposure, blue space at the 300 m buffer [hazard ratio (HR): 0.973, 95% confidence interval (CI): 0.952–0.994] and the natural environment at the 300 m buffer (HR: 0.970, 95% CI: 0.948–0.992) and at the 1000 m buffer (HR: 0.975, 95% CI: 0.952–0.999) in the third tertile (highest) were significantly associated with a lower risk of incident psychiatric disorders, respectively. The risk of incident dementia was significantly decreased for those exposed to the third tertile (highest) of green space and of the natural environment at the 1000 m buffer. The third tertile (highest) of green space at the 300 m and 1000 m buffers and of the natural environment at the 300 m and 1000 m buffers was associated with a reduction of 30.0%, 31.8%, 21.7%, and 30.3% in the risk of developing a psychotic disorder, respectively. Subgroup analysis suggested that the elderly, men, and those living with some comorbid conditions may derive greater benefit from exposure to GBN. Conclusions This study suggests that GBN has significant benefits for lowering the risk of psychiatric disorders in middle-aged and older adults. Future studies are warranted to validate these findings and to understand the potential mechanistic pathways underpinning these novel findings. Supplementary Information The online version contains supplementary material available at 10.1186/s12916-023-03239-1.
Measures Exposures In consideration of existing evidence on the associations between GBN density and health outcomes and of relevant public policy [ 23 – 25 ], the percentage of GBN assigned to the 300 m and 1000 m buffers around each residential location [ 6 , 26 ] was used to estimate an individual's combined GBN exposure (which took into account their residential location as well as the GBN in the wider area relative to the residential location) [ 27 , 28 ]. The percentages of residential green space and blue space, classed as "greenspace" and "water," respectively, were calculated as proportions of all land-use types. In line with previous UKB studies exploring the health effects of GBN [ 29 , 30 ], data on green and blue space were collected from the 2005 Generalized Land Use Database (GLUD) for England [ 26 ]. GLUD, which was obtained from Neighborhood Statistics ( http://www.neighbourhood.statistics.gov.uk/ ), provided data on land use distribution for 2001 Census Output Areas in England. Data on the distribution of the natural environment were collected from the Land Cover Map (LCM) 2007 (25 m × 25 m) [ 31 ]. The LCM 2007 product was obtained from the Centre for Ecology and Hydrology (CEH) [ 32 ] and included 23 land cover classes, with Classes 1–21 reclassified as natural environment. Notably, Classes 22–23 included buildings and gardens, which differed from the definition used in the GLUD measure. The definition of the natural environment therefore partially overlapped with green and blue space in this study. Participants residing outside England were excluded owing to the restricted availability of GLUD data. More details are available on the UKB website: https://biobank.ndph.ox.ac.uk/showcase/field.cgi?id=24507 . Outcomes The diagnoses of any or specific psychiatric disorders were obtained from the "first occurrence fields" provided by the UKB (data category: 2409), which included data from primary care, hospital inpatient records, self-reported medical conditions, and death registers [ 22 , 33 ]. Any psychiatric disorder (F00-F99) was coded using the International Classification of Disease, 10th version (ICD-10) [ 34 ]. Considering their higher prevalence in the general population, this study also examined certain specific psychiatric disorders, including dementia (F00-F03), substance abuse (F10-F19), psychotic disorder (F20-F29), depression (F32-F33), and anxiety (F40-F41) [ 1 , 34 , 35 ]. More details about the outcomes are provided in Additional file 1 : Table S1. Participants entered the cohort at the date of recruitment and exited at the date of death, the first occurrence of an outcome, or censoring, whichever came first. The date of an outcome was taken as the date of diagnosis of a psychiatric disorder, obtained from the first occurrence fields. Covariates The covariates in this study were selected by reviewing previous studies related to psychiatric disorders [ 36 , 37 ] and included age, sex, ethnicity, socioeconomic status (SES), body mass index (BMI), household income before tax per year, education group, smoking status, alcohol drinker status, physical activity, history of hypertension, and type 2 diabetes (T2D). The category of physical activity (UKB data field: 22032) was derived from the Metabolic Equivalent Task (MET) score based on the International Physical Activity Questionnaire (IPAQ) guidelines [ 38 ]. SES was measured by the Townsend area deprivation index [ 39 ].
A higher score indicated greater socioeconomic deprivation (poorer SES), and quartiles of the score were used in the analyses. The diagnoses of hypertension and T2D were also obtained from the first occurrence fields. The data fields and definitions of the other covariates are shown in Additional file 1 : Table S2. Statistical analysis Multivariate imputation by chained equations (MICE) [ 40 ] was used to impute missing values for variables with less than 5% missingness. Missing values for income (15.8%) and physical activity (19.7%) were treated as a separate category in the models. Initially, two series of Cox proportional hazard models were fitted to explore the associations of GBN with any and with specific psychiatric disorders, respectively. Model 1 was adjusted for age and sex, and model 2 was additionally adjusted for ethnicity, SES, BMI, household income before tax per year, education group, smoking status, alcohol drinker status, physical activity, hypertension, and T2D. To account for potential collinearity, the GBN measures were entered into separate adjusted models. Tertiles of the exposures were used as cut-offs, with the first tertile (the lowest) set as the reference group. An ordinal score based on the tertiles was also used to test for a linear trend in the association between exposure and psychiatric disorders. There were no obvious violations of the proportional hazards assumption for the exposures of interest. Stratified analyses by age (less than 65 years vs. 65 years or above), sex (female vs. male), ethnicity (white vs. non-white), SES (good, the first two quartiles vs. poor, the last two quartiles), BMI (normal or underweight vs. overweight), income (low, < £52,000 vs. high, ≥ £52,000), education (college or university vs. others), smoking status (never vs. previous/current), alcohol drinking (less than once per week vs. once per week or above), physical activity (high vs. low or moderate), and history of hypertension and T2D at baseline were performed to examine whether the effects of the exposures of interest on incident psychiatric disorders differed across subgroups. A Z test was used to compare the estimates between subgroups, as recommended by Altman et al. [ 41 ]. We performed several sensitivity analyses to verify the robustness of the main findings. First, we omitted participants diagnosed with any psychiatric disorder during the first 2 years of follow-up to account for possible reverse causality. Second, participants with missing values for the covariates (< 5% of participants) were excluded from the analyses. Third, considering the potential joint effects of the exposures and air or noise pollution on psychiatric disorders, we separately added air pollution [particulate matter (PM10)] and noise pollution (annual average of 24-h noise) to the models. PM10 and noise pollution were estimated by a land use regression (LUR) model developed as part of the European Study of Cohorts for Air Pollution Effects (ESCAPE, http://www.escapeproject.eu/ ). Fourth, outdoor time in winter and autumn and a history of consultation with a psychiatrist or general practitioner (GP) were further adjusted for in the models to explore their respective effects on the associations. Finally, a range of percentile classifications of GBN exposure was used to test the robustness of the findings: two groups (≤ 50 and > 50th percentile); three groups (≤ 20, > 20 to ≤ 80, and > 80th percentile); and four groups (≤ 25, > 25 to ≤ 50, > 50 to ≤ 75, and > 75th percentile).
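For illustration, the tertile-based Cox models, the trend test, and the subgroup comparison described above can be sketched in R with the survival package. This is a minimal sketch, not the study's actual code; the data frame and variable names (dat, gbn_300m, time, status, and the covariates) are hypothetical placeholders.

library(survival)

# Hypothetical analysis data frame `dat` with follow-up time (`time`),
# event indicator (`status`), and exposure (`gbn_300m`).
q <- quantile(dat$gbn_300m, probs = c(0, 1/3, 2/3, 1))
dat$gbn_t <- cut(dat$gbn_300m, breaks = q, include.lowest = TRUE,
                 labels = c("T1", "T2", "T3"))

# Fully adjusted model (model 2); the first (lowest) tertile is the reference
fit <- coxph(Surv(time, status) ~ gbn_t + age + sex + ethnicity + ses + bmi +
               income + education + smoking + alcohol + physical_activity +
               hypertension + t2d, data = dat)
summary(fit)   # HRs with 95% CIs for T2 and T3 versus T1
cox.zph(fit)   # check of the proportional hazards assumption

# Trend test: enter the tertile as an ordinal score
dat$gbn_score <- as.numeric(dat$gbn_t)
fit_trend <- update(fit, . ~ . - gbn_t + gbn_score)

# Subgroup estimates can be compared with the Z test of Altman and Bland:
# z = (b1 - b2) / sqrt(se1^2 + se2^2), with b1, b2 the log hazard ratios
z <- (log(0.90) - log(0.97)) / sqrt(0.03^2 + 0.02^2)   # hypothetical values
2 * pnorm(-abs(z))                                     # two-sided p-value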
All statistical analyses were performed using R v4.1.2 software, with the "mice" package used for imputation. Supplementary Information
Abbreviations GBN: Green space, blue space, and the natural environment; HR: Hazard ratio; CI: Confidence interval; DALY: Disability-adjusted life-years; UKB: UK Biobank; GLUD: Generalized Land Use Database; CEH: Centre for Ecology and Hydrology; LCM: Land Cover Map; ICD: International Classification of Disease; SES: Socioeconomic status; BMI: Body mass index; T2D: Type 2 diabetes; MET: Metabolic Equivalent Task; IPAQ: International Physical Activity Questionnaire; MICE: Multivariate imputation by chained equations; PM: Particulate matter; ESCAPE: European Study of Cohorts for Air Pollution Effects; GP: General practitioner. Acknowledgements The authors would like to thank the participants of the UK Biobank. This research has been conducted using the UK Biobank Resource under Application Number 91536. Authors' contributions BPL, QZ, and CXJ conceptualized the idea for the manuscript. BPL and QZ contributed to the methods for the paper. BPL, RH, TS, and KJH drafted the manuscript under the supervision of QZ and CXJ. All authors read and approved the final manuscript. BPL, QZ, and CXJ are guarantors for this study. The corresponding authors attest that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. Funding Dr. Liu was supported by the National Natural Science Foundation of China (NSFC) [No: 82103954] and the Shandong Provincial Natural Science Foundation in China [No: ZR2021QH310]. Dr. Zhao receives grant funding from the Shandong Provincial Natural Science Foundation in China (No: ZR2021QH318) and the Shandong Excellent Young Scientists Fund Program (Overseas) (No: 2022HWYQ-055). Dr. Jia was supported by the National Natural Science Foundation of China (NSFC) (No: 81761128033). Availability of data and materials UK Biobank data can be obtained on application from https://www.ukbiobank.ac.uk/enable-your-research/apply-for-access . Declarations Ethics approval and consent to participate UK Biobank received ethical approval from the Northwest Multi-center Research Ethics Committee (MREC reference: 21/NW/0157). All participants gave written informed consent before enrolment in the study, which was conducted following the principles of the Declaration of Helsinki. More details about ethical approval can be found on the UKB website: https://www.ukbiobank.ac.uk/learn-more-about-uk-biobank/about-us/ethics Consent for publication Not required. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Med. 2024 Jan 15; 22:15
oa_package/80/90/PMC10789017.tar.gz
PMC10789018
0
Background The COVID-19 pandemic posed a considerable global challenge. The pandemic resulted in up to 6.9 million deaths (as of November 2023, according to data from the World Health Organization) [ 1 ] and placed substantial burdens on health-care systems worldwide. Individuals with certain pre-existing conditions are particularly susceptible to COVID-19, and the relationship between COVID-19 and cancer has become a crucial and concerning issue. Patients with cancer are more susceptible to severe illness from COVID-19 than are those without cancer, which may be due to the presence of concurrent comorbidities, the inherent immunosuppressive characteristics of cancer, and the immunosuppression induced by systemic cancer treatments [ 2 ], with mortality rates as high as 25% being reported for patients with solid organ malignancies [ 3 ]. Respiratory failure is a severe complication of COVID-19 that typically occurs approximately 1 week after the onset of symptoms and is usually accompanied by thrombosis and acute renal failure [ 4 ]. Treatment strategies for COVID-19-related respiratory failure are similar to those established for acute respiratory distress syndrome (ARDS) [ 5 ] and include oxygen therapy; lung-protective ventilation; prone positioning; supportive care; and administration of specific medications, such as corticosteroids, antiviral agents, immunomodulators, and anticoagulants [ 4 – 6 ]. Treatment of COVID-19-related respiratory failure among patients with cancer requires a multidisciplinary approach. The risk of death from COVID-19 among cancer patients is influenced by age, male sex, performance status, comorbidities, and hematological malignancies [ 7 – 10 ]. Whether recent cancer treatment influences survival remains controversial [ 2 , 11 , 12 ]. Understanding the factors that increase the risk of death from COVID-19 is crucial for optimizing patient management and improving outcomes. This study aimed to compare the characteristics and outcomes of COVID-19-related acute respiratory failure between patients with and without underlying cancer and to analyze the factors influencing in-hospital survival among patients with cancer.
Methods This retrospective observational study was conducted at Taipei Veterans General Hospital, a tertiary medical center in Taiwan, between May and September 2022. During this period, the omicron variant of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was circulating in Taiwan. Patients were included in this study if they were infected with SARS-CoV-2 and experienced acute respiratory failure, defined as requiring high-flow nasal cannula (HFNC), noninvasive ventilation (NIV), or mechanical ventilation (MV). SARS-CoV-2 infection was confirmed through reverse transcription polymerase chain reaction (RT-PCR) using the Roche Cobas 6800 system (Roche Diagnostics, Rotkreuz, Switzerland). Electronic medical records were reviewed to collect clinical information. Patients with advanced-stage or metastatic cancer and those whose cancer was not in remission were classified as having cancer. Demographic data, including age, sex, body mass index (BMI), smoking and vaccination history, underlying diseases, do not resuscitate (DNR) code status, and laboratory results on admission, were also obtained. Disease severity was assessed on the day of respiratory failure using the sequential organ failure assessment (SOFA) score; the mean arterial pressure (MAP) score, derived from the cardiovascular component of the SOFA score and accounting for the administration of vasoactive agents, rated as 0 (no hypotension), 1 (mean arterial pressure < 70 mmHg), 2 (dopamine ≤ 5 mcg/kg/min or any dose of dobutamine), 3 (dopamine > 5 mcg/kg/min, epinephrine ≤ 0.1 mcg/kg/min, or norepinephrine ≤ 0.1 mcg/kg/min), or 4 (dopamine > 15 mcg/kg/min, epinephrine > 0.1 mcg/kg/min, or norepinephrine > 0.1 mcg/kg/min) [ 13 ]; the Acute Physiologic Assessment and Chronic Health Evaluation (APACHE) II score [ 14 ]; the Glasgow coma scale [ 15 ]; vasopressor use; and the PaO 2 /FiO 2 ratio (the ratio of arterial oxygen partial pressure [PaO 2 in mmHg] to fractional inspired oxygen) [ 16 ]. Treatment information, including the use of corticosteroids, tocilizumab, remdesivir, nirmatrelvir/ritonavir, molnupiravir, and enoxaparin; surgery; and new renal replacement therapy during admission, was also reviewed. Cytomegalovirus (CMV) infection, gastrointestinal bleeding, and thromboembolism were recorded as disease-related complications. Clinical courses and outcomes, such as the use of MV and extracorporeal membrane oxygenation (ECMO), in-hospital mortality, and the duration from the onset of symptoms until the day the cycle threshold (Ct) value exceeded 30, were also recorded [ 17 ]. Studies have revealed that a Ct value of 30 or higher indicates a non-infectious state, with no virus isolated from culture [ 18 ]. In addition, a Ct value of 30 or higher is the threshold for release from isolation set by the Taiwan Centers for Disease Control [ 19 ]. Statistical analysis The baseline characteristics were summarized using descriptive statistics, and continuous variables were expressed as medians and interquartile ranges. The Mann-Whitney U test was employed to assess differences in distribution between two independent groups for non-normally distributed continuous variables. Pearson's chi-square test or Fisher's exact test was used to examine variations in the distribution of categorical variables across groups. In-hospital survival time and time to reach Ct > 30 among the patients with and without cancer were plotted using the Kaplan-Meier method and compared using a log-rank test.
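As a minimal illustration of this survival comparison, written in R with the survival package (an assumed re-expression rather than the authors' code; dat, time, death, and cancer are hypothetical names):

library(survival)

# Kaplan-Meier estimates of in-hospital survival by cancer status
km <- survfit(Surv(time, death) ~ cancer, data = dat)
plot(km, col = c("black", "red"),
     xlab = "Days since respiratory failure", ylab = "Survival probability")

# Log-rank test comparing patients with and without cancer
survdiff(Surv(time, death) ~ cancer, data = dat)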
Cox proportional hazard models were used to assess the factors associated with in-hospital mortality, and factors with P < 0.1 in univariable analysis were incorporated into multivariable analysis. Statistical significance was indicated by P < 0.05. Statistical analyses were performed using IBM SPSS Statistics, version 26.0 (IBM, Armonk, NY, USA).
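The two-step model-building strategy can be sketched as follows, again as an R illustration even though the analysis itself was performed in SPSS; the candidate variable names are hypothetical:

library(survival)

candidates <- c("smoking", "wbc", "ferritin", "ldh", "lactate",
                "d_dimer", "vasopressor_use", "new_rrt")

# Step 1: univariable Cox models; retain factors with P < 0.1
keep <- vapply(candidates, function(v) {
  fit <- coxph(as.formula(paste("Surv(time, death) ~", v)), data = dat)
  summary(fit)$coefficients[1, "Pr(>|z|)"] < 0.1
}, logical(1))

# Step 2: multivariable Cox model with the retained factors
form <- as.formula(paste("Surv(time, death) ~",
                         paste(candidates[keep], collapse = " + ")))
fit_multi <- coxph(form, data = dat)
summary(fit_multi)  # significance assessed at P < 0.05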
Results In total, 215 patients with COVID-19-related acute respiratory failure were enrolled. Among these patients, 65 had cancer. The patient characteristics, laboratory results, disease severity on the day of respiratory failure, treatments, complications, and outcomes are summarized in Table 1 . The patients with cancer were younger than those without cancer (median age 73 vs 82 years, P = 0.001). Furthermore, the patients with cancer had lower prevalence rates of cerebrovascular accidents (9.2% vs 20.7%, P = 0.041) and heart failure (1.5% vs 14%, P = 0.003) than did the patients without cancer. The patients with cancer had lower absolute lymphocyte counts (median 546.8 vs 781.6 cells/μL, P = 0.003) and higher concentrations of ferritin (median 1035 vs 529 ng/mL, P = 0.002) and lactate dehydrogenase (LDH; median 423 vs 339 U/L, P = 0.01) on the day of respiratory failure than did the patients without cancer. The patients with cancer also had higher mean arterial pressure scores (median 1 vs 0.5, P = 0.022) and a higher prevalence of vasopressor use (43.1% vs 28%, P = 0.03) on the day of respiratory failure than did the patients without cancer. The patients with cancer were more likely to receive remdesivir (90.8% vs 73.3%, P = 0.004), tocilizumab (46.2% vs 30.7%, P = 0.029), and corticosteroids (93.8% vs 82%, P = 0.023) than were the patients without cancer. In terms of outcomes, the patients with cancer were significantly more likely to die in hospital (in-hospital mortality rate 61.5% vs 36%, P = 0.002) and took longer to reach Ct > 30 (median 13 vs 10 days, P = 0.007) than did the patients without cancer. The in-hospital survival and time to reach Ct > 30 in the patients with and without cancer are illustrated in Fig. 1 . The characteristics of the 65 patients with cancer are summarized in Table 2 and Fig. 2 . Most (87.7%) of the patients with cancer had solid tumors, with lung cancer (24.6%) and gastrointestinal tumors (15.4%) being the most common, followed by hematological malignancies (12.3%). In total, 34 (52.3%) patients received cancer-related treatment within 4 weeks before receiving a COVID-19 diagnosis, with approximately half receiving cytotoxic chemotherapy. In-hospital mortality among the patients with cancer was 61.5%, with 25 survivors and 40 nonsurvivors (Table 3 ). The nonsurvivors were more likely to be smokers (42.5% vs 12%, P = 0.024) than were the survivors. Furthermore, the nonsurvivors had higher white blood cell counts (median 12,450 vs 8500 cells/μL, P = 0.006) and higher concentrations of ferritin (median 3220 vs 673.5 ng/mL, P < 0.001), LDH (median 534.5 vs 256 U/L, P < 0.001), lactate (median 33 vs 15.7 mg/dL, P = 0.005), and D-dimer (median 4.605 vs 1.570 μg/mL, P = 0.007) than did the survivors. Additionally, the nonsurvivors had a higher incidence of vasopressor use on the day of respiratory failure (55% vs 24%, P = 0.014) and of new renal replacement therapy during admission (22.5% vs 4%, P = 0.044) than did the survivors. Survival status did not differ significantly according to whether patients had undergone systemic treatment or received cytotoxic chemotherapy within the 4 weeks preceding their COVID-19 diagnosis. A comparison of the characteristics, laboratory data, treatments, complications, and outcomes between the cancer patients who had and had not received recent systemic treatment is summarized in Supplemental Table 1 .
The patients who had undergone recent cancer treatment were younger (median 71.5 vs 79 years, P = 0.029), had lower absolute lymphocyte counts on the day of respiratory failure (median 657.6 vs 440.28 cells/μL, P = 0.030) and higher LDH levels (median 536 vs 342 U/L, P = 0.016), and reached Ct > 30 sooner (median 8.5 vs 17 days, P = 0.033) than did the patients without recent treatment. According to the multivariable analysis (Table 4 ), smoking (OR: 5.804, 95% CI: 1.847–39.746, P = 0.043), an elevated concentration of LDH (OR: 1.004, 95% CI: 1.001–1.012, P = 0.025), vasopressor use on the day of respiratory failure (OR: 5.437, 95% CI: 1.202–24.593, P = 0.028), and new renal replacement therapy during admission (OR: 3.523, 95% CI: 1.203–61.108, P = 0.034) were significantly associated with in-hospital mortality among the patients with cancer and COVID-19-related respiratory failure.
Discussion This study revealed the characteristics of, and the factors influencing in-hospital mortality among, patients with cancer and COVID-19-related respiratory failure during the period in which the omicron variant of SARS-CoV-2 was circulating in Taiwan. Compared with the patients without cancer, the patients with cancer and COVID-19-related respiratory failure exhibited distinct clinical characteristics, including lower lymphocyte counts, higher ferritin and LDH concentrations, and increased vasopressor use. Additionally, the patients with cancer received COVID-19-related treatments more frequently than did the patients without cancer; however, in-hospital mortality was higher among the patients with cancer. Smoking, an elevated LDH concentration, vasopressor use, and new renal replacement therapy were independent predictors of in-hospital mortality in this population. The patients with cancer were generally younger and less likely to have histories of cerebrovascular accidents and heart failure than were the patients without cancer. This finding suggests that advanced-stage cancer itself, rather than comorbidities, contributed to the development of severe disease in these patients. The patients with cancer had lower absolute lymphocyte counts and higher ferritin and LDH concentrations on the day of respiratory failure than did the patients without cancer. Other biomarkers, such as C-reactive protein (CRP), lactate, fibrinogen, D-dimer, and procalcitonin, did not differ significantly between the patients with and without cancer. In a study by Cai et al., among patients with COVID-19, those with cancer had higher concentrations of inflammatory markers and cytokines (high-sensitivity C-reactive protein, procalcitonin, interleukin (IL)-2 receptor, IL-6, and IL-8) and fewer immune cells than did those without cancer, indicating that patients with cancer are more susceptible to immune dysregulation [ 17 ]. Lymphopenia is a marker of COVID-19 severity and may be used to predict respiratory failure [ 20 – 22 ]. Patients with COVID-19 who are critically ill often exhibit hyperferritinemia; however, ferritin concentration is not a reliable predictor of patient outcomes [ 23 – 25 ]. An elevated LDH concentration has also been associated with mortality among patients with COVID-19 with severe disease and acute respiratory distress syndrome [ 22 , 26 – 28 ]. In the present study, we found that the patients with cancer were more frequently treated with remdesivir, tocilizumab, and corticosteroids than were those without cancer. The use of enoxaparin and oral antivirals (nirmatrelvir/ritonavir and molnupiravir) did not differ significantly between the patients with and without cancer. Interleukin (IL)-6, known to be associated with adverse clinical outcomes in patients with COVID-19 [ 29 ], is also a key cytokine in the tumor microenvironment. IL-6, present at high concentrations in various cancer types, correlates with cancer progression and therapeutic resistance [ 30 , 31 ]. IL-6 dysregulation participates in the systemic hyperactivated immune response commonly referred to as the cytokine storm. Corticosteroids modulate inflammation-mediated lung injury and thereby reduce the likelihood of short-term mortality and the need for mechanical ventilation [ 6 , 32 ].
Tocilizumab, a monoclonal antibody against the IL-6 receptor, reduces the likelihood of progression to mechanical ventilation or death in patients hospitalized with COVID-19 and is effective among patients with COVID-19 and various cancer types [ 33 – 35 ]. We propose that corticosteroids and tocilizumab were used more frequently among the patients with cancer than among those without cancer because of the hyperinflammatory status of the patients with cancer, as reflected by their elevated concentrations of inflammatory markers (ferritin and LDH). The immunocompromised status of the patients with cancer may have led to active viral replication; therefore, although remdesivir was used more frequently among the patients with cancer than among those without cancer, the patients with cancer took longer to reach Ct > 30. The patients with cancer thus exhibited prolonged nasopharyngeal viral RNA shedding. Longer viral shedding has been associated with older age, distant metastasis, and more severe COVID-19 disease [ 36 ]. The patients with cancer had higher MAP scores and a greater likelihood of vasopressor use on the day of respiratory failure than did those without cancer, indicating greater hemodynamic instability among these patients. The patients with cancer also demonstrated a significantly higher in-hospital mortality rate than those without cancer, which is consistent with the findings of another study [ 37 ]. Among the patients with cancer in our study, in-hospital mortality was associated with smoking, a higher white blood cell count, and elevated concentrations of ferritin, LDH, lactate, and D-dimer. These factors indicate that an active inflammatory process may have contributed to a poor prognosis. The nonsurvivors with cancer were also significantly more likely to use vasopressors and receive new renal replacement therapy during their admission than were the survivors. Vaccination status; comorbidities; recent systemic cancer treatment; whether patients were admitted due to COVID-19 or infected during hospitalization; SOFA and APACHE II scores on the day of respiratory failure; and specific treatments for COVID-19 (including corticosteroids, antiviral and anticoagulation agents, and tocilizumab) did not significantly affect mortality. In the multivariable analysis, we identified several factors associated with in-hospital mortality among the patients with cancer and COVID-19-related respiratory failure: smoking, an elevated LDH concentration on the day of respiratory failure, vasopressor use on the day of respiratory failure, and new renal replacement therapy during admission. Active smoking is considered an independent predictor of severe disease and mortality among patients with COVID-19 [ 9 , 38 – 40 ]. Current smokers have significantly increased angiotensin-converting enzyme 2 (ACE2) expression in airway epithelial cells compared with nonsmokers, which may provide more entry points for SARS-CoV-2 and potentially increase susceptibility to infection [ 41 ]. However, in one study, active smoking was not associated with COVID-19 severity [ 42 ]. An elevated LDH concentration has been identified as an independent risk factor for disease severity and mortality among patients with COVID-19 [ 27 , 28 , 43 ]. The requirements for mechanical ventilation, vasopressors, and renal replacement therapy have been reported to be poor prognostic factors among patients with cancer admitted to the intensive care unit [ 44 ].
Patients with COVID-19 who are admitted to the intensive care unit frequently receive continuous vasopressor support [ 45 ], highlighting the importance of hemodynamic monitoring and fluid management. Our study has several limitations. First, this was a single-center retrospective cohort study with a limited sample size. Second, the laboratory data and SARS-CoV-2 PCR follow-up intervals were not uniform, which potentially introduced bias. Third, some inflammatory biomarkers, such as IL-6, IL-2R, and IL-8, as well as antibody titers, are either not routinely tested or not available at our hospital; thus, we lacked sufficient data to incorporate them into our analysis. Finally, treatment strategies may have varied considerably according to patient clinical status and clinician practice.
Conclusion Patients with cancer who develop COVID-19-related respiratory failure exhibit distinct clinical characteristics and are more likely to receive specific COVID-19 treatments, such as remdesivir and corticosteroids, than are those without cancer. They also experience unfavorable outcomes, including higher in-hospital mortality and a longer duration of viral shedding, compared with those without cancer. Smoking, elevated LDH concentrations, vasopressor use, and new renal replacement therapy were identified as significant predictors of in-hospital mortality in this patient population. Further research is warranted to validate these findings, elucidate the underlying mechanisms, and explore tailored management strategies to improve outcomes in this vulnerable population.
Background Coronavirus disease 2019 (COVID-19) has affected individuals worldwide, and patients with cancer are particularly vulnerable to COVID-19-related severe illness, respiratory failure, and mortality. The relationship between COVID-19 and cancer remains a critical concern, and a comprehensive investigation of the factors affecting survival among patients with cancer who develop COVID-19-related respiratory failure is warranted. We aimed to compare the characteristics and outcomes of COVID-19-related acute respiratory failure in patients with and without underlying cancer and to analyze the factors affecting in-hospital survival among the cancer patients. Methods We conducted a retrospective observational study at Taipei Veterans General Hospital in Taiwan from May to September 2022, a period during which the omicron variant of the severe acute respiratory syndrome coronavirus 2 was circulating. Eligible patients had COVID-19 and acute respiratory failure. Clinical data, demographic information, disease severity markers, treatment details, and outcomes were collected and analyzed. Results Of the 215 enrolled critically ill patients with COVID-19, 65 had cancer. The patients with cancer were younger and had lower absolute lymphocyte counts, higher ferritin and lactate dehydrogenase (LDH) concentrations, and increased vasopressor use compared with those without cancer. The patients with cancer also received more COVID-19-specific treatments but had a higher in-hospital mortality rate (61.5% vs 36%, P = 0.002) and longer viral shedding (13 vs 10 days, P = 0.007) than did those without cancer. Smoking [odds ratio (OR): 5.804, 95% confidence interval (CI): 1.847–39.746], elevated LDH (OR: 1.004, 95% CI: 1.001–1.012), vasopressor use (OR: 5.437, 95% CI: 1.202–24.593), and new renal replacement therapy (OR: 3.523, 95% CI: 1.203–61.108) were independent predictors of in-hospital mortality among patients with cancer and respiratory failure. Conclusion Critically ill patients with cancer experiencing COVID-19-related acute respiratory failure present unique clinical features and worse clinical outcomes than those without cancer. Smoking, elevated LDH, vasopressor use, and new renal replacement therapy were risk factors for in-hospital mortality in these patients. Supplementary Information The online version contains supplementary material available at 10.1186/s12890-024-02850-z.
Supplementary Information
Abbreviations COVID-19: Coronavirus disease 2019; LDH: Lactate dehydrogenase; OR: Odds ratio; CI: Confidence interval; ARDS: Acute respiratory distress syndrome; SARS-CoV-2: Severe acute respiratory syndrome coronavirus 2; HFNC: High-flow nasal cannula; NIV: Noninvasive ventilation; MV: Mechanical ventilation; RT-PCR: Reverse transcription polymerase chain reaction; BMI: Body mass index; DNR: Do not resuscitate; SOFA: Sequential organ failure assessment; MAP: Mean arterial pressure; APACHE: Acute Physiologic Assessment and Chronic Health Evaluation; CMV: Cytomegalovirus; Ct: Cycle threshold; IL: Interleukin. Acknowledgements Not applicable. Authors' contributions Conceptualization: Ying-Ting Liao, Hsiao-Chin Shen, Jhong-Ru Huang, Chuan-Yen Sun, Hung-Jui Ko, Chih-Jung Chang, Jia-Yih Feng, Wei-Chih Chen, Kuang-Yao Yang. Supervision: Wei-Chih Chen, Jia-Yih Feng, Kuang-Yao Yang, Yuh-Min Chen. Data Collection and/or Processing: Hsiao-Chin Shen, Chuan-Yen Sun, Jhong-Ru Huang, Ying-Ting Liao, Hung-Jui Ko, Chih-Jung Chang. Analysis and/or Interpretation: Ying-Ting Liao, Wei-Chih Chen, Kuang-Yao Yang. Writing – original draft: Ying-Ting Liao, Wei-Chih Chen, Kuang-Yao Yang. Writing – review and editing: Ying-Ting Liao, Wei-Chih Chen, Kuang-Yao Yang. All authors read and approved the final manuscript. Funding This research was funded by grants from Taipei Veterans General Hospital (V111C-050, V111B-024, V112C-068, V112B-031, V112D65–003-MY2–1, V112A-001, V113B-015, V113A-003) and the National Science and Technology Council, Taiwan (MOST109–2314-B-010-051-MY3, NSTC 112–2314-B-075-050, NSTC 112–2314-B-A49–040). Additionally, this work was supported by grants from the Ministry of Education, Higher Education SPROUT Project for Cancer Progression Research Center (111 W31101) and Cancer and Immunology Research Center (112 W31101). Availability of data and materials The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate This retrospective study was performed in accordance with the Declaration of Helsinki and approved by the Institutional Ethical Review Board of Taipei Veterans General Hospital (Approval No. 2022–11-002 AC). Written informed consent was waived by the Institutional Ethical Review Board of Taipei Veterans General Hospital due to the retrospective design of the study. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Pulm Med. 2024 Jan 15; 24:34
oa_package/71/e5/PMC10789018.tar.gz
PMC10789019
0
Background Glioblastoma is one of the deadliest cancer entities, with a median overall survival in the range of just one year in population-based studies [ 1 , 2 ]. The standard of care is confined to maximum safe tumor resection followed by chemoradiotherapy with the alkylating agent temozolomide and maintenance temozolomide therapy [ 3 – 5 ], with or without electromagnetic fields applied via scalp electrodes [ 6 ]. Tumor recurrence invariably occurs, and therapeutic options are then limited [ 5 ]. Therefore, there is an urgent medical need for improved therapeutic options for patients with glioblastoma, especially in the first-line setting. The discovery of electrochemically active, oncogenic neuroglial networks in glioblastoma has sparked attempts to pharmacologically disrupt these networks [ 7 , 8 ]. Glioblastoma cells interconnect to form electrochemically active networks via gap junctions [ 9 ], and these glioma cell networks synaptically integrate into neuronal circuits [ 10 , 11 ]. Oncogenic calcium oscillations of tumor cell networks are activated by autonomously oscillating hub cells [ 12 ], which are present mainly in the tumor core, and through activation of glutamatergic neuroglial synapses within the glioblastoma infiltration zone [ 10 , 11 ]. Remodeling of distant neuronal networks can activate tumor cell networks in a vicious cycle, including through epileptic activity and by activity-dependent shedding of neuronal growth factors [ 13 , 14 ]. Of note, a recent study of long-term electroencephalographic recordings in glioblastoma patients suggests high rates of sub-clinical epileptic activity, which may contribute to inferior survival [ 15 , 16 ]. Glioblastoma cells also synthesize large amounts of the excitatory neurotransmitter glutamate from α-ketoglutarate via branched chain amino acid transaminase-1 (BCAT-1) [ 17 ]; this glutamate is released into the tumor microenvironment at high concentrations via the glutamate-cystine antiporter system x c [ 18 , 19 ]. This non-synaptic glutamate release may drive glioma cell invasion [ 20 ] and will likely enhance the hyperexcitability, and thus the oncogenic activity, of neuroglial networks [ 21 ]. Several brain-penetrating, anti-glutamatergic drugs that are clinically approved for other indications have been identified, including (i) the anti-epileptic drug gabapentin, which interferes with the binding of branched-chain amino acids to BCAT-1 and inhibits thrombospondin-1 signaling by blocking the thrombospondin receptor α2δ-1 [ 17 , 22 , 23 ], (ii) the anti-inflammatory drug sulfasalazine, which inhibits glutamate secretion by blocking the cystine-glutamate exchanger system x c [ 24 ], and (iii) the cognitive enhancer memantine, which blocks N-methyl-D-aspartate (NMDA) type glutamate receptors, thereby inhibiting tumor cell invasion and neuroglial synapse formation [ 25 , 26 ]. The omnipresence and pleiotropic functions of glutamate in glioblastoma provide a rationale for a combined anti-glutamatergic therapeutic approach. The well-documented tolerability of some of these drugs supports the feasibility of a drug repurposing approach in combination with standard chemoradiotherapy. There is limited commercial interest in exploring the activity of these drugs as anti-cancer agents.
Methods Study objectives The primary objective of this study is to evaluate whether the addition of gabapentin, sulfasalazine, and memantine to standard chemoradiotherapy improves the outcome of patients with newly diagnosed glioblastoma compared with chemoradiotherapy alone, as determined by progression-free survival at 6 months. Secondary objectives are to determine tolerability; response rates as defined by the Response Assessment in Neuro-Oncology (RANO) working group [ 27 ]; progression-free survival; overall survival; seizure-free survival; patient quality of life assessed by the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire C30 and Brain Tumor Module 20 (EORTC-QLQ-C30/BN20) [ 28 , 29 ]; caregiver quality of life utilizing the CareGiver Oncology Quality of Life Questionnaire (CarGOQoL) [ 30 ]; symptom burden measured by the MD Anderson Symptom Inventory Brain Tumor (MDASI-BT) [ 31 ] and by the Neurological Assessment in Neuro-Oncology (NANO) scale [ 32 ]; cognitive functioning assessed by the Montreal Cognitive Assessment (MoCA) test [ 33 ]; tumor glutamate levels estimated by magnetic resonance spectroscopy; and anticonvulsant drug and steroid use. Trial design This study is an open-label, randomized, multicenter, phase Ib/II clinical trial. Following informed consent, patients who meet the eligibility criteria will be randomly allocated in a 1:1 fashion to receive either triple glutamate-targeted treatment with gabapentin, sulfasalazine, and memantine plus chemoradiotherapy with temozolomide or chemoradiotherapy alone (Fig. 1 ). A total of 120 patients will be randomized, with 60 participants in each study arm. The allocation sequence will be generated in advance using stratified block randomization with varying block sizes (a simplified illustration follows the inclusion criteria below). Randomization will be stratified by extent of resection (gross total versus subtotal resection or biopsy). Post hoc central neuropathology review will be conducted for quality assurance. Randomized patients will enter the treatment phase and will be followed up until death. Tumor progression will be assessed by contrast-enhanced magnetic resonance imaging every 3 months. An epileptic seizure assessment questionnaire will be completed at every study visit, and routine electroencephalography will be performed every 3 months to assess epileptic seizure control and neuronal hyperexcitability. Database closure will occur 6 months after the last participant is randomized. Patient cohort Patients are recruited at 7 sites in Switzerland (University Hospital Zurich; University Hospital Geneva; University Hospital Basel; Cantonal Hospital Lucerne; University Hospital Bern; Cantonal Hospital St. Gallen; Cantonal Hospital Graubünden). The first patient was enrolled in January 2023. Inclusion criteria Newly diagnosed supratentorial glioblastoma according to the 2021 WHO Classification of central nervous system tumors [ 34 ]; eligible for standard chemoradiotherapy with temozolomide (hypofractionated radiotherapy regimens not allowed); age ≥ 18 years; Karnofsky performance status ≥ 70; normal kidney and liver function; normal hematologic parameters.
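As referenced under Trial design, a simplified base-R sketch of how a stratified 1:1 sequence with varying permuted block sizes could be generated in advance is shown below; the seed, the block sizes of 4 and 6, and the stratum labels are illustrative assumptions, not the trial's actual randomization code.

set.seed(20230101)  # fixed seed so the pre-generated sequence is reproducible

# Permuted blocks of varying size (4 or 6) within each stratum
make_stratum_sequence <- function(n, arms = c("A", "B")) {
  seq_out <- character(0)
  while (length(seq_out) < n) {
    block_size <- sample(c(4, 6), 1)            # varying block sizes
    block <- sample(rep(arms, block_size / 2))  # balanced, permuted block
    seq_out <- c(seq_out, block)
  }
  seq_out[1:n]
}

# One sequence per randomization stratum (extent of resection)
alloc <- list(
  gross_total        = make_stratum_sequence(60),
  subtotal_or_biopsy = make_stratum_sequence(60)
)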
Exclusion criteria Intent to be treated with tumor-directed therapy other than chemoradiotherapy; pregnant or breastfeeding women; intention to become pregnant or father a child during the study; lack of safe contraception; clinically significant concomitant disease; known or suspected non-compliance or drug or alcohol abuse; inability to follow the procedures of the study; participation in another study with an investigational drug; contraindication for gadolinium-enhanced MRI; any prior radiotherapy of the brain; active malignancy that may interfere with the study treatment; abnormal ECG with QTc > 450 ms; previous intolerance reactions to one of the study drugs; intolerance reactions to sulfonamides or salicylates; acute intermittent porphyria; known glucose-6-phosphate dehydrogenase deficiency; concomitant therapy with digoxin, ciclosporin, or methotrexate; history of exfoliative dermatitis, Stevens Johnson syndrome, toxic epidermal necrolysis, drug rash with eosinophilia and systemic symptoms (DRESS) syndrome, or renal tubular acidosis. Study treatment Study treatment includes oral gabapentin, sulfasalazine, and memantine in the experimental study arm. Temozolomide and radiotherapy are standard of care and are given to patients in both arms. The investigational drugs in Arm A will be dosed up to the maximum approved dose, and dosing will be reevaluated in an interim safety analysis after 20 patients have been randomized into the experimental arm. The investigational drugs will be given until tumor progression or withdrawal, whichever occurs first. Dosing will be reduced for at least one week in case of Common Terminology Criteria for Adverse Events (CTCAE) grade 3 drug-related toxicity and permanently discontinued in case of CTCAE grade 4 drug-related toxicity. If toxicity resolves to CTCAE grade 0–1, re-escalation to higher dose levels is allowed. Permanent discontinuation of one of the three investigational drugs for toxicity will not be considered treatment failure; permanent discontinuation of two or more drugs will be considered treatment failure. For discontinuation, the investigational drugs will be tapered following the reverse of the initial dosing schedule. Gabapentin Gabapentin is approved for the treatment of epilepsy and neuropathic pain. The definitive mechanism of action by which gabapentin exerts anticonvulsant and analgesic effects has not been fully clarified. Oral gabapentin will be given at a dose of 3 × 300 mg/day in week 1, 3 × 600 mg/day in week 2, 3 × 900 mg/day in week 3, and 3 × 1200 mg/day from week 4 onwards. The most common adverse events related to gabapentin include neurological symptoms, e.g., ataxia, somnolence, dizziness, vertigo, tremor, diplopia, amblyopia, and nystagmus. Dosing will be permanently discontinued if DRESS syndrome attributed to gabapentin occurs. Sulfasalazine Sulfasalazine is approved for the treatment of ulcerative colitis and rheumatoid arthritis. Oral sulfasalazine will be given at a dose of 3 × 500 mg/day in week 1, 3 × 1000 mg/day in week 2, and 3 × 1500 mg/day from week 3 onwards. Dosing will be reduced if hematologic, liver, or renal toxicity occurs and will be permanently discontinued if Lyell syndrome, Stevens Johnson syndrome, or DRESS syndrome occurs. Memantine Memantine is approved for the treatment of Alzheimer's disease. Oral memantine will be given at a dose of 1 × 5 mg/day in week 1, 1 × 10 mg/day in week 2, 1 × 15 mg/day in week 3, and 1 × 20 mg/day from week 4 onwards. Higher-grade toxicity from memantine is rare overall.
Radiotherapy Patients will receive radiotherapy in daily fractions of 1.8–2 Gy given 5 days per week over 6–7 weeks, for a total dose of 60 Gy delivered in 30–33 fractions. Radiotherapy will be administered concomitantly with temozolomide and, in the experimental arm, also with the investigational drugs. Target volume delineation will be based on postoperative MRI scans (minimum: T1 native and T1 + gadolinium, T2/FLAIR; axial orientation) obtained for treatment planning, taking the pre-operative MRI into consideration as well. Every effort will be made to deliver the full dose to all patients. Up to 7 days of treatment interruption are permitted for any reason. Temozolomide Temozolomide will be administered during radiotherapy at a dose of 75 mg/m² daily, 7 days per week. This is followed by maintenance therapy with up to 6 cycles of temozolomide at 150 to 200 mg/m² for 5 consecutive days every 4 weeks, beginning 4 weeks after the end of radiotherapy [ 3 ]. The most common expected toxicity is myelosuppression. If adverse events persist, treatment will be delayed by 1 week for up to 4 consecutive weeks, after which temozolomide will be discontinued if adverse events have not resolved to grade 1 or lower. Statistical considerations We considered an increase in the progression-free survival rate at 6 months (PFS-6) of 20 percentage points a clinically meaningful result that would warrant further exploration in a phase III clinical trial (assuming a 50% PFS-6 rate following chemoradiotherapy alone and 70% with chemoradiotherapy plus gabapentin, sulfasalazine, and memantine). At a power of 80% and a one-sided significance level of 10%, and allowing for a 10% drop-out rate, 120 patients need to be recruited (60 patients per arm) to detect this difference. The primary outcome will be assessed using a one-sided comparison of the PFS-6 proportions in the two treatment arms at a significance level of 10%, with a 90% confidence interval for the risk difference. Subgroup analyses will be based on two-sided interaction tests at a significance level of 5%. The 8 subgroup analyses will not be adjusted for multiplicity, and any findings will be interpreted as exploratory.
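This sample-size reasoning can be reproduced with a standard two-proportion power calculation; a minimal R sketch under the stated assumptions:

# PFS-6 of 50% vs. 70%, one-sided alpha = 0.10, power = 0.80
pp <- power.prop.test(p1 = 0.5, p2 = 0.7, sig.level = 0.10,
                      power = 0.80, alternative = "one.sided")
n_per_arm <- ceiling(pp$n)   # about 54 evaluable patients per arm
ceiling(n_per_arm / 0.9)     # ~10% drop-out allowance -> 60 per arm, 120 total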
Discussion The recent discovery of glutamatergic neuroglial synapses between peritumoral neurons and glioma cells has sparked cancer neuroscience as a rapidly evolving research field [ 8 , 35 ]. Several pre-clinical studies suggest that pharmacologic interference with these synapses may inhibit glioma growth and invasion [ 10 , 11 , 22 , 35 ]. Hyperexcitability of neuronal networks and tumor-associated epilepsy are deemed drivers of neuroglial signaling [ 22 , 36 ]. Finally, non-synaptic secretion of glutamate into the tumor microenvironment by glioblastoma cells may likewise contribute to hyperexcitability and glioblastoma progression [ 20 , 37 ]. The randomized GLUGLIO trial explores the efficacy of a triple anti-glutamatergic combination of gabapentin, sulfasalazine, and memantine to address whether glutamate may be exploited as a therapeutic lever. Gabapentin reduces glutamate synthesis through inhibition of BCAT-1 [ 17 ] and, through inhibition of the thrombospondin-1 receptor α2δ-1, has recently been found to reduce the functional connectivity of glioma and neuronal networks by inhibiting synaptogenesis, thus reducing tumor cell proliferation [ 22 ]. Moreover, the anti-convulsant effect of gabapentin alone may be beneficial to patients, since a contribution of epilepsy to glioblastoma progression has been suggested by several pre-clinical and clinical studies [ 16 , 22 , 36 ], and long-term electroencephalography suggests that sub-clinical epileptic activity is common [ 15 ]. Along the same lines, a reduction in tumor-associated epilepsy has also been demonstrated for sulfasalazine, an inhibitor of the glutamate-cystine antiporter system x c [ 37 , 38 ]. A decrease in peritumoral glutamate after a single sulfasalazine administration has been documented in glioblastoma patients using magnetic resonance spectroscopy [ 18 ]. Memantine may inhibit NMDA receptor-dependent synapse formation between neurons and tumor cells, interfering with processes similar to those of long-term potentiation during physiologic memory formation [ 39 ], as has been demonstrated for synapse formation between neurons and brain metastatic cancer cells [ 26 ]. Moreover, the neuroprotective effects of NMDA receptor inhibition may enhance neurocognitive function, similar to the indication of memantine in the treatment of Alzheimer's dementia [ 40 ]. Of the investigational medicinal products tested in the GLUGLIO trial, only sulfasalazine and memantine have thus far been explored for efficacy, each in a small, uncontrolled clinical study: a study of sulfasalazine monotherapy in glioblastoma patients with advanced disease was terminated for lack of efficacy following the inclusion of 8 patients [ 41 ]. In an early-phase clinical trial, memantine in combination with temozolomide, with or without mefloquine and metformin, was administered to patients with newly diagnosed glioblastoma, and memantine was overall well tolerated [ 42 ]. However, the exploratory efficacy results of this trial are difficult to interpret, because there was no standard-of-care control arm, the sample size per treatment arm was small, and survival was not reported by treatment arm or after excluding patients with isocitrate dehydrogenase-mutant astrocytomas. Two ongoing phase I/II clinical trials seek to explore pharmacological interference with neural circuits and tumor cell networks in glioblastoma.
The first trial, conducted by the Neuro-Oncology Working Group of the German Cancer Society, investigates meclofenamate as a means to disrupt gap junctions within tumor-microtube networks in recurrent glioblastoma, the primary endpoints being safety and efficacy, measured by the incidence of dose-limiting toxicities and progression-free survival, respectively (EudraCT 2021-000708-39). The second trial evaluates biological effects of perampanel, a non-competitive antagonist of AMPA receptors, on neuron-tumor interactions in a pre-surgery setting (EudraCT 2023-503938-52). Another report of ten glioma patients treated with perampanel for intractable epilepsy found at best minor effects on tumor growth based on MR images [ 43 ]. However, the small cohort size and the inclusion of various glioma entities limit the interpretability of this study with respect to anti-tumor efficacy. Other than the GLUGLIO trial, no randomized clinical trials or uncontrolled studies with efficacy endpoints addressing the interplay of neuronal networks and glioblastoma cells had been registered as of November 2023. Whether or not epilepsy is causally related to survival of glioblastoma patients is not known. In fact, epilepsy has been proposed as an indicator of longer survival of glioblastoma patients [ 44 ], albeit retrospective analyses of survival associations with epilepsy are difficult to assess for several reasons: for example, glioblastomas that become symptomatic through epilepsy, as compared with those that become symptomatic through other neurological deficits, may be diagnosed earlier in the disease course, and rates of complete resection may be higher due to cortical tumor location and smaller tumor size [ 45 ]. One retrospective study therefore employed time-dependent multivariate analyses to analyze associations of epilepsy with survival of glioblastoma patients and supported the notion of unfavorable effects of epilepsy [ 16 ]. Finally, anticonvulsant therapy with valproic acid or levetiracetam was not associated with overall survival of glioblastoma patients in a post hoc analysis of a large merged cohort derived from different phase 3 clinical trials [ 46 ], but such analyses have limitations, since only drug use at distinct timepoints could be analyzed and the extent of drug exposure therefore remains uncertain. The GLUGLIO trial addresses some of these issues: pre-specified subgroup analyses, albeit with low power, will assess survival separately among patients with or without epilepsy and will thus enhance our understanding of whether anti-convulsant therapy may indeed be beneficial for glioblastoma patients who do not suffer from clinically apparent epilepsy. The secondary objective of the GLUGLIO trial of exploring epileptic activity through serial EEG recordings will also help to better understand the postulated interplay between epilepsy and tumor progression. A major limitation of the GLUGLIO trial is the small sample size, requiring one-sided hypothesis testing and a significance level of 10%. The small sample size will moreover limit the sensitivity of the pre-defined subgroup survival analyses and of putative signal-seeking post hoc analyses. Moreover, should efficacy of the addition of triple anti-glutamatergic therapy indeed be observed in this trial, the combination approach precludes the definite assignment of efficacy to any of the individual drugs, thus compromising the design of a putative phase III follow-up trial. Additional pre-clinical studies in relevant tumor models will therefore be required.
However, the rationale for the combination approach over testing a single drug was that combined targeting of glutamate synthesis (gabapentin), secretion (sulfasalazine) and signaling (memantine) may have additive effects, and that the lack of an efficacy signal would conversely provide a strong rationale against glutamate-targeted treatment approaches in the future. The GLUGLIO trial is currently ongoing and first results are expected by the end of 2026.
Background Glioblastoma is the most common and most aggressive malignant primary brain tumor in adults. Glioblastoma cells synthesize and secrete large quantities of the excitatory neurotransmitter glutamate, driving epilepsy, neuronal death, tumor growth and invasion. Moreover, neuronal networks interconnect with glioblastoma cell networks through glutamatergic neuroglial synapses, activation of which induces oncogenic calcium oscillations that are propagated via gap junctions between tumor cells. The primary objective of this study is to explore the efficacy of adding brain-penetrating anti-glutamatergic drugs to standard chemoradiotherapy in patients with glioblastoma. Methods/design GLUGLIO is a 1:1 randomized phase Ib/II, parallel-group, open-label, multicenter trial of gabapentin, sulfasalazine, memantine and chemoradiotherapy (Arm A) versus chemoradiotherapy alone (Arm B) in patients with newly diagnosed glioblastoma. Planned accrual is 120 patients. The primary endpoint is progression-free survival at 6 months. Secondary endpoints include overall and seizure-free survival, quality of life of patients and caregivers, symptom burden and cognitive functioning. Glutamate levels will be assessed longitudinally by magnetic resonance spectroscopy. Other outcomes of interest include imaging response rate, neuronal hyperexcitability determined by longitudinal electroencephalography, Karnofsky performance status as a global measure of overall performance, anticonvulsant drug use and steroid use. Tumor tissue and blood will be collected for translational research. Subgroup survival analyses by baseline parameters include segregation by age, extent of resection, Karnofsky performance status, O 6 -methylguanine DNA methyltransferase (MGMT) promoter methylation status, steroid intake, presence or absence of seizures, tumor volume and glutamate levels determined by MR spectroscopy. The trial is currently recruiting in seven centers in Switzerland. Trial registration NCT05664464. Registered 23 December 2022. Keywords
Abbreviations BCAT-1: Branched chain amino acid transaminase-1; DRESS: Drug rash with eosinophilia and systemic symptoms; ESAQ: Epileptic seizure assessment questionnaire; Gy: Gray; NMDA: N-methyl-D-aspartate; PFS: Progression-free survival; RANO: Response Assessment in Neuro-Oncology Acknowledgements Not applicable. Authors’ contributions HGW, LH and MW designed the study and wrote the initial trial protocol. MM and HGW drafted the manuscript. LH is the trial statistician. PR, AF, AFH, THu, DM, AO, HL, KS are involved in data collection. AB is the involved neuroradiologist; THo is the involved neuropathologist; LI is involved in electroencephalography data collection and analysis. All authors read and approved the final manuscript. Funding The GLUGLIO trial has undergone independent peer review and is funded by an Investigator-Initiated Clinical Trial (IICT) grant from the Swiss National Science Foundation (33IC30-198794). Availability of data and materials No datasets were generated or analysed during the current study. Declarations Ethics approval and consent to participate The conduct of this clinical trial was approved by the Swiss Association of Research Ethics Committees (Business Administration System for Ethics Committees (BASEC) ID: 2022-01877, lead ethics committee: cantonal ethics committee Zurich, approval date: 20.12.2022) and by the Swiss federal authorities (Swiss Agency for Therapeutic Products, Swissmedic No. 701474). Written informed consent for participation in this clinical trial is required from all study participants. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Cancer. 2024 Jan 15; 24:82
oa_package/f9/76/PMC10789019.tar.gz
PMC10789020
0
Introduction Asthenozoospermia (AS) is one of the most frequent causes of male infertility. It is characterized by reduced sperm progressive motility (< 32%). AS may occur as an isolated condition or as one aspect of other semen anomalies [ 1 ]. The etiologies of AS are complex and varied, and include inflammation, immune defects, irregular lifestyles, and genetics [ 2 ]. The gut microbiota plays a role in human immunity and resistance to pathogens [ 3 , 4 ]. Gut microbiota dysbiosis is usually associated with an abnormality in microbial diversity, resulting in inflammation and autoimmune diseases [ 5 , 6 ]. Moreover, the intestinal flora participates in the regulation of inflammation and immune protection in many organs, such as the brain and testes [ 7 , 8 ]. Gut microbiota dysbiosis could affect the integrity of the blood-testis barrier (BTB), eventually impairing testicular spermatogenic processes through the potential mechanisms below. On one hand, the testes usually cannot synthesize nutrients themselves. Blood vessels in the testes transport nutrients, including those synthesized or metabolized by the gut microbiota, from the digestive system to the testicular interstitium. These nutrients, such as vitamins and minerals, are vital for normal testicular function [ 9 ]. Gut microbiota dysbiosis may disturb this nutritional supply and subsequently affect testicular function [ 10 ]. On the other hand, gut microbiota dysbiosis may result in a chronic inflammatory status and an excessive immunological response that disrupts the spermatogenic processes in the testes [ 11 ]. For example, gut microbiota dysbiosis can cause abnormal intestinal permeability and increase lipopolysaccharide (LPS) levels in the blood. The increased LPS can induce innate immunity and activate testicular LPS/TLR4/MyD88/NF-κB pathways [ 12 ]. This process can induce testicular endothelial injury and damage the BTB, eventually impairing spermatogenesis. Identifying the intestinal flora composition is significant for understanding the causes and pathogenic mechanisms of the gut-testis axis and clarifying the relationship between the microflora and infertility. There are few reports specifically investigating gut microbiota characteristics in isolated AS patients. Hence, our study aimed to examine the microbiota characteristics in the gut of AS patients and discover the potentially key gut microbiota associated with the development of AS.
Materials and methods Study participants A flowchart of the study design is shown in Fig. 1 . Male patients were recruited in the outpatient department of Tianjin Medical University General Hospital. The study started in September 2021 and ended in March 2023. A total of 580 males were recruited during the study. Patients were diagnosed with isolated AS according to semen analysis results. Healthy men with normal semen were regarded as normal controls. Demographic characteristics and clinical parameters were recorded in detail. The inclusion criteria were as follows: (1) age between 20 and 40 years; (2) no use of antibiotics or hormonal drugs within the previous 6 months; (3) no genetic illnesses. The exclusion criteria were: (1) AS that was not isolated, i.e., combined with other semen abnormalities; (2) chronic diseases such as hypertension, diabetes, and cardiovascular diseases; (3) other clinical symptoms or diseases (such as depression and inflammatory bowel disease) that could potentially impact the intestinal flora; (4) use of probiotics or prebiotics in the past six months. The study was performed according to the Declaration of Helsinki and approved by the Ethics Committee of Tianjin Medical University General Hospital (IRB2022-KY-308). Written informed consent was obtained from all participants. All data will be available from the corresponding author on reasonable request. Collection of semen samples Semen was collected according to the World Health Organization (WHO) laboratory manual for the examination and processing of human semen. After 3–5 days of abstinence, semen was produced by masturbation. Before collecting semen samples, the hands and penises of these males were washed 3 times using warm soapy water and then wiped with 75% alcohol. Semen was ejaculated directly into a sterile container. Sperm parameters were tested and analyzed as per the WHO laboratory manual. Feces specimen collection A total of 108 fresh fecal samples were collected for gut microbiome analysis. Each man provided a single fecal sample. All fecal samples were collected using sterile, DNase-free containers and stored at − 80 °C until DNA extraction. DNA extraction and quality check Fecal genomic DNA was extracted using the E.Z.N.A. Stool DNA Kit (Omega Bio-tek, Inc., USA) following the manual. The concentration and quality were checked with a NanoDrop 2000 spectrophotometer (Thermo Scientific Inc., USA). DNA samples were stored at − 20 °C for further experiments. Analysis of the gut microbiota The V3–V4 hypervariable region of the bacterial 16S rRNA gene was amplified with the universal primers 338F (5’-ACTCCTACGGGAGGCAGCAG-3’) and 806R (5’- GGACTACNNGGGTATCTAAT-3’). Raw data were divided into different samples according to the barcode sequence using QIIME (v1.8.0) software. A detailed description of the analysis methods is given in supplementary file 1 . Statistical analysis Dichotomous variables were presented as frequencies and compared using the chi-square test. Continuous variables were presented as mean (standard deviation, SD) or median (interquartile range, IQR). They were compared using the independent Student’s t-test or the Wilcoxon rank-sum test. Statistical analyses were conducted using SPSS 23.0 (SPSS Inc., Chicago, IL, USA) and R software (v3.6.0). A value of p < 0.05 was considered statistically significant.
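As a minimal illustration of the group comparisons described above (the authors' actual analyses used SPSS and R; the Python snippet below uses hypothetical numbers and is only a sketch of the same tests), scipy provides the chi-square, t-, and Wilcoxon rank-sum tests directly:

import numpy as np
from scipy import stats

# Hypothetical 2x2 table for a dichotomous variable (rows: AS, NC; columns: yes, no)
smoking_table = np.array([[18, 42], [12, 36]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(smoking_table)

# Hypothetical continuous measurements (progressive motility, %)
motility_as = np.array([21.2, 14.6, 26.0, 19.8, 23.1])
motility_nc = np.array([54.8, 40.2, 61.4, 48.9, 57.3])

# Wilcoxon rank-sum (Mann-Whitney U) test for non-normal variables
u_stat, p_rank = stats.mannwhitneyu(motility_as, motility_nc, alternative="two-sided")

# Independent Student's t-test for normally distributed variables
t_stat, p_t = stats.ttest_ind(motility_as, motility_nc)

print(p_chi2, p_rank, p_t)  # compare each against the 0.05 threshold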
Results Clinical characteristics After rigorous screening, 108 men were enrolled in this study, including 60 men with isolated AS (AS group) and 48 healthy control men (NC group). Demographic characteristics of the participants are shown in Table 1 . In general, no significant differences between the two groups were observed in age ( p = 0.570), weight ( p = 0.696), height ( p = 0.810), or body mass index (BMI; p = 0.794). There were also no significant differences in lifestyle factors, including smoking, alcohol consumption, and physical exercise ( p > 0.05). Besides, analysis of dietary habits showed no significant differences between the two groups in tea consumption, coffee consumption, egg consumption, soy or dairy consumption, meat consumption, and vegetable consumption ( p > 0.05). Therefore, demographic characteristics, lifestyles, and dietary habits were comparable between the AS and NC groups. Sperm parameters The median semen volume was 3.04 (IQR 2.71–5.15) mL in the AS group and 3.18 (IQR 2.25–4.86) mL in the NC group, with no significant difference ( p = 0.718). Similarly, no significant differences between the two groups were observed in sperm concentration ( p = 0.109) or total sperm count ( p = 0.200). The median total sperm motility in percentage was 30.70 (IQR 23.25–34.38) in the AS group and 59.00 (IQR 43.44–63.73) in the NC group, while progressive sperm motility in percentage was 21.22 (IQR 14.58–26.01) and 54.81 (IQR 40.17–61.39), respectively. Compared with the NC group, total sperm motility ( p < 0.001) and progressive sperm motility ( p < 0.001) were significantly decreased in the AS group. All sperm parameters are summarized in Table 1 . Altered diversity of the intestinal flora After sequence processing and filtering, the average read count per sample was 52,998 (range, 28,304 to 99,487). The sequencing depth was visualized using rarefaction curves. The curves for all samples were nearly horizontal with increasing sequencing depth, indicating that the depth was appropriate. The rate of increase in operational taxonomic units (OTUs) in the NC group quickly exceeded that in the AS group, indicating that AS patients had relatively lower taxon richness. Alpha diversity reflects the richness and diversity of microbial communities. Consistently, significantly lower species richness (Chao1 index and observed OTUs) was found in the AS group than in the NC group ( p < 0.001, Fig. 2 A, B). The Shannon index, which measures both richness and evenness, was not significantly different between the two groups ( p = 0.268, Fig. 2 D). However, the PD whole-tree index, a phylogenetic measure of diversity, was significantly lower in the AS group than in the NC group ( p < 0.001, Fig. 2 C). Therefore, there was a noticeable and significant decrease in the alpha diversity indices of AS men compared with NC men. To evaluate the extent of similarity between microbial communities, we calculated beta-diversity values using the unweighted UniFrac method. Principal coordinates analysis (PCoA) illustrated that AS patients significantly differed from NCs ( p < 0.01, Fig. 2 E). Furthermore, ANOSIM (analysis of similarities) revealed a significant difference between the AS and NC groups (ANOSIM, R statistic = 0.506, p = 0.001). These results revealed a remarkable alteration in the gut microbiome between the two groups. Taxonomic changes of intestinal flora To further investigate gut microbial composition, we analyzed the results at the phylum level.
Firmicutes , Bacteroidota , Proteobacteria , and Actinobacteria were the predominant phyla in both groups (Fig. 3 A). Compared to the NC group, the relative abundance of Firmicutes was significantly decreased in the AS group ( p = 0.042, Fig. 3 B), whereas the relative abundance of Proteobacteria was significantly higher in the AS group ( p = 0.016, Fig. 3 B). No significant differences were observed in the relative abundance of Bacteroidota and Actinobacteria between the AS and NC groups (Fig. 3 B). Furthermore, the ratio of Firmicutes / Bacteroidota (F/B) showed a lower trend in the AS group (Fig. 3 C), although the difference was not significant ( p = 0.582). At the family level, Enterobacteriaceae , Erysipelatoclostridiaceae , Pasteurellaceae , and Lactobacillaceae were predominant in the AS group, whereas Erysipelotrichaceae were prevalent in the NC group (Fig. 3 D). At the genus level, the gut microbiota composition was analyzed. Overall, the top five most abundant genera detected in the AS group were Bacteroides (14.47%), Escherichia-Shigella (11.43%), Prevotella (9.15%), Blautia (4.57%), and Faecalibacterium (4.56%), whereas those in the NC group were Bacteroides (15.95%), Megamonas (6.26%), Prevotella (5.98%), Subdoligranulum (5.60%), and Blautia (4.66%). Thus, the dominant genera in both groups were Bacteroides , Prevotella , and Blautia (Fig. 4 A). The Sankey diagram revealed the changes in microbial composition from the phylum level to the genus level (Fig. 4 B). To compare the taxonomic profiles between AS and NC, genera with a relative abundance > 0.01% in each sample were selected. The Wilcoxon rank-sum test showed that fecal samples from the AS group exhibited a higher relative abundance of Escherichia-Shigella , Erysipelotrichaceae_UCG-003 , Aggregatibacter , Alloprevotella , Holdemanella , Lactobacillus , Phascolarctobacterium , Catenibacterium , Fusobacterium , Erysipelatoclostridium , Sutterella , Muribaculaceae , Desulfovibrio , Prevotellaceae_Ga6A1_group , and Parasutterella . The NC group showed higher relative abundances of Nocardioides , Pseudarthrobacter , MB-A2-108 , and Prevotellaceae_UCG-001 (Fig. 5 A). A Circos diagram showed the differences in the relative abundance of these genera between the two groups (Fig. 5 B). Identifying key intestinal flora Linear discriminant analysis effect size (LEfSe) identified differentially abundant taxa between the AS and NC groups. In total, 54 differentially abundant taxa were identified at a linear discriminant analysis (LDA) score > 3.0 (Fig. 6 A and B). Of them, 39 were highly abundant in the AS group, whereas 15 were highly abundant in the NC group. At the genus level, LEfSe feature selection identified notably higher abundances of the genera Escherichia_Shigella , Erysipelotrichaceae_UCG_003 , Aggregatibacter , Alloprevotella , Holdemanella , Lactobacillus , Phascolarctobacterium , Parasutterella , Muribaculaceae , and Desulfovibrio in the AS group, and enrichment of Prevotellaceae_UCG_001 in the NC group (Fig. 6 B). The Circos diagram showed the difference in the relative abundances of these identified genera between the two groups (Fig. 6 C). These results demonstrated specific changes in the gut microbial composition of the AS group compared with that of the NC group. Microbial co-occurrence network The co-occurrence network of the 11 key genera identified by the LEfSe analysis was constructed.
These differential genera were used to construct an interaction network presenting the relationships among intestinal flora markers (Spearman’s correlation, |correlation coefficient| > 0.3, p < 0.05). The co-occurrence network of all samples was mainly centered on Escherichia_Shigella , Muribaculaceae , and Alloprevotella (Fig. 7 A). The genus Escherichia_Shigella was positively correlated with all other key genera except Prevotellaceae_UCG-001 . The genus Prevotellaceae_UCG-001 was negatively correlated with all other genera. Notably, AS-enriched genera (including Escherichia_Shigella , Erysipelotrichaceae_UCG_003 , Aggregatibacter , Alloprevotella , Holdemanella , Lactobacillus , Phascolarctobacterium , Muribaculaceae , and Desulfovibrio , Fig. 7 B) were more highly interconnected than NC-enriched genera (including Prevotellaceae_UCG-001 and Alloprevotella , Fig. 7 C). Correlation analysis among key genera and clinical indicators To explore the predictive and discriminatory power of the intestinal flora in AS, we investigated the relationship between the relative abundances of the key genera (n = 11, identified by the LEfSe analysis) and clinical sperm parameters. Spearman’s correlation analysis was performed on the key genera and clinical sperm parameters. The results showed that none of the 11 key genera were correlated with semen volume, total sperm count, or sperm concentration ( p > 0.05). The key genera Escherichia_Shigella , Erysipelotrichaceae_UCG_003 , Aggregatibacter , Alloprevotella , Holdemanella , Lactobacillus , Phascolarctobacterium , Parasutterella , Muribaculaceae , and Desulfovibrio were all negatively correlated with total sperm motility and progressive sperm motility. In contrast, the key genus Prevotellaceae_UCG_001 was positively correlated with total sperm motility and progressive sperm motility. A correlation heatmap was generated to clearly show the above results (Fig. 8 A). Functional profile analysis of the intestinal flora PICRUSt2 was used to predict differential KEGG pathways and explore the potential functions of the intestinal flora in the AS and NC groups. Of the 178 KEGG (level 3) pathways tested, 88 pathways were differentially enriched between the AS patients and NC men at p < 0.05. STAMP analysis showed increased pathways in the AS group, including selenocompound metabolism, sulfur metabolism, RNA degradation, nitrogen metabolism, and purine metabolism. Meanwhile, the AS group exhibited reduced activity in some key pathways such as meiosis, drug metabolism, cyanoamino acid metabolism, tetracycline biosynthesis, polyketide sugar unit biosynthesis, and steroid biosynthesis (Fig. 8 B).
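A minimal sketch of how such a Spearman-based screen and network-edge selection can be implemented is shown below; the abundance matrix, sample count, and genus columns are placeholders, since the authors' own analysis scripts are not part of the paper:

import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Placeholder relative-abundance matrix: 108 samples x 3 key genera
genera = pd.DataFrame(
    rng.random((108, 3)),
    columns=["Escherichia_Shigella", "Lactobacillus", "Prevotellaceae_UCG_001"],
)
progressive_motility = pd.Series(rng.random(108) * 60)

# Genus-versus-sperm-parameter screen, as summarized in the correlation heatmap
for genus in genera.columns:
    rho, p = spearmanr(genera[genus], progressive_motility)
    print(f"{genus}: rho={rho:.2f}, p={p:.3f}")

# Co-occurrence edges between genera: retain pairs with |rho| > 0.3 and p < 0.05
cols = list(genera.columns)
for i, gi in enumerate(cols):
    for gj in cols[i + 1:]:
        rho, p = spearmanr(genera[gi], genera[gj])
        if abs(rho) > 0.3 and p < 0.05:
            print(f"edge: {gi} -- {gj} (rho={rho:.2f})")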
Discussion In the present study, high-throughput sequencing technology was employed to examine the intestinal flora of AS patients and NC men. Our study demonstrated that the intestinal flora characteristics of patients with AS were significantly different from those of normal males. Lower richness and diversity (α-diversity and β-diversity) of the intestinal flora were observed in the AS patients. The relative abundance of Proteobacteria , which can promote intestinal inflammation and tumorigenesis, was significantly higher in AS men. Our research also identified key gut microbiota in AS patients, including 11 important genera such as Escherichia_Shigella and Prevotellaceae_UCG_001 , which could serve as potential biomarkers for AS. Recent research has found that an imbalance in microbiota diversity can result in many illnesses, such as non-alcoholic fatty liver disease [ 13 ]. The microbiota profile of human semen was previously analyzed, and some relevant genera such as Lactobacillus and Prevotella were identified [ 14 ]. In our study, four significant phyla, including Proteobacteria , Firmicutes , Actinobacteria , and Bacteroidetes , were identified. Moreover, the major genera in the gut were also analyzed. The most abundant genus in both groups was Bacteroides . Bacteroides is regarded as a potentially harmful genus. Some prior animal studies demonstrated that the abundance of Bacteroides was negatively associated with sperm concentration and motility [ 12 , 15 ]. Moreover, we discovered a significant increase in the abundance of Escherichia-Shigella , Erysipelotrichaceae_UCG-003 , Aggregatibacter , Alloprevotella , Holdemanella , and Lactobacillus , along with a decrease in Nocardioides , Pseudarthrobacter , MB-A2-108 , and Prevotellaceae_UCG-001 , in the gut of AS patients. Escherichia-Shigella is usually regarded as a harmful bacterium and can cause sepsis and hemorrhagic colitis. In some cases, Escherichia-Shigella participates in genotoxin synthesis and has been linked to defects in DNA replication [ 16 ]. Lactobacillus is a gram-positive bacterium that can be associated with the synthesis of short-chain fatty acids. Even though some researchers have reported that short-chain fatty acids can be beneficial to human health, excessive Lactobacillus abundance in males could alter the semen pH and create an abnormal microenvironment for spermatogenesis. Dysbiosis of these key microbiota could have a significant impact on progressive sperm motility. The co-occurrence network revealed varying correlations within the intestinal flora. In the network encompassing all samples, we found that Prevotellaceae_UCG_001 was negatively correlated with all other genera. However, Alloprevotella and Escherichia-Shigella were identified as the core genera in the network encompassing AS samples. Meanwhile, we showed that the density of correlations decreased markedly from the AS group to the NC group. This indicated that the microbiota profile in AS patients was altered, which could result in alterations of the host phenotype, including decreased sperm motility. Besides, Escherichia-Shigella was identified as one of the most abundant genera in the AS group and is usually regarded as potentially noxious. Escherichia-Shigella was considerably correlated with other genera in the co-occurrence network. Therefore, we considered that these potentially harmful genera could exert synergistic effects on each other and contribute to the occurrence of AS. PICRUSt2 software was employed to predict the relevant metabolic pathways.
The activity of the steroid biosynthesis pathway was significantly lower in the AS group than in the NC group. Sex hormones, which are a form of steroid, are well known to play a role in semen quality. Prior research has shown that abnormalities in sex hormones can lead to impaired semen quality [ 17 ]. Alterations in sex hormone levels, such as FSH, LH, and T levels, are associated with testicular impairment, impeded sperm production and maturation, and reduced sperm motility [ 18 ]. These outcomes revealed a strong relationship between the intestinal flora and human metabolism. In addition, the KEGG pathway analysis results showed a significant difference in glycerophospholipid metabolism between the two groups. Glycerophosphocholine and lysophosphatidylcholine are mainly produced by glycerophospholipid metabolism, and these two metabolites are associated with semen quality [ 19 , 20 ]. Therefore, the gut microbiota of AS men might affect sperm motility through abnormal metabolic activity. As far as we know, the present study is the first to report a strong association between gut microbiota alteration and AS development. The outcomes may provide new insights into the function of the gut microbiota in the occurrence of AS. Some limitations exist in our study and should be addressed in future work. First, metabolites in serum or feces were not examined. The detection of metabolites and the relation of their changes to microbiota dysbiosis might offer further insight into the detailed mechanism of AS. Second, metagenomic sequencing of the gut microbiota might be useful in illustrating the molecular mechanisms of AS and providing guidance for the prevention, diagnosis, and management of AS. Nevertheless, our study also has notable strengths. First, the large sample size of our 16S rDNA analysis lends credibility to our findings. Second, some factors such as inflammatory bowel disease and depression could potentially impact the gut microbiota. Our study excluded patients with these comorbidities, which improved the accuracy of our findings. Third, our study innovatively analyzed and compared the dietary habits of the participants, which are also key factors influencing the gut microbiota, and no significant differences were observed in dietary habits between the two groups. Thus, our findings are more convincing and valuable. In short, this research significantly contributes to the understanding of the pathogenesis and management of AS.
Conclusion It appears that the composition of the intestinal flora in AS patients differed from that in healthy men, suggesting that AS development may be associated with intestinal flora dysbiosis. Key gut microbiota biomarkers were screened, and relevant metabolic pathways were predicted in our study. The gut microbiota has a potential role in discriminating AS patients from healthy controls and may function as a promising biomarker of AS. These key gut microbiota are expected to be applied in future clinical studies to provide a gut microbiome-based personalized approach for AS patients.
Background Identification of the intestinal flora composition is significant for exploring the causes and pathogenic mechanisms of the gut-testis axis and clarifying the relationship between microbiota and infertility. Our study aimed to examine alterations in gut microbiota composition and identify potential microbes associated with the development of asthenozoospermia (AS). Method A total of 580 males were recruited in the outpatient department of Tianjin Medical University General Hospital between September 2021 and March 2023. Sperm parameters were analyzed according to the WHO laboratory manual. 16S rRNA gene high-throughput sequencing was performed to detect the gut microbiota composition in fecal samples. LEfSe analysis was used to screen key microbiota. PICRUSt2 software was utilized to predict relevant pathways. Results After rigorous screening, 60 isolated AS patients (AS group) and 48 healthy men (NC group) were enrolled. No significant differences were observed in demographic characteristics ( p > 0.05), semen volume ( p = 0.718), sperm concentration ( p = 0.109), or total sperm count ( p = 0.200). Sperm total motility and progressive motility were significantly decreased in the AS group ( p < 0.001). AS patients had significantly lower alpha diversity indices (Chao1, observed OTUs, and PD whole-tree; p < 0.05). The beta-diversity of the gut microbiota in AS patients significantly differed from that in NC men (PCoA analysis, p = 0.001). Firmicutes , Bacteroidota , Proteobacteria , and Actinobacteria were the primary phyla, with the dominant genera including Bacteroides , Prevotella , and Blautia . Eleven key genera such as Escherichia_Shigella and Prevotellaceae_UCG_001 were identified by LEfSe analysis. Most of these genera were negatively correlated with sperm motility. Eighty-eight KEGG pathways, including steroid biosynthesis and meiosis, were differentially enriched between the two groups. Conclusions It appears that the gut microbiota composition in AS patients significantly differed from that in healthy men, and the development of AS might be associated with intestinal flora dysbiosis. Supplementary Information The online version contains supplementary material available at 10.1186/s12866-023-03173-5. Keywords
Electronic supplementary material Below is the link to the electronic supplementary material.
Acknowledgements Not Applicable. Author contributions Yang Pan and Xiaoqiang Liu performed the research. Shangren Wang collected the data. Yang Pan analyzed the data and wrote the paper. Li Liu and Xiaoqiang Liu designed the research study and revised the paper. The final version of the manuscript was approved by all authors. Funding This work was supported by a grant from the National Natural Science Foundation of China (No. 82171594). Data availability All data will be available from the corresponding author on reasonable request. The datasets generated and/or analyzed during the current study are available in the SRA database of NCBI. Declarations Ethics approval and consent to participate An informed consent form was signed by all participants, and our study was performed in accordance with the Declaration of Helsinki. The present study protocol was reviewed and approved by the institutional review board of Tianjin Medical University General Hospital (IRB2022-KY-308). Informed consent was provided by all subjects when they were enrolled. Consent for publication Not Applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Microbiol. 2024 Jan 15; 24:22
oa_package/57/1f/PMC10789020.tar.gz
PMC10789021
0
Background Hyperglycemia in pregnancy affected around 21.1 million (16.7%) live births in 2021. The majority (80.3%) of these were diagnosed for the first time during pregnancy and went on to be classified as Gestational Diabetes Mellitus (GDM) [ 1 ]. The burden is disproportionately higher in low- and middle-income countries (LMICs), where socioeconomic and environmental stressors such as exposure to poor nutrition in early childhood, limited access to healthcare facilities, and a genetic predisposition in certain ethnicities are thought to contribute to the higher burden [ 2 , 3 ]. Several studies have shown a linear relationship between blood glucose levels during pregnancy and adverse maternal-fetal outcomes and the risk of diabetes mellitus later in life [ 4 ]. Increased placental transport of glucose leads to elevated fetal insulin and insulin-like growth factor 1 (IGF-1) levels, causing fetal overgrowth or macrosomia [ 5 ]. Macrosomia increases the risk of obstructed labor and cesarean delivery [ 6 ]. Excess fetal insulin production can contribute to β-cell dysfunction and insulin resistance, increasing the risk of hypoglycemia and brain injury after birth [ 7 ]. There is also an increased risk of stillbirth and preterm birth, due to mechanisms such as oxidative stress, placental dysfunction, pre-eclampsia, and fetal macrosomia [ 8 ]. Thus, it is important to screen women for elevated glucose levels to prevent serious complications of pregnancy. The American Diabetes Association (ADA) has recommended fasting blood glucose levels and the oral glucose tolerance test (OGTT) as the gold standard diagnostic tests for GDM [ 9 ]. However, both of these tests require prolonged fasting, and the OGTT can be practically burdensome in low-resource settings with limited access to healthcare. In contrast, glycated hemoglobin (HbA1c) is a test that reflects average glucose levels over the preceding 90–120 days. It is routinely used for monitoring glycemic control in diabetic patients. An HbA1c value greater than 6.5% is diagnostic of diabetes mellitus in non-pregnant individuals [ 10 ]. In addition, if performed as a point-of-care test, it can improve testing compliance for monitoring hyperglycemia in a single visit. However, there is currently no clear consensus on its use in the screening and management of pregnant women for GDM. Thus, in this paper, we report our experience of performing point-of-care HbA1c testing as a biomarker of hyperglycemia during early to mid-pregnancy in a large cohort of pregnant women across three countries in Asia and Africa, and its association with adverse pregnancy outcomes such as stillbirth, preterm birth, large for gestational age and cesarean section [ 11 ].
Methods Study design and setting We performed a secondary data analysis on a large cohort of pregnant women enrolled as part of the Alliance for Maternal and Child Health Improvement (AMANHI) biorepository study. Between May 2014 and June 2018, the AMANHI study enrolled 10,001 pregnant women at 8 to < 20 weeks of gestational age from Bangladesh, Pakistan, and Tanzania. A detailed description of the study sites and characteristics of the cohort has been published previously [ 11 ]. Briefly, women were enrolled after confirming pregnancy and gestational age through ultrasound scan, and blood and urine samples were collected using standardized methods across the three sites at the time of enrollment, at 24–28 or 32–36 weeks of gestation, at the time of birth, and 6 weeks after delivery. Placental tissue and maternal and newborn stool samples were also collected at the time of birth. In addition, a paternal saliva sample was collected. At each contact, trained field workers collected detailed information on the health and care-seeking behavior of the pregnant woman using a standardized tool across all sites [ 11 ]. For HbA1c testing, trained phlebotomists collected 0.25–0.50 ml of maternal venous blood in a purple-top 7.5 ml EDTA tube (S-Monovette). HbA1c level was measured via a monoclonal antibody agglutination reaction using the Siemens DCA Vantage® Analyzer (Siemens, Washington, USA), with controls traceable to the International Federation of Clinical Chemistry (IFCC) reference materials and test methods for measurement of HbA1c. The measuring range of this HbA1c assay was 2.5% to 14% (4 mmol/mol to 130 mmol/mol) according to the manufacturer. Statistical analysis The primary exposure variable for this analysis was the HbA1c level measured at enrollment (8 to < 20 weeks of gestation), categorized based on the ADA guidelines into: less than 5.7%, 5.7–6.4% and ≥ 6.5% [ 12 ]. For the outcomes of interest, stillbirths were defined as babies who were born dead after 22 weeks of gestation. Among livebirths, preterm births were defined as livebirths before 37 weeks of gestation; large for gestational age (LGA) births were defined as liveborn with a birthweight above the 90th percentile based on INTERGROWTH-21st standards [ 13 ]. Mid-upper arm circumference (MUAC) was categorized as severely malnourished < 21 cm, moderately malnourished ≥ 21 cm & < 23 cm, and normal ≥ 23 cm. Body Mass Index (BMI) was categorized as underweight < 18.5, normal ≥ 18.5 & < 25.0, overweight ≥ 25.0 & < 30.0, and obese ≥ 30.0. The fourth outcome was delivery by cesarean section. For descriptive purposes, all continuous variables were expressed as mean ± SD and categorical variables as frequencies with percentages. Generalized binomial regression was used to estimate crude and adjusted risk ratios for HbA1c levels with the four predefined outcomes. We used stepwise regression with forward selection. The final multivariate model included all variables with a p-value less than 0.05. The model was adjusted for the following covariates: maternal age; education status; wealth quintile; parity; gravidity; MUAC; BMI; smoking and tobacco use; exposure to biomass; history of previous stillbirths, miscarriages, caesarean section, and preterm birth; hypertension; diabetes; anemia; gender of the fetus; place and mode of delivery; and history of antepartum and postpartum hemorrhage.
Women with missing HbA1c levels at enrollment, multiple births, abortive outcomes, and those missing outcome information were excluded from the analyses. All analyses were performed using Stata version 15.0. Patient and public involvement It was not appropriate or possible to involve patients or the public in the design, conduct, reporting, or dissemination plans of our research. Ethics The AMANHI study received ethical approval from the local and institutional ethics committees of all three sites. These included the Zanzibar Health Research Ethics Committee (formerly ZAMREC) (ZAMREC/0002/OCTOBER/013) for Tanzania, ICDDR,B (PR12073) and Johns Hopkins University (IRB 00004508) for Bangladesh, and Aga Khan University (2790-paeds-ERC-13) for Pakistan. In addition, the protocols for the biorepository study were also approved by the WHO Ethics Review Committee (RPC 532), and continuing approvals were sought yearly. Written informed consent was obtained from study participants, in which all sample handling and study procedures were explained in detail. HbA1c results were also shared with these participants.
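Although the published analysis was run in Stata, the generalized binomial (log-binomial) model used to obtain risk ratios can be sketched in Python as below. The data frame and column names are hypothetical, and log-binomial fits can fail to converge, in which case a modified Poisson model with robust errors is a common fallback:

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Hypothetical analysis frame: one row per singleton pregnancy
df = pd.DataFrame({
    "stillbirth": rng.binomial(1, 0.04, 2000),
    "hba1c_cat": rng.choice(["lt5.7", "5.7to6.4", "ge6.5"], 2000, p=[0.89, 0.10, 0.01]),
    "maternal_age": rng.normal(25, 4, 2000),
})

# A binomial family with a log link puts coefficients on the log-risk scale,
# so exponentiated coefficients are risk ratios rather than odds ratios
model = smf.glm(
    "stillbirth ~ C(hba1c_cat, Treatment('lt5.7')) + maternal_age",
    data=df,
    family=sm.families.Binomial(link=sm.families.links.Log()),
)
result = model.fit()
print(np.exp(result.params))  # adjusted risk ratios per category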
Results Pregnancy outcomes and maternal characteristics A total of 10,001 women were enrolled in the study across the three sites between May 2014 and June 2018. HbA1c levels at enrolment were missing for 293 pregnant women; 137 women had multifetal pregnancies; 132 pregnancies ended in abortion or miscarriage; and outcome information was missing for 61 women. These were excluded from the analysis. The remaining pregnancies ( n = 9,378) resulted in 9,039 liveborn babies (96.4%) and 339 stillbirths (3.6%). There were 892 preterm births (9.8%), 892 women underwent a cesarean section (9.8%), and 532 babies were born large for gestational age (5.9%). The site-wise distribution of these pregnancy outcomes is given in Fig. 1 . Table 1 summarizes the clinical and sociodemographic characteristics of the enrolled women. Most women were in the 20–29 years age bracket, with the lowest mean maternal age found in Bangladesh (23.46 ± 4.44 years). In total, 679 (7.1%) women were severely malnourished and 1,323 (13%) were moderately malnourished, with the highest percentage of malnourished women in Bangladesh. The Pakistan site had the highest proportion of women who had no formal education (52%, n = 1,284). History of miscarriage in a previous pregnancy was also highest in Pakistan (30%, n = 735). The mean HbA1c level at enrollment for the whole cohort was 5.2% (± 0.5%). It was highest for Bangladesh (5.31 ± 0.37), followed by Tanzania (5.22 ± 0.49) and then Pakistan (5.07 ± 0.58) (Fig. S 1 ). Using the ADA cutoff values for diabetes mellitus, 8,486 women (89%) had HbA1c levels below 5.7%, 946 (10%) had HbA1c levels between 5.7% and 6.4%, and 78 (1%) had levels ≥ 6.5%. The Tanzania site had the highest number of women with HbA1c levels ≥ 6.5% (42, 6.5%) (Table 2 ). Figures S 2 a, b and c show the HbA1c levels by categories of maternal age, BMI and MUAC across all study sites. Association of HbA1c levels with adverse pregnancy outcomes Table 3 and Fig. 2 (a, b, c and d) show the association between HbA1c levels at less than 20 gestational weeks and adverse pregnancy outcomes across all sites. In the unadjusted model, HbA1c levels ≥ 6.5% were found to be significantly associated with stillbirths (RR = 5.7, 95% CI 3.6, 9.1), preterm births (RR = 2.6, 95% CI 1.7, 4.1), LGA (RR = 6.0, 95% CI 4.2, 8.6) and C-section deliveries (RR = 2.1, 95% CI 1.3, 5.3). In the multivariate analysis, the adjusted relative risks (aRR) for HbA1c levels of 5.7–6.4% were 1.2 (95% CI 0.9, 1.7) for stillbirths; 1.2 (95% CI 0.9, 1.6) for preterm births; and 1.2 (95% CI 0.9, 1.6) for LGA. For HbA1c levels ≥ 6.5%, the aRR was 6.3 (95% CI 3.4, 11.6) for stillbirths; 3.5 (95% CI 1.8, 6.7) for preterm births; and 5.5 (95% CI 2.9, 10.6) for LGA (Table 4 ).
Discussion In our study, maternal HbA1c levels during early to mid-pregnancy (8 to < 20 gestational weeks) were associated with stillbirth, preterm birth, and LGA deliveries in Bangladesh, Pakistan and Tanzania. The majority of hyperglycemia during pregnancy remains undiagnosed in sub-Saharan Africa and South Asia [ 14 ]. Only half of the pregnant women in these regions receive the minimum recommended four antenatal care visits, and most births occur at home, typically attended by traditional birth attendants who lack the skills to manage the complications of hyperglycemia in pregnancy [ 15 ]. In this situation, point-of-care HbA1c testing during early to mid-pregnancy can serve as an optimal biomarker for identifying women at an increased risk of adverse outcomes. The ADA has previously suggested that HbA1c levels below 6.0% (42 mmol/mol) in mid-pregnancy are associated with the lowest risk of maternal complications [ 16 ]. In our study, 97.1% of the women had HbA1c levels below this cutoff, suggesting that women above this cutoff could be predisposed to adverse outcomes. We also found the ADA-defined category of 6.5% and above for the diagnosis of diabetes mellitus in the non-pregnant population to be associated with a higher risk of adverse pregnancy outcomes. Thus, a target of < 5.7% could be optimal during pregnancy in our population, provided it can be achieved without significant hypoglycemia. There have been several previous attempts to define an optimal HbA1c cut-off for predicting adverse pregnancy outcomes in healthy pregnant women without pre-existing diabetes. The HAPO (Hyperglycemia and Adverse Pregnancy Outcomes) study enrolled 5000 women across the globe and found that higher HbA1c levels at 24–32 weeks of gestation were associated with an increased risk of primary cesarean delivery, neonatal hypoglycemia, and large-for-gestational-age infants [ 17 ]. A longitudinal study from New Zealand conducted by Hughes et al., including 16,122 women predominantly of non-Hispanic white origin, demonstrated that a first-trimester HbA1c threshold of 5.9% was associated with an increased risk of adverse pregnancy outcomes, including major congenital anomaly, preeclampsia, perinatal death, large for gestational age and preterm birth [ 18 ]. This cut-off is also lower than the ADA-defined cut-off of ≥ 6.0%. Bender et al. reported a cut-off of ≥ 5.7% at the first prenatal visit which could be used to identify women at increased risk for adverse pregnancy outcomes, including preterm birth, small for gestational age infants, and admission to the neonatal intensive care unit [ 19 ]. In addition, Antoniou et al. proposed that an even lower cut-off of ≥ 5.5% was associated with an increased risk of adverse pregnancy outcomes, including pre-eclampsia, preterm birth, and LGA infants [ 20 ]. A large prospective nationwide birth cohort study from Japan of women with HbA1c < 6.5% (< 48 mmol/mol) reported that every 1% (11 mmol/mol) increase in HbA1c levels measured at less than 24 weeks of gestation was directly associated with a higher risk of adverse pregnancy outcomes [ 21 ]. Similarly, another study by Mane et al. in a multiethnic community reported that an early HbA1c level of 5.9%, unrelated to GDM, indicated an increased risk of macrosomia [ 22 ]. Our results require cautious interpretation, since the HbA1c categories used in our study were developed by the ADA for the diagnosis of diabetes in non-pregnant individuals.
In women with pre-existing diabetes, early pregnancy HbA1c levels directly correlate with pregnancy outcomes [ 22 – 24 ], but this association is still ambiguous in those without diabetes. It is also important to note that the predictive value of HbA1c for adverse outcomes may vary depending on factors such as the timing of HbA1c measurement during pregnancy. Carlsen et al. examined the association between HbA1c levels measured during mid-pregnancy and adverse outcomes in women with pre-existing diabetes. The study found that women with HbA1c levels in the upper quartile (but still within the generally accepted normal range) were at increased risk of preterm delivery and preeclampsia [ 25 ]. Similarly, Hong et al. found that predelivery HbA1c at term in a healthy pregnant population is a potential predictor of adverse pregnancy outcomes such as c-section deliveries [ 26 ]. Nielsen et al. found that HbA1c was lower in early pregnancy and further decreased in late pregnancy compared with age-matched nonpregnant women. A decrease of the upper normal limit of HbA1c from 6.3% before pregnancy to 5.6% in the third trimester of pregnancy was of significant clinical importance [ 27 ]. Thus, it may not be possible to compare studies performed at different time points during pregnancy. The relationship between HbA1c levels and adverse pregnancy outcomes also varies by ethnicity. Results from a multiethnic cohort study in Barcelona showed a significant association between high-normal-range HbA1c and the risk of macrosomia, but no associations between the HbA1c level and preterm birth or LGA could be established after adjustment for potential confounders [ 22 ]. These results could in part be attributed to the differences in ethnic origins of the study populations. The research by Hughes et al. was conducted in a relatively low-risk, predominantly white population, whereas the cohort in the current study was characterized by an entirely South Asian or African population hailing from very different socio-economic settings and health conditions [ 18 ]. Previous studies reported interracial variability in HbA1c levels and in pregnancy outcomes [ 23 , 28 ]. Studies indicate that 70–85% of women diagnosed with GDM according to Carpenter-Coustan or National Diabetes Data Group (NDDG) criteria can effectively control GDM through lifestyle modifications alone, while some pregnant women with hyperglycemia may require frequent glucose testing and continuous use of either oral or injectable medications [ 29 – 31 ]. Thus, adopting the early to mid-pregnancy point-of-care HbA1c test in community settings can enable timely management and targeted pregnancy-focused education for future risk reduction. The costs of identifying a greater proportion of the pregnant population as being at increased risk of adverse outcomes could be balanced by the efficient management of hyperglycemia and avoidance of the consequent healthcare costs associated with an LGA or preterm delivery. We studied a large population-based cohort of women in a setting with universal HbA1c testing in early pregnancy, minimizing the potential for selection bias. We used standardized cut-offs, which facilitates comparison with other studies. Our study adjusted for systematically identified confounders in a general pregnant population. Our population was well defined, and our sample size was appropriately calculated for a multivariable binary logistic regression analysis.
Our study population was multiethnic, and we believe our results would be generalizable to a similar population and care setting. Our study has some limitations. The ADA-defined cut-offs were developed specifically for diagnosing diabetes and monitoring blood glucose control in diabetic patients, not for predicting pregnancy outcomes in healthy pregnant women. Therefore, it is possible that these cut-offs may not be optimal for predicting pregnancy outcomes in this population. Although we carefully adjusted for potential confounders, we cannot completely rule out the possibility of residual confounding by other undocumented determinants, such as family history of diabetes, diagnosis of GDM in a prior pregnancy, gestational weight gain, dietary nutrition during gestation, and other socioeconomic parameters. Additionally, it is also widely recognized that hemoglobinopathies are more frequent in some nonwhite populations and that their presence might influence HbA1c levels [ 32 ]. Furthermore, in this study, pregnant women were not screened for GDM, which meant that it could not be included as a confounder, and so an influence on the results cannot be excluded.
Conclusion In conclusion, maternal HbA1c level is an independent risk factor for adverse pregnancy outcomes such as stillbirth, preterm birth, and LGA among women in South Asia and Sub-Saharan Africa. These groups may benefit from early interventional strategies. Further research is required to establish the diagnostic accuracy of the test compared with the gold standard in these settings.
Background Hyperglycemia during pregnancy leads to adverse maternal and fetal outcomes. Thus, strict monitoring of blood glucose levels is warranted. This study aims to determine the association of early to mid-pregnancy HbA1c levels with the development of pregnancy complications in women from three countries in South Asia and Sub-Saharan Africa. Methods We performed a secondary analysis of the AMANHI (Alliance for Maternal and Newborn Health Improvement) cohort, which enrolled 10,001 pregnant women between May 2014 and June 2018 across Sylhet-Bangladesh, Karachi-Pakistan, and Pemba Island-Tanzania. HbA1c assays were performed at enrollment (8 to < 20 gestational weeks), and epidemiological data were collected during 2–3 monthly household visits. The women were followed up until the postpartum period to determine the pregnancy outcomes. Multivariable generalized binomial regression models assessed the association between elevated HbA1c levels and adverse events while controlling for potential confounders. Results A total of 9,510 pregnant women were included in the analysis. The mean HbA1c level at enrollment was found to be the highest in Bangladesh (5.31 ± 0.37), followed by Tanzania (5.22 ± 0.49) and then Pakistan (5.07 ± 0.58). We report 339 stillbirths and 9,039 live births. Among the live births were 892 preterm births, 892 deliveries via cesarean section, and 532 LGA babies. In the multivariate pooled analysis, maternal HbA1c levels of ≥ 6.5% were associated with increased risks of stillbirth (aRR = 6.3, 95% CI 3.4–11.6), preterm birth (aRR = 3.5, 95% CI 1.8–6.7), and large for gestational age (aRR = 5.5, 95% CI 2.9–10.6). Conclusion Maternal HbA1c level is an independent risk factor for adverse pregnancy outcomes such as stillbirth, preterm birth, and LGA among women in South Asia and Sub-Saharan Africa. These groups may benefit from early interventional strategies. Supplementary Information The online version contains supplementary material available at 10.1186/s12884-023-06241-w. Keywords
Supplementary Information
Acknowledgements We thank all the mothers and children who participated in the study. Authors’ contributions The study was conceptualized and designed by principal investigators of the three sites (AB (Bangladesh), FJ (Pakistan), and SS (Tanzania)), AM, YS, and RB. All authors from the three sites (Bangladesh, Pakistan, and Tanzania) conducted the acquisition of data. MIN, FJ and JK wrote the first draft of the manuscript, which was reviewed by all authors. All the authors have read and approved the final manuscript. All authors had full access to all the data in the study and had final responsibility for the decision to submit for publication. Funding This work was supported by the Bill & Melinda Gates Foundation through a grant to the World Health Organization, Grant Number (64438). The funders have played no role in the drafting of the manuscript and the decision to submit for publication. Availability of data and materials Data are available from the corresponding author upon reasonable request. Declarations Ethics approval and consent to participate The AMANHI study received ethical approval from the local and institutional ethics committees of all three sites. These included the Zanzibar Health Research Ethics Committee (formerly ZAMREC) (ZAMREC/0002/OCTOBER/013) for Tanzania, ICDDR,B (PR12073) and Johns Hopkins University (IRB 00004508) for Bangladesh, and Aga Khan University (2790-paeds-ERC-13) for Pakistan. In addition, the protocols for the biorepository study were also approved by the WHO Ethics Review Committee (RPC 532), and continuing approvals were sought yearly. Written informed consent was obtained from study participants, in which all sample handling and study procedures were explained in detail. HbA1c results were also shared with these participants. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Pregnancy Childbirth. 2024 Jan 15; 24:66
oa_package/2e/54/PMC10789021.tar.gz
PMC10789022
0
Background Vesicular stomatitis virus (VSV), a typical non-segmented, negative-sense RNA virus, belongs to the Vesiculovirus genus of the family Rhabdoviridae [ 1 ]. Based on antibody neutralization tests and complement binding tests, VSV is divided into the New Jersey and Indiana serotypes. The genome of VSV encodes five viral proteins: nucleocapsid (N), phosphoprotein (P), matrix (M), glycoprotein (G), and large protein (L) [ 2 ]. The virus has a broad host range, including horses, cattle, swine, goats, rodents, and humans [ 3 ]. Infected horses, cattle, and pigs can develop oral vesicular epithelial lesions [ 3 ]. Vesicular stomatitis was first described after an outbreak in the USA in 1916. VSV is now considered endemic in parts of equatorial America and in the southwestern states of the USA, both of which witness outbreaks roughly every 10 years [ 4 ]. Currently, several aspects of VSV transmission are not well understood. The VSV genome is of manageable size and can accommodate 4–5 kb of exogenous genes [ 5 ]. It can express unrelated glycoproteins on the viral surface. In addition, VSV can infect a wide range of cell lines, where it rapidly replicates to produce large numbers of infectious viral particles. Therefore, VSV is considered a model virus that can serve as a good molecular tool and vaccine vector. The realization of these applications often requires the use of reverse genetic systems, which allow viruses to be modified at the genetic level. In 1995, Whelan et al. succeeded in rescuing infectious VSV particles from a full-length cDNA clone of the viral genome [ 6 ]. The successful establishment of the VSV reverse genetic system has opened up the possibility of manipulating the VSV genome and has therefore provided a basis for the development of VSV as a widely used research tool, vaccine platform, and oncolytic vector. Long non-coding RNAs (lncRNAs) are a class of non-coding RNAs longer than 200 nucleotides, which are potentially involved in the development of human diseases, such as cancer [ 7 ]. In viral infections, lncRNAs play an important role in the genetic stabilization of viruses and also affect the generation of innate immune responses in the host cells after viral infection [ 8 ]. For example, lncRNA-Acod1 (an lncRNA named after its nearest coding gene Acod1, aconitate decarboxylase 1) can be induced by a variety of viruses but not by type I interferon. It promotes viral replication in mouse and human cells [ 9 ]. Host lncRNAs also play an important role in the process of virus replication. It has been suggested that the pseudogene-derived lncRNA PCNAP1 and its ancestor PCNA may regulate hepatitis B virus replication and promote hepatocarcinogenesis [ 10 ]. In addition, host lncRNAs work together with other non-coding RNAs to influence viral replication. The various types of RNAs involved in the complex network of transcriptional regulation in organisms include competitive endogenous RNAs (ceRNAs), such as mRNAs, lncRNAs, and circRNAs, all of which can bind competitively to miRNAs and together influence the replication process of viruses. Most circRNAs are derived from pre-mRNAs, which are spliced to form circRNAs. CircRNAs can compete with lncRNAs and mRNAs for miRNA binding, resulting in regulation of lncRNAs or mRNAs. lncRNAs and mRNAs can sometimes share similar sequences; therefore, an lncRNA can act as a ceRNA, trapping miRNAs through these similar sequences and releasing the mRNA to perform its normal biological functions.
In a study by Li et al., qPCR of 144 clinical sputum specimens showed that lncRNA NRAV expression was significantly lower in respiratory syncytial virus (RSV)-positive patients than in uninfected patients and that NRAV overexpression promoted RSV production in vitro, suggesting that reduced NRAV during RSV infection is part of the host's antiviral response. Further studies revealed that NRAV competes with the mRNA of the vesicle transport protein Rab5c for the microRNA miR-509-3p in the cytoplasm, thereby promoting RSV vesicle transport and accelerating RSV proliferation [ 11 ]. To date, very few studies have examined lncRNAs in the context of VSV infection. In our experiment, VSV was used to infect BHK-21 cells, and the cells were collected 24 h later for transcriptome sequencing. The changes in the expression profiles of lncRNAs and mRNAs in VSV-infected host cells, and the associations between them, were analyzed to screen potential candidate lncRNAs and target genes in VSV infection. Given the scarcity of VSV-related lncRNA studies, the results of this experiment may provide new research ideas and drug targets for the prevention and treatment of VSV infection.
Methods Virus and cells BHK-21 cells were purchased from ATCC ( https://www.atcc.org/ ). The Indiana strain of VSV was maintained in our laboratory. Growth curves of VSV The growth curve of VSV at an MOI of 1.0 was determined. Briefly, BHK-21 cells (2 × 10 6 cells/well) were seeded in six-well cell culture plates and cultured in DMEM containing 10% fetal bovine serum. After 12 h, the cells were inoculated with VSV at an MOI of 1.0 and incubated at 37 °C with 5% CO 2 . The supernatants were collected into TRIzol solution at 4, 8, 12, 24, 36, and 48 h. Three biological replicates were set up for each group. The growth curve of VSV was determined using two methods: (1) the viral genome was quantified using reverse transcription real-time quantitative PCR; (2) the titer of the virus in the above supernatants was determined using plaque assays, and growth curves were plotted. Sample collection BHK-21 cells were seeded in six-well cell culture plates in DMEM containing 10% fetal bovine serum and cultured at 37 °C with 5% CO 2 for 12 h until the cell monolayer reached 95% confluence. The cells were inoculated with VSV at an MOI of 1.0, incubated for 1 h, supplemented with 2 ml of DMEM containing 1% fetal bovine serum, and then maintained at 37 °C with 5% CO 2 . Samples were collected 24 h after infection according to the growth curve of VSV. The experiment was performed with four biological replicates. RNA extraction, library construction, and sequencing Total RNA was extracted using the TRIzol reagent (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's protocol. RNA quality was assessed on an Agilent 2100 Bioanalyzer (Agilent Technologies, Palo Alto, CA, USA) and checked using RNase-free agarose gel electrophoresis. After total RNA was extracted, rRNAs were removed to retain mRNAs and lncRNAs. The enriched mRNAs and lncRNAs were then fragmented into short fragments using fragmentation buffer and reverse transcribed into cDNA using the NEBNext Ultra RNA Library Prep Kit for Illumina (NEB #7530, New England Biolabs, Ipswich, MA, USA). The purified double-stranded cDNA fragments were end-repaired, A-nucleotide overhangs were added, and the fragments were ligated to Illumina sequencing adapters. The ligation reaction was purified with AMPure XP Beads (1.0×) and PCR-amplified. The resulting cDNA library was sequenced on the Illumina NovaSeq 6000 platform by Gene Denovo Biotechnology Co. Reference genome mapping and transcriptome assembly To obtain clean reads, fastp (version 0.18.0) was used to filter the raw reads by removing reads containing adapters, reads containing more than 10% unknown nucleotides, and low-quality reads with more than 50% low-quality (Q-value ≤ 20) bases. Bowtie2 (version 2.2.8) was used to map the clean reads to a ribosomal RNA database, and rRNA-mapped reads were removed. An index of the Mesocricetus auratus reference genome was built, and the paired-end clean reads of each sample were mapped to it using HISAT2 (version 2.2.4) with other parameters set to default; the mapped reads were then assembled using StringTie (version 1.3.1) in a reference-based approach. Identification of potential lncRNA candidates Three software tools, CNCI (version 2.0), CPC (version 0.9-r2), and FEELnc (version 0.2), were used with default parameters to assess the protein-coding potential of novel transcripts. Transcripts predicted to be non-coding by the intersection of these results were retained as candidate lncRNAs, provided that they met the conditions of length > 200 bp and exon number > 2. 
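The intersection-and-filter step above can be made concrete with a short R sketch. This is a minimal illustration rather than the authors' pipeline; the input file names and the transcripts data frame (assumed to hold one row per novel transcript, with id, length, and exon_count columns parsed from the StringTie GTF) are hypothetical.

```r
# Intersect the non-coding calls from CNCI, CPC, and FEELnc, then apply
# the length and exon-number filters used in this study.
cnci   <- readLines("cnci_noncoding_ids.txt")    # hypothetical file names
cpc    <- readLines("cpc_noncoding_ids.txt")
feelnc <- readLines("feelnc_noncoding_ids.txt")

noncoding_ids <- Reduce(intersect, list(cnci, cpc, feelnc))

# 'transcripts': assumed data frame with columns id, length (bp), exon_count
candidates <- subset(transcripts,
                     id %in% noncoding_ids & length > 200 & exon_count > 2)
```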
Relationship analysis of the samples Correlation analysis was performed using the R statistical software. Principal component analysis (PCA) was performed with the R package gmodels ( http://www.rproject.org/ ). PCA is a statistical procedure that converts hundreds of thousands of correlated variables (transcript expression values) into a set of linearly uncorrelated variables called principal components, and it is widely used to reveal the relationships among samples. Analysis of expression The RSEM software was used to calculate the expression of the transcribed regions. Fragments per kilobase of transcript per million mapped reads (FPKM) values were calculated to quantify expression abundance and variation. The DESeq2 software was used to conduct differential expression analysis between the two groups, with statistical significance set at a false discovery rate (FDR)-adjusted p-value (padj) ≤ 0.05 and |log2(fold change)| > 2. Gene function enrichment analysis All differentially expressed genes (DEGs) were mapped to Gene Ontology (GO) terms in the Gene Ontology database ( http://www.geneontology.org/ ). GO terms significantly enriched in DEGs compared to the genome background were identified using the hypergeometric test. The calculated p-values were subjected to FDR correction, with FDR ≤ 0.05 as the threshold; the p-values were calculated using the R function phyper, and the qvalue package (version 2.2.2) was used to estimate the FDR. KEGG ( https://www.kegg.jp ) is a manually curated database resource that integrates various biological objects [ 33 ]. KEGG links genomic information with higher-order functional information, i.e., the information in the PATHWAY database [ 34 ]. The purpose of the KEGG pathway maps is to establish links from genes in the genome to gene products in the pathway [ 35 ]. Each KEGG pathway was analyzed for enrichment using the hypergeometric test, with the same calculation as in the GO analysis; pathways meeting the FDR ≤ 0.05 threshold were defined as significantly enriched pathways in the DEGs. Gene set enrichment analysis (GSEA) We performed gene set enrichment analysis using the GSEA software and MSigDB to identify whether sets of genes in specific GO terms or KEGG pathways showed significant differences between the two groups. Briefly, we input the gene expression matrix and ranked genes using the signal-to-noise metric. Enrichment scores and p-values were calculated with the default parameters. Gene function enrichment analysis of differentially expressed lncRNA targets LncRNAs regulate target genes by cis-, antisense-, and trans-regulation, and each mode of regulation has its own method of target gene prediction. The software RNAplex (version 0.2) was used to predict antisense targets. LncRNAs located within 100 kb upstream or downstream of a gene were assumed to be cis-regulators. For the trans-regulation analysis, a Pearson correlation > 0.999 between lncRNA and mRNA expression was used as the condition to screen target genes. The cis-, antisense-, and trans-targets that met the screening criteria were subjected to GO and KEGG enrichment analyses as described under 'Gene function enrichment analysis'. 
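The hypergeometric enrichment test applied above to GO terms and KEGG pathways can be written out in a few lines of R. The counts below are invented purely for illustration, and p.adjust with the Benjamini-Hochberg method is shown as a stand-in for the qvalue package actually used in the study.

```r
# Hypergeometric enrichment for one GO term (hypothetical counts):
#   N = annotated genes in the genome background
#   M = background genes annotated to this term
#   n = differentially expressed genes (DEGs)
#   k = DEGs annotated to this term
N <- 20387; M <- 350; n <- 1015; k <- 40

# P(X >= k) when drawing n genes from a background of N containing M hits
p <- phyper(k - 1, M, N - M, n, lower.tail = FALSE)

# Across all tested terms, correct for multiple testing (FDR <= 0.05)
p_all <- c(p, runif(99))                 # placeholder p-values for other terms
fdr   <- p.adjust(p_all, method = "BH")
enriched <- which(fdr <= 0.05)
```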
Construction of lncRNA/mRNA networks The relevant descriptions of the target genes were obtained by searching the NCBI database using GenBank accession numbers. To infer the functions of the differentially expressed lncRNAs and their target genes, we constructed a network based on the lncRNAs and mRNAs in Cytoscape (version 3.1.1). Validation of transcriptome sequencing results Reverse transcription real-time quantitative PCR (RT-qPCR) was performed to validate the genes identified by transcriptome sequencing. Ten randomly selected DEGs were used for RT-qPCR validation. Total RNA was extracted using the TRIzol reagent according to the manufacturer's instructions, and M-MLV reverse transcriptase (Bao Bioengineering Co., Ltd., Dalian, China) was used for cDNA synthesis. Sequence-specific primers were designed for the selected genes using the SnapGene software (Table 2 ). RT-qPCR was performed on the Roche LightCycler® 480 II real-time quantitative PCR system (Roche, Switzerland) in a 10 μL reaction volume containing 5 μL of TB Green® Premix Ex Taq™ II (Tli RNaseH Plus) (Bao Bioengineering Co., Ltd., Dalian, China), 0.3 μL each of the forward and reverse primers (10 μM), 1 μL of cDNA template, and 3.4 μL of ddH 2 O. The following reaction profile was used: 95 °C for 5 min, followed by 40 cycles of 95 °C for 10 s and 60 °C for 30 s. Melting curve analysis was performed to confirm specific amplification. The β-actin gene was used as the endogenous reference gene. RT-qPCR was performed in a 384-well plate, and each biological replicate was tested in triplicate. The relative expression values of the selected genes were calculated using the 2^−ΔΔCt method and normalized against the expression level of the β-actin gene. Statistical analysis All data were analyzed using IBM SPSS Statistics 26.0 and are presented as the mean ± SD. t-tests were used to compare means, and P < 0.05 was considered statistically significant.
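The 2^−ΔΔCt relative-quantification step can be illustrated with a small R sketch. The Ct values below are hypothetical triplicate means for a single target gene; this is the standard calculation, not the authors' analysis script.

```r
# Relative expression by the 2^-ddCt method, with beta-actin as reference.
ct_target_infected <- 22.1; ct_actin_infected <- 17.0   # hypothetical Ct values
ct_target_mock     <- 24.8; ct_actin_mock     <- 17.2

d_ct_infected <- ct_target_infected - ct_actin_infected  # normalize to beta-actin
d_ct_mock     <- ct_target_mock     - ct_actin_mock
dd_ct         <- d_ct_infected - d_ct_mock               # calibrate to the mock group

fold_change <- 2^(-dd_ct)   # about 5.7-fold upregulation in this made-up example
```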
Results Growth curves of VSV To determine the intracellular replication cycle, the one-step growth curve method was used to define the dynamics of VSV infection in BHK-21 cells. Because all stages of the VSV life cycle occur within 48 h, cells were infected at an MOI of 1.0 and supernatants were collected at 4, 8, 12, 24, 36, and 48 h. To ensure more reliable results, we used two methods to plot the one-step growth curves. First, the growth curve of VSV was determined using RT-qPCR to detect changes in the VSV genome copy number in the cell supernatant at 4, 8, 12, 24, 36, and 48 h. The growth curves showed that the amount of virus in the cell supernatant remained essentially stable for the first 8 h after infection, after which the viral load increased rapidly; the viral copy number peaked at 24 h and then remained relatively stable until 48 h (Fig. 1 a). The second method was to determine the titer of the virus in the supernatant, as described previously, using plaque assays and to plot the growth curve. The plaque assays showed similar results; that is, the viral titer remained essentially stable until 8 h, peaked at around 24 h, and subsequently remained stable until 48 h (Fig. 1 b). Based on these results, BHK-21 cells were infected with VSV at an MOI of 1.0 for 24 h and then collected for transcriptome sequencing. Evaluation of the transcriptome sequencing data The quality of the raw reads obtained by sequencing was assessed, and the reads were filtered using fastp (version 0.18.0) to obtain high-quality clean reads. After checking the sequencing error rate, 99.32–99.44% of reads were retained as clean reads for subsequent analysis. The Q30 percentages of the clean data for all samples were higher than 90.94%, and the GC content of the clean data ranged between 51.68% and 54.05%. The reads were compared against the rRNA database using the Bowtie2 software (version 2.2.8), and the percentage of reads retained after rRNA removal ranged from 99.90% to 99.94%. For further analysis, the high-quality clean reads were mapped to the reference Mesocricetus auratus genome (Ensembl release 104) using HISAT2 (version 2.2.4); approximately 51.67–73.40% of the clean reads were successfully mapped (Table 1 ). Because transcriptome sequencing fragments the mRNA before reverse transcription, the sequenced reads must be assembled to explore new genes and new splice variants. The mapped reads of each sample were assembled using StringTie (version 1.3.1) in a reference-based approach. In total, 20,387 genes were assembled from the sequencing data, of which 12,547 were successfully localized to the Mesocricetus auratus reference genome and 7840 were newly predicted genes. The transcripts obtained above were used for the subsequent analyses. Expression levels of the genes and differential expression analysis Using q-value ≤ 0.05 and |fold change| > 2 as the screening conditions for significant differences between the experimental and control groups, we identified 1015 differentially expressed mRNAs, of which 418 were upregulated and 597 were downregulated (Fig. 2 A). Cluster analysis of the differentially expressed mRNAs showed clear differences between the experimental and control groups in the heatmap (Fig. 2 B, Supplemental Table S 1 ). There were 161 differentially expressed lncRNAs, of which 109 were upregulated and 52 were downregulated (Fig. 2 C). Hierarchical clustering of the differentially expressed lncRNAs likewise showed clear separation of the test and control groups in the heatmap (Fig. 2 D, Supplemental Table S 1 ). 
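The differential expression screen reported above follows the DESeq2 analysis described in the Methods. The sketch below shows the general shape of such an analysis in R; the counts matrix and sample metadata are hypothetical, and the thresholds follow the Methods (padj ≤ 0.05, |log2 fold change| > 2).

```r
library(DESeq2)

# 'counts': gene-by-sample matrix of raw read counts (hypothetical)
# 'coldata': data frame with a 'group' factor: mock vs. infected
dds <- DESeqDataSetFromMatrix(countData = counts,
                              colData   = coldata,
                              design    = ~ group)
dds <- DESeq(dds)
res <- as.data.frame(results(dds))

# Screen for significant DEGs with the thresholds stated in the Methods
sig  <- subset(res, padj <= 0.05 & abs(log2FoldChange) > 2)
up   <- sum(sig$log2FoldChange > 0)   # upregulated genes
down <- sum(sig$log2FoldChange < 0)   # downregulated genes
```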
GO annotation and KEGG enrichment analysis of the differentially expressed mRNAs To provide a general description of the functions and pathways of the genes obtained through the RNA-Seq analysis, we aligned these sequences with the GO and KEGG databases for functional annotation and classification. The GO enrichment analysis showed that the significantly different mRNAs in the mock vs. test comparison were mainly related to biological regulation, metabolic process, signaling, transcription regulator activity, and membrane (Fig. 3 A, Supplemental Table S 2 ). We performed gene set enrichment analysis using the GSEA software and MSigDB to identify whether sets of genes in specific GO terms showed significant differences between the two groups. The results showed that olfactory receptor activity, positive regulation of signaling receptor activity, inward rectifier potassium channel activity, voltage-gated sodium channel complex, and cell signaling pathways differed significantly between the two groups. Similarly, terms affecting protein synthesis and metabolism, including serine-type endopeptidase inhibitor activity, endopeptidase inhibitor activity, and structural constituent of ribosome, showed large differences. In addition, the DNA packaging complex and nucleosome terms, which reflect chromosome structure and function, differed between the experimental and control groups (Fig. 3 B, Supplemental Table S 3 ). The KEGG database contains a wealth of pathway information that contributes to the understanding of the biological functions of genes at the system level. The KEGG enrichment analysis with q-value < 0.05 showed that the significantly different mRNAs in the test group were mainly enriched in the TNF, MAPK, p53, and Notch signaling pathways (Fig. 3 C, Supplemental Table S 4 ). The KEGG enrichment results of the GSEA showed that signaling pathways related to energy metabolism and disease, such as type I diabetes mellitus, graft-versus-host disease, autoimmune thyroid disease, glycolysis/gluconeogenesis, and oxidative phosphorylation, differed significantly between the two groups (Fig. 3 D, Supplemental Table S 3 ). GO annotation and KEGG enrichment analysis of the target genes of the lncRNAs To better understand the biological functions of the differentially expressed lncRNAs in the test and control groups, GO and KEGG enrichment analyses were performed on the lncRNA co-expressed target genes. GO analysis of the cis-target genes showed that the mRNAs co-expressed with the lncRNAs in VSV-infected BHK-21 cells were predominantly associated with the death-inducing signaling complex in the cellular component category (Fig. 4 A, Supplemental Table S 5 ). The antisense targets were enriched for the humoral immune response, the G protein-coupled receptor signaling pathway, and complement activation (lectin pathway) among biological processes (Fig. 4 B, Supplemental Table S 5 ). The trans-targets were enriched for protein binding, binding, transcription regulator activity, and DNA binding among molecular functions (Fig. 4 C, Supplemental Table S 5 ). 
With q-value < 0.05 as the condition for the KEGG enrichment analysis, the cis-target genes were mainly enriched in apoptosis, base excision repair, pathways in cancer, and the p53 signaling pathway (Fig. 4 D, Supplemental Table S 6 ). The antisense-target genes were mainly enriched in the complement and coagulation cascades, pathways in cancer, the JAK-STAT signaling pathway, and cytokine–cytokine receptor interaction (Fig. 4 E, Supplemental Table S 6 ). The trans-targets were mainly enriched in the TNF signaling pathway, pathways in cancer, the p53 signaling pathway, the MAPK signaling pathway, the NF-kappa B signaling pathway, the C-type lectin receptor signaling pathway, and apoptosis (Fig. 4 F, Supplemental Table S 6 ). lncRNA and mRNA co-expression network analysis An lncRNA/mRNA co-expression network was constructed using 22 differentially expressed lncRNAs and 21 target genes. As shown in Fig. 5 , some lncRNA targets, such as Ripk1 , Jag1 , Polk , Tiparp , and Il12rb2 , were located in the center of the network. Ripk1 -mediated innate immunity may play an important role in viral infections; the study by Xu et al. showed that SARS-CoV-2 can hijack the Ripk1 -mediated host defense response to promote its own reproduction [ 12 ]. Likewise, in Getah virus (GETV)-infected Vero cells, Tiparp expression was significantly downregulated, resulting in a significant increase in viral titer, whereas Tiparp overexpression significantly inhibited GETV replication; host Tiparp was thus shown for the first time to be a limiting factor for GETV replication [ 13 ]. Several viruses, such as herpes simplex virus type 2 and human immunodeficiency virus, regulate the expression of Fas ligands upon infection of cells, resulting in programmed cell death, which may be a flexible mechanism for viruses to enhance replication and evade the immune response [ 14 , 15 ]. Network modeling revealed that immune-related lncRNA targets were co-expressed with the lncRNAs, suggesting that the lncRNAs and mRNAs are mutually regulated in immune processes, such as viral infection. RT-qPCR validation of the differentially expressed genes The RNA-Seq analysis identified a total of 161 differentially expressed lncRNAs and 1015 differentially expressed mRNAs. Ten differentially expressed genes, namely Il12rb2 , F2 , Masp2 , Fas , Bmf , Polk , Fgf18 , Jag1 , Ripk1 , and Mcl1 , were randomly selected and assayed by quantitative real-time PCR. The expression of Il12rb2 , F2 , Masp2 , Fas , Polk , Fgf18 , and Jag1 was upregulated, whereas the expression of Bmf , Mcl1 , and Ripk1 was downregulated (Fig. 6 ). The RT-qPCR validation results were consistent with the RNA-Seq results, supporting the reliability of this experiment.
Discussion Vesicular stomatitis is a disease of hoofed animals characterized by blistering lesions on the oral mucosa and feet. VSV infection can cause harm and economic losses to livestock farming. However, VSV can be extremely useful in medical applications. As mentioned previously, many properties make VSV an excellent vaccine vector. In addition, it has shown encouraging antitumor activity in a variety of human cancer types. VSV is particularly attractive as an oncolytic agent due to its broad tropism, rapid replication kinetics, and amenability to genetic manipulation. In addition, VSV-induced tumor lysis can trigger potent antitumor cytotoxic T-cell responses to viral proteins and tumor-associated antigens, resulting in durable antitumor effects. Because of these multifaceted immunomodulatory properties, VSV has been extensively investigated for immunovirotherapy, alone or in combination with other anticancer modalities such as immune checkpoint blockade [ 16 ]. The transcriptome analysis of VSV-infected BHK-21 cells in this study provides insights into the mechanisms of VSV–host interactions as well as a theoretical basis for subsequent disease detection, vaccine and drug development, and more in-depth research. In this study, RNA-Seq was performed on BHK-21 cells infected with VSV for 24 h. To understand the regulatory functions of the differentially expressed lncRNAs, their co-expressed mRNAs were predicted and functionally analyzed; these co-expressed mRNAs were mainly enriched in the TNF, MAPK, and p53 signaling pathways and were also related to KEGG pathway categories such as cancer: overview, infectious disease: viral, immune system, and cell growth and death. Ten differentially expressed genes were randomly selected for validation, and the expression of all 10 genes was up- or downregulated in agreement with the RNA-Seq data, suggesting that they may play important roles in the process of viral infection. FGF18 selectively binds FGFR3 ; it is an essential mitogen for embryonic limb development and is required for lung development and implicated in lung disease. Studies have shown that FGF18 has roles in the development and injury repair of multiple organs [ 17 ]. FGF18 expression was significantly increased in VSV-infected cells, suggesting that viral infection influences cell fate. Ripk1 plays an important role in pathways such as the TNF, MAPK, and NF-kappa B signaling pathways, and it may be involved in apoptotic processes. Targeting Ripk1 in the treatment of neurological diseases may help inhibit multiple cell death pathways and ameliorate neuroinflammation [ 18 ]. In our study, the significant change in Ripk1 expression may reflect its involvement in the immune defense of the cells. In addition, Il12rb2 , F2 , and Masp2 are involved in the activities of the cellular immune system and play important roles in viral infections and cancer [ 19 – 21 ]. Both Il12rb2 and F2 may be implicated in innate immunity, and mannan-binding lectin–associated serine protease 2 co-activates the lectin pathway of the complement system in response to several viral infections [ 21 ]. Fas ( CD95/Apo-1 ) is a member of the "death receptor" family, a group of cell surface proteins that trigger apoptosis by binding to their natural ligands [ 22 ]. The p53 signaling pathway, in which Fas is engaged, is associated with cell cycle arrest, cellular senescence, and apoptosis. 
The TNF signaling pathway is closely related to inflammation as well as cancer, and it also activates various downstream pathways, including NF-κB and MAPK. It has been shown that BMF mediates fetal oocyte loss in mice [ 23 ]. In diabetic mice, inhibition of BMF transcription by hnRNP F is an important mechanism by which insulin protects diabetic renal proximal tubular cells (RPTCs) from apoptosis [ 24 ], suggesting that BMF may play an important role in the apoptotic process. POLK , which encodes the specialized translesion synthesis DNA polymerase κ, is known to perform accurate DNA synthesis on microsatellites, and transcriptional regulation of POLK involves the p53 tumor suppressor [ 25 ]. It has been shown that Polk (−/−) mice have a significantly increased frequency of mutations in the kidneys, liver, and lungs [ 26 ]. These results suggest that Pol κ is required for accurate translesion DNA synthesis and that significant changes in its expression may be associated with activation of cancer-related pathways. MCL1 is a pro-survival (antiapoptotic) protein commonly expressed in hematological tumors that plays an important role in their biology, either through dysregulation or due to its intrinsic importance to the cells of origin of the malignancy [ 27 ]. Jagged1 ( JAG1 ) is one of five cell-surface ligands that function primarily in the highly conserved Notch signaling pathway, and variations in JAG1 are associated with several types of cancers, including breast and adrenocortical cancers [ 28 ]. Seropositivity to VSV antibodies is generally low in the population; thus, pre-existing immunity against the vector is rare, and the viral sequences are unlikely to integrate into the host genome [ 1 ]. Therefore, the future of VSV as a vaccine vector is very promising. Preclinical studies have shown that VSV-based vaccines induce strong humoral and cellular immune responses after vaccination [ 29 ]. Several vaccines with good protection have previously been developed based on recombinant VSV; for example, recombinant VSV expressing the measles virus (MV) hemagglutinin (VSV-h) induces high titers of MV-neutralizing antibodies in the presence of MV-specific antibodies and provides good protection against subsequent MV challenge [ 30 ]. Our results show that VSV infection activated the cellular MAPK, p53, and TNF signaling pathways and other pathways related to activation of the immune system, as well as pathways related to cancer and apoptosis, providing further evidence that VSV is an excellent vaccine vector. To sum up, we performed functional analyses of the differentially expressed lncRNAs and mRNAs, as well as combined lncRNA–mRNA analyses; screened potential candidate lncRNAs and target genes in VSV infection; and analyzed the functions of these target genes and the pathways in which they participate. The results suggest that VSV infection activates the TNF, MAPK, NF-kappa B, and other immune-related pathways, and that some genes in these pathways, including Ripk1 , Il12rb2 , and Masp2 , were up- or downregulated, revealing that VSV infection causes alterations in the host metabolic network. Our study revealed the physiological changes in host cells during VSV infection, which contributes to a further understanding of the pathogenesis of the virus and provides a basis for the next steps in detection, prevention, and treatment, as well as a direction for further research on the interaction between VSV and the host. 
Pan et al. showed that the eukaryotic translation initiation factor 3, subunit i (eIF3i) may affect VSV growth by modulating the host antiviral response in HeLa cells [ 31 ]. Kueck et al. also found that a specific antiviral protein, TRIM69, interacts with and inhibits the function of the phosphoprotein (P) component of the VSV transcriptional machinery, thereby preventing the synthesis of viral messenger RNAs [ 32 ]. All of these results illustrate the mechanisms of VSV–host interactions at the molecular level, and our results provide a basis on which the interaction between VSV and the host can be further explored at the gene level. More importantly, VSV has a wide range of applications as a molecular tool and vaccine vector. The transcriptome sequencing results showed that VSV can effectively activate the immune system of host cells, which may mean that a live viral vector vaccine using VSV as a carrier can effectively stimulate the body to produce antibodies. Our laboratory has established a mature reverse genetic system for VSV, and the results of this experiment provide a theoretical basis for the construction of future vaccines using VSV as a vector. However, our results were obtained at the cellular level and require further verification in animal experiments at the level of the whole organism.
Conclusions In this study, we performed RNA-Seq on VSV-infected BHK-21 cells. The enriched differentially expressed genes were mainly connected to pathways related to apoptosis and tumorigenesis. Our results indicate that VSV infection causes alterations in the host metabolic network, which provides unique insights for further studies on the mechanisms of VSV–host interactions as well as a basis for the development of potent drugs and vaccines against VSV. More importantly, VSV activated pathways related to the cellular immune system, cancer, and apoptosis, providing evidence that VSV can be used as a live virus vaccine vector. Our results also provide a theoretical basis for studying VSV infection at the gene level, pointing the way to deeper mechanistic studies.
Background Vesicular stomatitis virus (VSV) is a typical non-segmented, negative-sense RNA virus of the genus Vesiculovirus in the family Rhabdoviridae . VSV can infect a wide range of animals, including humans, causing blister-like lesions of the oral epithelium. VSV is an excellent model virus with a wide range of applications as a molecular tool, a vaccine vector, and an oncolytic vector. To further understand the interaction between VSV and host cells and to provide a theoretical basis for the applications of VSV, we analyzed the expression of host differentially expressed genes (DEGs) during VSV infection using RNA-Seq. Results Our analyses identified a total of 1015 differentially expressed mRNAs and 161 differentially expressed lncRNAs in BHK-21 cells infected with VSV for 24 h compared with controls. Gene Ontology and Kyoto Encyclopedia of Genes and Genomes enrichment analyses showed that the differentially expressed lncRNAs and their target genes were mainly concentrated in pathways related to apoptosis, cancer, disease, and immune system activation, including the TNF, p53, MAPK, and NF-kappaB signaling pathways. The differentially expressed lncRNAs may modulate immune processes by regulating the genes involved in these signaling pathways. Ten randomly selected DEGs, namely, Il12rb2 , F2 , Masp2 , Mcl1, FGF18 , Ripk1 , Fas , BMF , POLK , and JAG1 , were validated using RT-qPCR. As predicted by the RNA-Seq analysis, these DEGs were either up- or downregulated, suggesting that they may play key regulatory roles in the pathways mentioned previously. Conclusions Our study showed that VSV infection alters the host metabolic network and activates immune-related pathways, such as MAPK and TNF. These findings provide unique insights for further study of the mechanisms of VSV–host interactions and, more importantly, provide a theoretical basis for VSV as an excellent vaccine vector. Supplementary Information The online version contains supplementary material available at 10.1186/s12864-024-09991-9. Keywords
Supplementary Information
Abbreviations lncRNA: Long noncoding RNA; TNF: Tumor necrosis factor; MAPK: Mitogen-activated protein kinase cascade; NF-κB: Nuclear factor kappa-B; Il12rb2: Interleukin-12 receptor subunit beta-2; F2: Coagulation factor II; Masp2: Mannan-binding lectin serine peptidase 2; Mcl1: Myeloid cell leukemia sequence 1; Fgf18: Fibroblast growth factor 18; Ripk1: Receptor-interacting serine/threonine kinase 1; Fas: Fas cell surface death receptor; Bmf: Bcl2 modifying factor; Polk: DNA polymerase kappa; Jag1: Jagged canonical Notch ligand 1 Acknowledgements We thank LetPub ( www.letpub.com ) for its linguistic assistance during the preparation of this manuscript. Authors’ contributions These studies were designed by WH, XF and FY, who performed the experimental analyses and prepared the figures and tables. WH analyzed the data and drafted the manuscript. WL and YL contributed to revisions of the manuscript. XS, JY, JQ, LZ, WZ, GC, WH and XH assisted in interpreting the results and revised the final version of the manuscript. All authors read and approved the final manuscript for publication. Funding This research was funded by the following bodies: the Natural Science Foundation of Gansu Province of China (Grant no. 21JR7RA020); a supporting grant of Lanzhou Veterinary Research Institute (Grant no. 1610312021004); the Science and Technology Major Project of Gansu Province (Grant no. 22ZD6NA001); the National Natural Sciences Foundation of China (Grant no. 32372984); project funds supported by Hebei Normal University of Science and Technology (Grant no. 2022YB009 and 2022YB010) for diagnostic techniques and epidemiology of infectious respiratory diseases in cattle; and the Natural Science Foundation of Hebei Province (Grant no. C2022407019) for the CpxRA two-component system regulation mechanism of polymyxin resistance in Salmonella Typhimurium. Availability of data and materials Data are available in the Sequence Read Archive (SRP465591) at https://dataview.ncbi.nlm.nih.gov/object/PRJNA1025338?reviewer=16snf0n1targvkujcsh52ubsca . The datasets used and analysed during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Genomics. 2024 Jan 15; 25:62
oa_package/0b/21/PMC10789022.tar.gz
PMC10789023
0
Background Human immunodeficiency virus (HIV) remains a significant global public health problem, with approximately 39 million people living with HIV (PLHIV) and 630,000 acquired immunodeficiency syndrome (AIDS)-related deaths in 2022 [ 1 ]. The introduction of antiretroviral therapy (ART) has been a game-changer in the fight against HIV. Highly effective drugs with improved pharmacokinetics and tolerability have enhanced the prognosis and quality of life for PLHIV, contributing to a 68.5% decline in AIDS-related deaths worldwide between 2004 and 2022 [ 1 ]. Since 2018, the World Health Organization (WHO) has recommended dolutegravir-based regimens as first- and second-line treatments for all PLHIV due to their high efficacy and favorable toxicity profile [ 3 ]. However, the use of these regimens has been primarily limited to high-income countries [ 4 ]. Venezuela has experienced the highest rate of ART interruptions among Latin American countries since 2016, and the situation worsened in 2017 and 2018 due to limited access to ART, with only 16% of patients receiving treatment by April 2018 [ 5 ]. However, through the efforts of nongovernmental organizations and the implementation of the Master Plan for the Strengthening of the Response to HIV, Tuberculosis, and Malaria in Venezuela [ 6 ], nationwide access to dolutegravir-based ART was resumed in February 2019. Despite this progress, monitoring of the efficacy and tolerability of this regimen in both treatment-experienced and newly diagnosed patients has been limited. Strict adherence to ART is crucial for maintaining an undetectable viral load, reducing the risk of progression to AIDS and of transmission of HIV to sexual partners [ 7 ]. However, epidemiological surveillance related to access to diagnosis, treatment, and viral suppression remains limited in Venezuela. The coronavirus disease 2019 (COVID-19) pandemic has had significant direct and indirect impacts on global health, particularly in low- and middle-income countries [ 8 , 9 ], including Venezuela, where the effects are compounded by a complex humanitarian crisis, weakened health systems, and concurrent epidemics such as HIV, malaria, and tuberculosis [ 5 ]. In some countries, the COVID-19 pandemic has disrupted HIV testing, care [ 9 – 11 ], and treatment [ 12 ] services, potentially leading to increased HIV-related deaths and transmission and jeopardizing progress towards the Joint United Nations Programme on HIV/AIDS 90-90-90 global target [ 13 ]. Moreover, although several studies have shown that COVID-19 vaccination is effective in preventing these adverse outcomes [ 14 – 17 ], vaccine hesitancy remains a global problem [ 18 ], particularly among higher-risk populations such as PLHIV. Concerns about vaccine safety have been identified as a primary factor contributing to COVID-19 vaccine hesitancy among this population [ 19 , 20 ]. Currently, there is limited information available on the impact of the COVID-19 pandemic on PLHIV in Venezuela, and the rate of COVID-19 vaccine hesitancy among this population is unknown. This study aims to assess the impact of the COVID-19 pandemic on, and the extent of COVID-19 vaccine hesitancy among, PLHIV seen at the outpatient clinic of the Infectious Diseases Department at the University Hospital of Caracas during the pandemic in Venezuela.
Methods Study design and patients A cross-sectional study was conducted between March 2021 and February 2022 at the outpatient clinic of the Infectious Diseases Department at the University Hospital of Caracas, Venezuela. This specialized outpatient clinic, which began operating in 1990, provides care for patients with HIV infection and is considered the largest in the country, having attended a total of 6,350 patients in 2022. The study included consecutive patients aged 18 years and over with either known HIV infection or a new positive HIV diagnosis. According to the Statistics Department of the University Hospital of Caracas, there were 5,346 outpatient consultations of PLHIV in 2021. To analyze this population with a 95% confidence level and a margin of error of 5%, a sample size of at least 359 participants was required. A non-probabilistic, consecutive sampling method was employed. Survey design and data collection A data collection form was designed to gather both sociodemographic and clinical data. Sociodemographic data included age, sex, state of origin, education level, marital status, occupation, monthly income in US$, sexual orientation, and whether the individual had a stable partner. Clinical data included time since HIV diagnosis, comorbidities, history of sexually transmitted diseases (STDs), history of AIDS-associated diseases, history of recommended vaccinations, current ART, previous ART regimens, post-ART viral load, post-ART CD4 + count, and weekly treatment adherence. Additionally, information related to COVID-19 and the COVID-19 pandemic was collected, including COVID-19 history and severity, COVID-19 vaccination status and reasons for COVID-19 vaccine hesitancy, problems related to compliance with consultations in the last 12 months, and ART refilling during the COVID-19 pandemic. Data analysis Participant data were summarized using descriptive statistics, including mean, standard deviation (SD), median, interquartile range (IQR), and/or frequency and percentage (%). The normality of numeric variables was assessed using the Kolmogorov-Smirnov test. Univariable analyses were performed using the Mann-Whitney U test for numerical variables with a non-normal distribution, Student’s t-test for those with a normal distribution, and Pearson’s chi-squared and Fisher’s exact tests for categorical variables. P values less than 0.05 were considered significant. Statistically significant variables identified in the univariable analyses were included in a binomial logistic regression using the enter method to identify factors associated with missed consultations. The best model was selected based on the highest percentage of correctly classified participants and goodness of fit, assessed using Nagelkerke's R 2 and the Hosmer-Lemeshow test. Statistical analyses were performed using the Statistical Package for the Social Sciences version 26 (International Business Machines Corporation, Armonk, NY, USA).
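The sample-size figure quoted above can be reproduced with the standard formula for estimating a proportion in a finite population. The R sketch below is an illustration only; the maximum-variability assumption p = 0.5 is ours, as the study reports only the resulting number.

```r
# Sample size for estimating a proportion in a finite population.
N <- 5346          # outpatient consultations of PLHIV in 2021
z <- qnorm(0.975)  # ~1.96 for a 95% confidence level
e <- 0.05          # margin of error
p <- 0.5           # assumed maximum variability (our assumption)

n0 <- z^2 * p * (1 - p) / e^2   # infinite-population size, ~384.2
n  <- n0 / (1 + (n0 - 1) / N)   # finite population correction
ceiling(n)                      # 359, the minimum sample size reported
```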
Results Sociodemographic and epidemiologic characteristics of PLHIV A total of 238 patients were analyzed, of whom 210 came for a follow-up appointment, including eight who were returning to follow-up after having abandoned treatment. The remaining 28 patients were attending their first appointment for ART initiation. The median age of the patients was 43 (IQR 31–55) years, with the majority being male (68.9%) and heterosexual (50%). Most patients came from the Capital District (45.8%) and Miranda state (45.4%). Almost half had a steady partnership; among these partners, 52.1% ( n = 60/115) had negative HIV serology, 40.9% ( n = 47/115) had positive HIV serology, and 7% ( n = 8/115) were unaware of their HIV status. Additional sociodemographic data may be found in Table 1 . Clinical information and ART history The most frequent comorbidity among patients was hypertension (11.8%, n = 28), followed by osteoporosis (5%, n = 12), asthma (3.4%, n = 8), and diabetes (2.5%, n = 6). Regarding STD history, syphilis was the most frequent (19.3%, n = 46), followed by human papillomavirus infection. A history of tuberculosis, both intra- and extrapulmonary, was the most frequent AIDS-associated disease (10.5%, n = 25). Regarding compliance with the vaccines recommended for PLHIV, the pneumococcal vaccine had the lowest coverage (Table 2 ). Regarding ART, excluding newly diagnosed patients who were starting treatment for the first time ( n = 28), almost all patients (96.1%, n = 202) were on ART. The majority of these patients (91%) were receiving the combination of tenofovir disoproxil fumarate, lamivudine, and dolutegravir (TLD), followed by the combination of abacavir, lamivudine, and dolutegravir (8.9%). Only eight patients with a known HIV diagnosis were off treatment because they had discontinued it. One hundred and fifty-five patients had been previously exposed to ART, with a mean of 1.5 (SD 1.5, range: 1–7) previous regimens. The most frequent previous regimen was the combination of two nucleoside reverse transcriptase inhibitors (NRTIs) plus a non-nucleoside reverse transcriptase inhibitor (NNRTI), in 51% of patients ( n = 79). Of the patients on ART ( n = 202), only 137 (67.8%) had available viral load data, and almost all of these had undetectable viral loads (< 200 RNA copies/mL; 95.6%, n = 131). The undetectability rate was 95.8% ( n = 115/120) for patients on the TLD regimen and 94.1% ( n = 16/17) for those on the abacavir, lamivudine, and dolutegravir regimen. Only 22 of the 202 patients (10.9%) had a CD4 + count available. Finally, adequate adherence was observed in most patients (84.1%, n = 170/202), but 32 patients (15.8%) reported skipping at least one dose per week (Table 3 ). COVID-19 vaccine hesitancy and the impact of the COVID-19 pandemic on PLHIV Only 43 patients (18.1%) reported having had severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection, most of them diagnosed clinically (60.5%) without a confirmatory test and experiencing mild clinical manifestations (83.7%). Regarding COVID-19 vaccination, more than half of the patients (55.5%, n = 132) had received at least one dose of the vaccine (Table 4 ); of these, 25 (18.9%) had received one dose, 92 (69.7%) two doses, and only 15 (11.4%) three doses. All of the unvaccinated patients (44.5%, n = 106) were hesitant to get vaccinated. 
Of these patients, 77 (72.6%) were unsure about getting vaccinated and wanted to consult with their doctor, while 29 (27.4%) preferred not to be vaccinated for various reasons: 23 (79.3%) expressed fear, four (13.8%) reported distrust, and four (13.8%) stated that they did not need it. Most patients with COVID-19 vaccine hesitancy were male (65.1%), younger than 44 years old (57.5%), employed (47.2%), had a monthly income between US$21 and US$100 (47.2%), and had been diagnosed with HIV for less than one year (33%), but there were no statistically significant differences compared to the vaccinated group. The proportion of patients with an HIV diagnosis of less than one year was higher among those with vaccine hesitancy (33%) than among vaccinated patients (18.9%), whereas the proportion of patients with a diagnosis of 16 or more years was higher among vaccinated patients (31.1%) than among those with vaccine hesitancy (17%); however, these differences were not statistically significant ( p = 0.053). Although the proportion of patients with comorbidities was higher in the vaccinated group (40.2%) than in the vaccine-hesitancy group (29.2%), there were no statistically significant differences between the two groups ( p = 0.08) (Table 5 ). Regarding the impact of the COVID-19 pandemic on PLHIV, only patients who came for control appointments were analyzed ( n = 210). Of these, 11.9% ( n = 25/210) reported having missed at least one medical consultation due to COVID-19 pandemic restrictions: 92% ( n = 23/25) missed their appointment due to restricted mobility, 8% ( n = 2/25) due to fear of becoming infected, and one because they were outside the country. The median number of consultations in the last 12 months for these patients was 1 (IQR 1–2) per patient. A binomial logistic regression model was fitted to evaluate factors associated with missed consultations ( p = 0.01, Nagelkerke R 2 = 0.34, Hosmer–Lemeshow test = 0.457); older age was a risk factor for missing consultations (OR = 1.058, 95% CI = 1.009–1.11, p = 0.019), while not having an alcohol habit was identified as a protective factor against missing consultations (OR = 0.012, 95% CI = 0.001–0.108, p < 0.001) (Table 6 ). Only 3.3% of patients reported interruption of their ART refills due to the COVID-19 pandemic, all of whom were unable to collect their medication because of pandemic restrictions.
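As an illustration of how odds ratios of this kind are obtained, the sketch below shows the general form of such a binomial logistic regression in R (the study itself used SPSS; the data frame and variable names here are hypothetical).

```r
# Hypothetical data frame 'plhiv', one row per patient:
#   missed  - 1 if at least one consultation was missed, 0 otherwise
#   age     - age in years
#   alcohol - 1 if the patient reports an alcohol habit, 0 otherwise
model <- glm(missed ~ age + alcohol, data = plhiv, family = binomial)

exp(coef(model))      # odds ratios, e.g. the OR per additional year of age
exp(confint(model))   # 95% confidence intervals for the odds ratios
```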
Discussion This study describes the epidemiological and clinical characteristics of PLHIV at the University Hospital of Caracas, Venezuela, and estimates the impact of the COVID-19 pandemic on disruptions in care and ART, as well as COVID-19 vaccine hesitancy. The majority of patients were young, employed men, consistent with previous reports [ 21 – 23 ]. Nearly half had a tertiary level of education, yet three-quarters earned less than US$100 per month, insufficient for access to basic food necessities [ 24 ]. Most patients were in heterosexual relationships [ 21 , 22 ], with almost half reporting a stable partner; nearly half of these partnerships were HIV serodiscordant, similar to other studies [ 25 , 26 ]. Some studies reported substantial interruptions in pre-exposure prophylaxis (27.8–56%) during COVID-19 restrictions [ 27 – 30 ]; however, Venezuela does not have a pre-exposure prophylaxis program. The impact of the COVID-19 pandemic on the diagnosis, care, and treatment of HIV infection has been extensively explored in other countries [ 31 – 34 ], where, in general, new HIV diagnoses decreased by 12–45% [ 35 – 41 ]. In this study, a quarter of patients were recently diagnosed (< 1 year), emphasizing the importance of maintaining diagnostic and care activities during the COVID-19 pandemic. Adherence to ART and undetectability rates were similar to those reported in other Latin American countries such as Peru [ 42 ], Brazil [ 30 , 43 ], and Argentina [ 44 ], and globally during the COVID-19 pandemic [ 41 , 43 , 45 ]. Despite WHO recommendations for continuity of HIV services during the COVID-19 pandemic [ 46 ], care and treatment have faced challenges worldwide. Unlike other countries [ 41 , 47 – 49 ], we did not use telemedicine due to barriers such as lack of equipment and inconsistent internet access; instead, we maintained face-to-face consultations with strict biosecurity measures and provided ART refills for longer periods, as documented in other countries [ 50 ]. The COVID-19 pandemic has variably impacted clinical appointments for PLHIV, with many patients missing HIV clinical visits, support meetings, follow-up tests, and counseling services [ 51 – 56 ]. In a multi-country survey, 55.8% of PLHIV were unable to meet their HIV physician face-to-face in the past month [ 51 ]. In Mexico, 44.3% of patients experienced follow-up failures due to structural barriers such as transportation difficulties and distance to the hospital [ 57 ]. In Peru, 37.2% reported difficulty accessing routine HIV care, most commonly because of the temporary closure of their primary HIV clinic [ 42 ]. A study in Atlanta (GA, USA) found that 19% of PLHIV had missed a scheduled HIV care appointment in the previous 30 days [ 58 ], while another study among men who have sex with men in 20 countries reported that 20% of PLHIV were unable to access their HIV care provider, even via telemedicine [ 27 ]. In contrast, this study documented a lower rate of missed medical consultations (11.9%), possibly because consultation services were not interrupted for prolonged periods and services became more regular after the first year of the COVID-19 pandemic, as documented in other studies [ 41 , 55 ]. Older age was significantly associated with missed visits, contrasting with pre-pandemic studies in which missed visits were associated with younger age [ 59 , 60 ]. 
Other COVID-19 pandemic-related factors, such as inadequate transport, police abuse, insufficient transportation funds [ 61 ], lockdowns [ 62 ], limited access to health services, reduced income, inability to afford travel to health facilities or facemasks, fear of COVID-19 [ 54 ], and fear of visiting hospitals [ 55 ], could have had a greater impact on the life stability of older PLHIV. Such instability has been correlated with a higher risk of missed medical appointments compared to peers with less life chaos [ 63 ]. During the COVID-19 pandemic, ART-producing pharmaceutical companies faced challenges with international shipping due to border restrictions, transportation delays, increased lead times, and rising costs, contributing to global ART disruptions [ 64 , 65 ]. However, surveys and observational studies have shown variability in ART refill interruptions. A global study in 20 countries reported that more than half of participants were unable to access ART refills remotely, with the least access in Belarus, Brazil, Kazakhstan, Mexico, and Russia [ 27 ]. In Ethiopia, 27.4% of participants missed visits for refills [ 54 ], while in Peru, 24% reported difficulty picking up their ART due to cancelled appointments or lack of transportation [ 42 ]. A study in Italy documented a 23.1% decrease in dispensed ART during early 2020 compared to 2019, although this trend normalized after the first few months of the COVID-19 pandemic [ 41 ]. Similarly, a study in Haiti observed an 18% decline in ART refills [ 50 ], while in Brazil, only 17.2% of participants reported an impact on ART refills [ 30 ]. In Taiwan, only 9.1% of PLHIV self-reported interrupted ART [ 48 ], while this study found that only 3.3% experienced interruption as a result of the COVID-19 pandemic, similar to reports from Brazil [ 43 ], Argentina [ 44 ], Northern Italy [ 41 ], and Indonesia [ 55 ] (4.2%, 3.9%, 3.2%, and 3%, respectively). A multi-country survey among PLHIV reported that 3.6% were unable to refill their ART [ 51 ], while a similar study in China found that 2.7% experienced interruption, with a median duration of 3 (IQR 1–6) days and a higher risk for those with a history of treatment abandonment [ 66 ]. The low rate of interruption in this study may be due to the continued operation of the ARV dispensary during the COVID-19 pandemic and the strategy of providing three months of ART at a time, as implemented in other countries [ 50 , 66 ]. Thus, evidence of HIV care disruption and ART interruption during the COVID-19 pandemic was concentrated primarily in the early months and varied by region depending on the measures implemented by each country. Despite these interruptions (self-reported or electronically recorded), adherence was maintained in several studies [ 30 , 42 , 43 ]. Although studies have shown discrepancies, PLHIV appear to be at high risk for adverse clinical outcomes from COVID-19, with some evidence of higher hospitalization and mortality rates [ 67 – 69 ]. Despite the effectiveness of COVID-19 vaccination in preventing these outcomes [ 14 – 17 ], almost half (44.5%) of the participants in this study expressed vaccine hesitancy due to fear and mistrust, similar to reasons reported in Latin America [ 70 ], the USA, India, and China [ 71 – 73 ]. The rate of COVID-19 vaccine hesitancy among PLHIV in this study was lower than in Nigeria (57.7%) but higher than in India (38.4%) [ 73 ], France (28.7%) [ 74 ], China (27.5%) [ 75 ], Trinidad and Tobago (39%) [ 76 ], Brazil (23.9%) [ 51 ], and other Latin American countries (12.8%) [ 70 ]. 
Most patients with COVID-19 vaccine hesitancy were low-income young men with recent HIV diagnoses, consistent with other studies [ 19 ]. The highest proportion of COVID-19 vaccine hesitancy was found among those with recent diagnoses (< 1 year), possibly due to a lack of knowledge about COVID-19 vaccination in the context of HIV infection [ 77 – 79 ]. This highlights the importance of designing education strategies focused on COVID-19 vaccination for people with HIV. This study has several limitations. Firstly, it is based on a non-probabilistic sample from a single center, which may accurately represent patients attending this specific center but may not reflect the broader population of PLHIV in Venezuela, even though the institution is the primary referral center for PLHIV in the country. Furthermore, the calculated sample size was not reached, which restricts the statistical power to detect certain effects or differences. Secondly, the cross-sectional design limits causal inference and provides only a snapshot of challenges during a specific period of the COVID-19 pandemic; in addition, information was collected at different times throughout the study period, so perceptions (e.g., vaccine willingness) may have been influenced by the rapidly evolving pandemic. Thirdly, the limited epidemiological surveillance of HIV in Venezuela over the past decade posed significant challenges in acquiring pre-pandemic data on newly diagnosed PLHIV, adherence, and undetectability rates, making a comparative analysis infeasible. Fourthly, while weekly ART adherence was adequate in most patients, approximately one-third were unable to undergo viral load testing, which can be explained mainly by the limited availability of testing in the public healthcare sector and its high cost in the private sector [ 5 ]. Fifthly, limited access to CD4 + lymphocyte counting in the public system, coupled with the inability of low-income PLHIV to afford private testing, resulted in a scarcity of CD4 + T-lymphocyte count results, which hindered our ability to correlate this value with other variables. Sixthly, some medical histories were incomplete or inadequate and were supplemented with direct patient questioning, introducing potential recall bias. Seventhly, while data on COVID-19 vaccine hesitancy are available, the specific reasons why patients preferred not to be vaccinated were not thoroughly explored, and the small sample size limits the generalizability of these results. Finally, it was not possible to accurately calculate ART interruptions and missed scheduled consultations from the available records due to data quality issues.
Conclusions The disruption of HIV services during a public health crisis such as the COVID-19 pandemic is an important problem for healthcare systems and policymakers to address, as it may exacerbate disparities in the HIV treatment cascade in settings with a high HIV burden or among vulnerable populations. This study found a limited impact of the COVID-19 pandemic on attendance at consultations and on interruptions of ART refills among PLHIV seen at the University Hospital of Caracas, Venezuela. However, given the limitations of this study, it is essential to conduct comprehensive, multicenter studies with larger sample sizes that include regions with less access to the continuum of care for PLHIV.
Background The coronavirus disease 2019 (COVID-19) pandemic has disrupted multiple health services, including human immunodeficiency virus (HIV) testing, care, and treatment services, jeopardizing the achievement of the Joint United Nations Programme on HIV/AIDS 90-90-90 global target. While there are limited studies assessing the impact of the COVID-19 pandemic on people living with HIV (PLHIV) in Latin America, there are none, to our knowledge, in Venezuela. This study aims to assess the impact of the COVID-19 pandemic among PLHIV seen at the outpatient clinic of a reference hospital in Venezuela. Methods We conducted a cross-sectional study among PLHIV aged 18 years and over seen at the Infectious Diseases Department of the University Hospital of Caracas, Venezuela, between March 2021 and February 2022. Results A total of 238 PLHIV were included in the study. The median age was 43 (IQR 31–55) years, and the majority were male (68.9%). Most patients (88.2%, n = 210) came for routine check-ups, while 28 (11.3%) were newly diagnosed. The majority of patients (96.1%) were on antiretroviral therapy (ART), but only 67.8% had a viral load test, with almost all (95.6%) being undetectable. Among those who attended regular appointments, 11.9% reported missing at least one medical consultation, and 3.3% reported an interruption in their ART refill. More than half of the patients (55.5%) had received at least one dose of the COVID-19 vaccine, while the rest expressed hesitancy to get vaccinated. Most patients with COVID-19 vaccine hesitancy were male (65.1%), younger than 44 years (57.5%), employed (47.2%), and had been diagnosed with HIV for less than one year (33%); however, no statistically significant differences were found between vaccinated patients and those with COVID-19 vaccine hesitancy. Older age was a risk factor for missing consultations, while not having an alcohol habit was identified as a protective factor. Conclusion This study found that the COVID-19 pandemic had a limited impact on adherence to medical consultations and interruptions in ART among PLHIV seen at the University Hospital of Caracas, Venezuela. Keywords
Abbreviations HIV: Human immunodeficiency virus; PLHIV: People living with HIV; AIDS: Acquired immunodeficiency syndrome; ART: Antiretroviral therapy; WHO: World Health Organization; COVID-19: Coronavirus disease 2019; STD: Sexually transmitted diseases; SD: Standard deviation; IQR: Interquartile range; TLD: Tenofovir disoproxil fumarate, lamivudine, and dolutegravir; NRTI: Nucleoside reverse transcriptase inhibitors; NNRTI: Non-nucleoside reverse transcriptase inhibitors; SARS-CoV-2: Severe acute respiratory syndrome coronavirus 2. Acknowledgements Not applicable. Author contributions DAFP, FSCN, JLFP, NACÁ, and MEL conceived and designed the study. DAFP, DLMM, ÓDOÁ, ALM, VLV, MDMB, YC, LV, and MFA collected clinical data. DAFP, FSCN, JLFP, NACÁ, and MVMR analyzed and interpreted the data. DAFP, FSCN, JLFP, NACÁ, DLMM, ÓDOÁ, MDMB, and CMRS wrote the manuscript. FSCN, MC, JC, RNG, MCR, and MEL critically reviewed the manuscript. All authors reviewed and approved the final version of the manuscript. Funding This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Data availability All data generated or analyzed during this study are included in the article. Declarations Ethics approval and consent to participate The study protocol was reviewed and approved by the Bioethics Committee of the University Hospital of Caracas (CBE-HUC-17/2021). The study was conducted in accordance with the ethical principles for medical research in humans of the Declaration of Helsinki and the Venezuelan regulations for this type of research, with the corresponding signed informed consent of all patients. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Infect Dis. 2024 Jan 15; 24:87
oa_package/e8/f7/PMC10789023.tar.gz
PMC10789024
38221613
Introduction The burden of disease from HIV remains high, with an estimated 37.7 million individuals globally living with HIV as of 2020, and over 1.5 million people newly infected in 2020 alone [ 1 ]. Sub-Saharan Africa (SSA) carries the greatest burden of HIV. Despite significant improvements in HIV-related morbidity and mortality following the introduction of highly active antiretroviral therapy (HAART), people living with HIV (PLWH) remain vulnerable to other co-morbidities such as diabetes mellitus [ 2 – 6 ]. Diabetes mellitus (DM), an illness resulting from either insulin resistance or insulin deficiency that leads to abnormally elevated blood glucose levels, is one of the most prevalent non-communicable diseases (NCDs) worldwide and in Africa [ 7 ]. According to previous reports, the prevalence of DM is estimated to range from 2.6% in Togo to 22.5% in Niger, depending on the region [ 8 ]. DM accounted for 7.0% of NCD-attributable mortality in SSA between 1990 and 2015 [ 9 , 10 ]. In response, the World Health Organization (WHO) has recommended that primary healthcare (PHC) systems provide screening and treatment for NCDs and that HIV services be integrated with PHC [ 11 ]. However, there is limited evidence on the prevalence of diabetes among PLWH at the primary care level in SSA, or on how health systems are responding to comorbidities among PLWH. Although world leaders pledged at the United Nations General Assembly to address NCDs, the issue of HIV-NCD multimorbidity has not been specifically addressed. According to the WHO Global Action Plan for the Control of NCDs 2013–2020, nations should strengthen their health systems and combat NCDs by implementing universal health coverage (UHC) and person-centered PHC [ 12 ]. Despite attempts by some African countries, such as Ghana and Nigeria, to attain UHC through universal health insurance programs, most populations in Africa access healthcare at government-funded health facilities or pay out-of-pocket. People living with multiple morbidities are further burdened by the multiple visits and services this requires of them. Nevertheless, there has been some progress toward attaining Sustainable Development Goal target 3.4, which calls for reducing premature mortality from NCDs by one-third by 2030 through prevention and treatment, and for promoting mental health and well-being [ 13 , 14 ]. People living with HIV have benefitted from recommendations such as the WHO Package of Essential Noncommunicable Disease Interventions (WHO PEN), which consolidates guidance on the management of co-infections and comorbidities, focusing on the screening, prophylaxis, treatment, and timing of ART for these conditions [ 15 ]. The WHO STEPwise approach to chronic disease risk factor surveillance (STEPS) attempts to quantify the burden of diabetes in SSA. STEPS is a straightforward, standardized strategy for gathering, analyzing, and sharing information on key NCD risk factors within countries: a questionnaire-based assessment supplemented by physical and biochemical measurements, implemented in a coordinated manner by national authorities [ 16 ]. Several risk factors for DM in PLWH have been identified in previous studies.
These include a family history of diabetes, weight gain, lipodystrophy, advanced age, and hepatitis C infection among PLWH receiving protease inhibitor therapy [ 17 ]. A noteworthy correlation between diabetes and the duration of antiretroviral therapy has also been identified, particularly with the use of protease inhibitor drugs [ 18 , 19 ]. The extended time spent living with HIV, persistent low-grade inflammation, oxidative stress, and mitochondrial damage caused by HIV medication put PLWH at greater risk of developing diabetes [ 20 ]. It has been observed that PLWH taking HAART have a four-fold increased risk of developing diabetes mellitus compared with people without HIV [ 6 , 21 , 22 ]. The underlying mechanisms are believed to be either a direct result of drug side effects or an indirect consequence of immune reconstitution and the subsequent 'return to health' [ 19 ]. These mechanisms permit and amplify the impact of conventional risk factors that are independent of HIV infection, such as advanced age, obesity, smoking, a sedentary lifestyle, family history, and genetic predisposition [ 23 – 27 ]. The direct effects of HIV infection on several organ systems, the toxicity of ART, polypharmacy, social isolation, stigma, and other poorly characterized risk factors are only a few of the many factors that likely affect the health of HIV-infected adults. The added burden of another chronic condition, such as DM, adversely affects quality of life [ 28 ]. In HIV patients, diabetes has been linked to an increased risk of hospitalization, adverse cardiovascular and renal outcomes, and progression to end-stage renal disease, leading to reduced life expectancy and higher treatment costs for this population [ 5 ]. Although reliable information on NCDs is scarce in some resource-constrained clinics and NCD care in clinics is often suboptimal, the results of our study will highlight these gaps and help guide policymakers and stakeholders in synchronizing actions to improve the quality of life of patients with multimorbidity. This study reports the prevalence of diabetes mellitus (DM) among HIV-infected patients receiving primary health care services at primary care clinics in Harare, Zimbabwe, and its associated factors. The findings are intended to identify potential risk factors for DM/HIV comorbidity and potential interventions to improve the integration of care.
Methodology Study setting The study setting was primary healthcare (PHC) clinics in Harare, the capital city of Zimbabwe. These clinics provide primary healthcare services for a population of approximately 2.5 million people in urban and rural communities [ 29 ]. Eight (8) primary health clinics were selected to represent urban and rural communities: six of the facilities were located in urban areas and two were in rural areas. The majority of PHC facilities in the region are funded by the government or non-governmental organizations (NGOs) and provide free access to all people requiring primary care services. These PHC clinics provide a wide range of services, such as HIV screening, diabetes and hypertension management, acute and chronic condition management, and health promotion. During each visit, routine blood pressure, weight, temperature, and urine composition measurements were recorded. According to the national guidelines, patients with classic symptoms can be diagnosed with diabetes mellitus (DM) by measuring their plasma glucose levels. The fasting plasma glucose (FPG) test measures blood glucose after a minimum of eight hours without food or liquids other than water. Two further blood tests were used to screen for diabetes: the random plasma glucose test, which measures blood glucose regardless of the time since the last meal, and the HbA1c test, which reflects average blood glucose over the previous two to three months. Diabetes was diagnosed on the basis of an abnormal glycosylated hemoglobin test or two abnormal fasting glucose readings. As with most diagnostic tests, a result indicative of diabetes should be repeated for confirmation when laboratory error is possible; an exception is a patient exhibiting classic symptoms of hyperglycemia or a hyperglycemic crisis, in whom a single unequivocal result may suffice. Professional nurses and medical officers provided the clinical services. Most continuing clinical care is provided by registered general nurses who have received additional training in HIV management. Doctors offered supplemental care on a consultation basis, with the assistance of the hospital's radiology and laboratory departments for diagnostic support. Both paper-based and electronic files are used to store patient data, typically linked by unique patient identification numbers. At every clinic visit, patient information on the stage of HIV disease, incident AIDS-related and non-AIDS illnesses, prescribed medications, and laboratory test results was recorded. Study design We conducted a cross-sectional descriptive study at primary healthcare facilities in Harare, Zimbabwe. A mixed-methods approach was used: quantitative data were collected through extraction from patient records, and qualitative data through semi-structured patient interviews. Data extracted from patient records included participant demographics, weight, height, past medical history, antiretroviral medication history, and current illnesses, including communicable and non-communicable diseases. Study population The population of interest was adult HIV-positive patients receiving primary care services in Harare, Zimbabwe. Inclusion criteria: HIV-positive adults aged 18 years and older; registered as a patient at the study site; receiving primary health services. Exclusion criteria: HIV-negative; unable to provide consent;
not registered as a patient at a study site. Sample size and sampling procedure Purposive sampling was used to recruit participants who met the inclusion criteria. The minimum sample size was determined using the single population proportion formula for estimating the prevalence of diabetes in the HIV patient population: n = Z²p(1 − p)/d², where Z is the critical value of the normal distribution at the chosen confidence level (1.96 at 95% confidence), p is the expected prevalence of type 2 diabetes mellitus (T2DM)/HIV comorbidity, and d is the desired absolute precision (0.05). Taking p = 0.5, the value that maximizes the required sample, gives n = (1.96)² × 0.5 × 0.5/(0.05)² = 384.16. Assuming that 10% would decline to participate in the study, this was inflated to 384.16 × 1.1 ≈ 423. Therefore, a minimum sample size of 423 was used in this study (a minimal code sketch of this calculation is given after the data-analysis description below). Eligible participants were identified from the clinic register and recruited from each of the eight clinics until the sample size was met. The data were collected between January 2022 and March 2023. Clinics 1 to 6 served urban communities, while Clinics 7 and 8 were situated in rural/peri-urban communities. Clinic 1 had 68 participants, Clinic 2 had 45, Clinic 3 had 53, Clinic 4 had 62, Clinic 5 had 45, Clinic 6 had 70, and the rural/peri-urban Clinics 7 and 8 had 52 and 55 participants, respectively. A total of 450 participants were recruited for the study. Data collection methods Both secondary and primary data collection methods were used in this study, as explained in the following sections. Participants were allocated a study code, and no names were recorded. Reviewing clinical records Body mass index (BMI) was calculated from the data extracted from the clinical records. BMI was recorded and classified as underweight (< 18.5 kg/m²), normal (18.5–24.9 kg/m²), overweight (25.0–29.9 kg/m²), or obese (≥ 30 kg/m²). For the analysis, two categories of BMI were considered: obese (≥ 30 kg/m²) and non-obese (< 30 kg/m²). Structured questionnaire A pilot-tested structured questionnaire served as the primary means of data collection. It was designed to collect information on whether a patient had DM and on the possible risk factors associated with DM in PLWH. The WHO STEPS framework was utilized as a standardized and versatile instrument for identifying non-communicable disease risk factors, covering the demographic, behavioral, and clinical features of participants. The sociodemographic variables included age, gender, marital status, level of education, and employment. The captured behavioral factors included smoking, alcohol consumption, dietary intake, and physical activity. Clinical features included duration of HAART, blood pressure, blood glucose, and height and weight measurements. Data analysis Data analysis was performed using Stata version 17. Descriptive statistics in the form of frequencies and percentages were used to summarize DM/HIV comorbidity and the independent variables. The characteristics of HIV-positive patients with DM were described and compared with those of HIV patients without DM comorbidity. Chi-square tests were used to evaluate associations between risk factors and DM/HIV comorbidity status. Bivariate and multiple logistic regression models were constructed to ascertain the risk factors associated with DM/HIV comorbidity.
A bivariate logistic regression model was used to ascertain whether a given predictor variable affected DM/HIV comorbidity, while multiple regression models controlled for other risk factors. Decisions on coefficients were made using p-values at the 5% significance level (P < 0.05). Independent variables were categorized into three groups (socio-economic, behavioural, and biological factors) based on the literature review [ 17 , 23 – 25 , 30 – 34 ]. Socioeconomic factors included age (coded as 1 for ≥ 35 years and 0 for < 35 years), marital status (1 = married, 2 = single, 3 = widowed/divorced), education (1 = primary or less, 2 = secondary education, 3 = tertiary education), employment status (1 = unemployed, 2 = self-employed, 3 = formally employed), and gender (1 = male, 0 = female). As for the risk factors, BMI was classified as 1 = obese (≥ 30 kg/m²) and 0 = not obese (< 30 kg/m²). Smoking was classified as 'yes' for smokers (coded as 1) and 'no' for nonsmokers (coded as 0). Alcohol consumption was divided into two categories: 'yes' for patients who consumed alcohol (coded as 1) and 'no' for those who did not (coded as 0). Fruit and vegetable consumption was entered into the analysis as a dummy variable, taking a value of 1 for patients with fruit and vegetable servings at least once a day and 0 for patients who rarely ate fruit and vegetable servings. Fried/fast food was included as a binary variable, taking a value of 1 if the patient ate fried/fast food at least once per day and 0 if the patient rarely consumed fried/fast food. Physical activity had two categories: 'yes' (coded as 1 for patients involved in sports and exercise) and 'no' (coded as 0 for patients not involved in sports and exercise). Following previous research, such as Duguma et al. [ 17 ], hypertension was included as a predictor, with two categories: 'yes' (coded as 1 for patients with hypertension) and 'no' (coded as 0 for patients without hypertension).
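The variable codings just described amount to a small preprocessing step. The sketch below illustrates them in Python with pandas and statsmodels rather than Stata, purely for illustration; the column names (age_years, smokes, dm, and so on) are hypothetical placeholders, not fields of the study dataset.

# Illustrative sketch of the binary codings described above; three-level
# factors (marital status, education, employment) would instead enter the
# model as categorical dummies.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def code_predictors(df: pd.DataFrame) -> pd.DataFrame:
    out = pd.DataFrame(index=df.index)
    out["age_ge35"] = (df["age_years"] >= 35).astype(int)   # 1 = aged 35 or over
    out["male"] = (df["gender"] == "male").astype(int)      # 1 = male, 0 = female
    out["obese"] = (df["bmi"] >= 30).astype(int)            # 1 = BMI >= 30 kg/m^2
    out["smoker"] = (df["smokes"] == "yes").astype(int)
    out["alcohol"] = (df["drinks_alcohol"] == "yes").astype(int)
    out["fruit_veg_daily"] = (df["fruit_veg_daily"] == "yes").astype(int)
    out["fried_daily"] = (df["fried_food_daily"] == "yes").astype(int)
    out["active"] = (df["exercises"] == "yes").astype(int)
    out["hypertensive"] = (df["hypertension"] == "yes").astype(int)
    return out

# Multiple logistic regression reported as (adjusted) odds ratios:
# X = sm.add_constant(code_predictors(patients))
# fit = sm.Logit(patients["dm"], X).fit()
# print(np.exp(fit.params))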
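As a numerical check, the sample-size arithmetic described earlier can be reproduced in a few lines. This is a minimal sketch rather than the authors' code; the expected proportion p = 0.5 is an assumption, namely the variance-maximizing choice, which is consistent with the reported minimum of 423.

# Minimal sketch of the single population proportion sample-size formula
# n = z^2 * p * (1 - p) / d^2, inflated for anticipated refusals.
import math

def sample_size(z=1.96, p=0.5, d=0.05, nonresponse=0.10):
    n = (z ** 2) * p * (1 - p) / (d ** 2)    # 384.16 for the defaults
    return math.ceil(n * (1 + nonresponse))  # allow 10% refusal

print(sample_size())  # -> 423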
Results The prevalence of T2DM among HIV-positive patients A total of 450 participants were included in the study, of whom 76.2% resided in urban communities and 23.8% in rural communities. The prevalence of diabetes mellitus (DM) among PLWH receiving primary care services at local clinics in Harare was 14.9% (n = 67). Urban healthcare clinics had a prevalence of 14%, whereas rural healthcare clinics had a prevalence of 0.9%; the prevalence of diabetes among PLWH was thus significantly higher in participants from urban clinics than in those from rural clinics. There was a gender disparity, with 57.6% of the participants identifying as female. Notably, most participants were over 35 years old (80.2%), while the rest were aged between 18 and 35. The proportion of married respondents was the highest (73.8%). The level of employment was high, with approximately half of the participants being self-employed (49.8%). In terms of BMI, 17.8% of the participants were categorized as obese. Table 1 shows that most participants did not consume alcohol (53.1%), 78.9% did not smoke tobacco cigarettes, and 73.6% were involved in sports, fitness, or recreational activities at least once per day. Furthermore, 49.8% had attended secondary school, 37.8% had attended higher education, 8.7% had attended primary education, and 3.8% had no education. The longer a patient was on HAART, the higher the chances of developing diabetes, and the likelihood of developing DM was higher in patients with comorbid hypertension. In this study, a significantly greater proportion of women (74.6%) than men had DM/HIV comorbidity. A greater proportion of patients with higher education (53.7%) had DM/HIV comorbidity than those with no education or primary or secondary level education. A significantly greater proportion of patients aged > 35 years (73.1%) had DM/HIV comorbidity than patients aged < 35 years. Unmarried patients (55.2%) accounted for a larger proportion of DM/HIV comorbidity than married patients. Among those with comorbidity, there was a significantly greater proportion of self-employed individuals (77.6%) than unemployed or formally employed individuals. A smaller percentage of patients with DM/HIV comorbidity were obese (7.5%) than their counterparts, although the risk of DM generally increases with BMI. There was a significant association between DM comorbidity and gender, education, alcohol consumption, exercise, BMI, occupation, and smoking (P < 0.05). However, among all the risk factors examined, no association was found between DM/HIV comorbidity and patients' servings of fruits and vegetables (P > 0.05). The findings in Table 1 indicate that participants from urban communities were significantly more likely to have DM/HIV comorbidity than those from rural communities. Further analysis was conducted on the differences between urban- and rural-dwelling participants with DM/HIV comorbidity (Table 2). Logistic regression results of risk factors of T2DM/HIV Table 2 displays the outcomes of the binary logistic regression model for risk factors associated with DM among HIV-positive individuals. Demographic factors In comparison with their female counterparts, male participants were less likely to develop diabetes mellitus (adjusted odds ratio (AOR): 0.5, p < 0.05, 95% confidence interval [CI]: 0.2–1.0). HIV-positive patients older than 35 had a lower likelihood of developing DM than their younger counterparts (AOR: 0.2, p < 0.05, 95% CI: 0.1–0.3).
Socio-economic factors HIV-positive individuals who were self-employed were more likely to develop diabetes mellitus than their unemployed counterparts (AOR: 6.4, p < 0.05, 95% CI: 2.9–13.9). Compared with HIV-positive patients with secondary education or less, patients with higher education were more likely to develop DM (AOR: 8.6, p < 0.05, 95% CI: 3.2–23.0). Behavioural risk factors Table 2 presents findings indicating that HIV-positive individuals who consume tobacco products are at a decreased risk of developing diabetes mellitus (DM) compared with non-tobacco users (AOR: 0.02, p < 0.05, 95% CI: 0.006–0.1). Conversely, HIV-positive individuals who abstain from alcohol consumption are at a heightened risk of developing DM compared with those who do consume alcohol (AOR: 2.0, p < 0.1, 95% CI: 0.9–4.8). Additionally, physical activity in the form of sports and fitness activities was linked with a reduced risk of DM/HIV comorbidity (AOR: 0.3, p < 0.05, 95% CI: 0.1–0.9). In terms of dietary habits, HIV-positive individuals who consumed fruits and vegetables at least five times a day exhibited a lower risk of developing DM than those who rarely consumed fruits and vegetables (AOR: 0.5, p < 0.05, 95% CI: 0.2–1.1). Duration on HAART Individuals who had been on HAART for more than 2 years were more likely to develop DM/HIV comorbidity (AOR: 1.1, p < 0.05, 95% CI: 1.0–1.3). Hypertension HIV-positive individuals who were hypertensive were more likely to have diabetes mellitus than their non-hypertensive counterparts (AOR: 8.4, p < 0.05, 95% CI: 3.5–20.6).
Discussion Historically, PHC facilities in Zimbabwe have focused on the management of single disease entities, with limited inclusion of screening for NCDs among PLWH. The adoption of integrated chronic care models has improved the detection and management of NCDs in PLWH. However, an improved understanding of key risk factors could strengthen disease prevention strategies and targeted community surveillance. Our study demonstrated a relatively high prevalence of DM/HIV comorbidity in Harare, Zimbabwe, compared with other regions in Africa. This estimated prevalence rate is notably greater than those reported in comparable studies carried out in the SSA region, which typically showed rates between 2 and 14% [ 35 ]. In Ethiopia, studies have reported DM prevalences among individuals living with HIV of 8% [ 25 ] and 11.4% [ 17 ], which could reflect the fact that obesity is not a major health problem in Ethiopia [ 36 ]. Our findings also exceeded the prevalence rates of other studies conducted in Zimbabwe: 2.83% reported by Magodoro et al. [ 37 ], 6.9% established by Cheza et al. [ 32 ], and 8.4% by Gonah et al. [ 33 ]. This difference could be because our study was conducted in urban and rural council clinics providing primary healthcare services, as opposed to the secondary (district hospital), tertiary (provincial), and quaternary (central) health centers used as settings in those studies [ 32 , 33 , 37 ]. Urban primary healthcare facilities are situated in high-density suburbs, and rural clinics are extensively accessible; all of these clinics typically serve as the first point of contact for individuals requiring health services. The prevalence of diabetes in PLWH in South African rural settings in 2023 was 8.1% [ 38 , 39 ], and a higher prevalence of diabetes (12.1%) was reported in urban areas [ 38 , 39 ]. A substantial prevalence of diabetes was noted in urban clinics situated in high-density suburbs (14%) compared with a 0.9% prevalence in rural clinics. This could be because two of the urban clinics in the study were donor-funded to treat and supply medication to patients with HIV comorbidities, leading to a large number of patients with comorbidities knowing their diabetes diagnoses and being on record. Another possible factor in the low prevalence of DM in rural communities could be the lower consumption of fast foods. Numerous studies have demonstrated a strong relationship between food and microbiota, showing how the components of various diets may directly affect the gut microbiota and may contribute to the development of diseases such as diabetes [ 40 – 42 ]. Rural Africa is likely to have a disproportionately high percentage of undiagnosed diabetes, which may be primarily explained by the inverse care law: a discrepancy between patients' medical needs and the availability of healthcare. Delays in diagnosis are caused by a variety of issues, including a lack of continuity of care, delayed or improper access to care, and the existence of psychiatric and other comorbidities [ 43 ], all of which lead to underestimation of the true prevalence of diabetes mellitus in PLWH [ 44 ]. Underestimation of the prevalence of diabetes, together with a lack of knowledge about the demographic and clinical characteristics of the rural population, undermines the prioritization of diabetes and its complications in strategic planning. Concerning the variables associated with the development of DM and HIV comorbidity, our investigation highlighted that sex and occupation exhibit a statistically significant association.
Women are more likely to seek health care services than men; according to the study, just 40% of males seek medical attention, which may explain why male patients had lower odds of a diabetes diagnosis than their female counterparts (AOR: 0.4; 95% CI: 0.2, 0.9). This observation accentuates the gender differences that exist in the pathogenesis of DM [ 45 – 47 ]. Sex hormones, which have a greater impact on energy metabolism, body composition, vascular function, and inflammatory responses in women than in men, also help explain the differences in the risk of developing diabetes [ 48 , 49 ]. A study conducted in Malawi in 2018 indicated that women were more likely to be overweight or obese than men, which is related to a higher risk of developing diabetes [ 50 ]. This finding corresponds to other research suggesting that women living with HIV face a higher risk of developing DM [ 32 , 33 , 37 ]. Regarding occupation, our analysis revealed that self-employed patients exhibited higher odds of developing DM than their unemployed counterparts. This could be because those who work have a higher probability of moving around, and thereby exercising, and can afford to buy and consume a balanced diet. Patients who drink alcohol exhibited more than twice the propensity to develop DM relative to those who do not consume alcohol (AOR: 2, p < 0.05; 95% CI: 0.9, 4.8). Alcohol intake leads to changes in insulin secretion by pancreatic β-cells and can also mediate insulin resistance, resulting in impaired glucose metabolism that can lead to diabetes. The investigation revealed that patients with HIV who smoke have a markedly lower likelihood of developing DM than their non-smoking counterparts (AOR: 0.02, p < 0.05, 95% CI: 0.006, 0.1). Engaging in physical activity was associated with a lower likelihood of developing DM compared with inactive counterparts, and this difference was statistically significant. This finding can be plausibly explained by the fact that physical activity can enhance patients' sensitivity to insulin, which in turn improves their glucose tolerance [ 34 ]. Individuals who participate in physical activities such as sports or fitness activities are less susceptible to DM [ 34 ]. This result is consistent with prior research on the prevention and management of diabetes, which has established that physical activity can reduce the incidence of DM in both HIV-positive and HIV-negative patients [ 51 – 55 ]. With regard to dietary habits, HIV-positive patients who consume fruits and vegetables more often have a lower risk of developing diabetes. This can be attributed to the fact that fruits and vegetables are rich in dietary fiber, which helps regulate blood sugar levels by slowing the absorption of glucose into the bloodstream. Furthermore, fruits and vegetables are highly nutritious, containing vitamins, minerals, and antioxidants that are key to improving insulin sensitivity and reducing the risk of developing DM [ 34 , 56 , 57 ]. Yiga et al. noted that dietary habits are affected by a variety of cultural factors, and that one's view of one's body has an impact on how much one eats. Weight loss is a source of stigma and an indication of HIV/AIDS, whereas weight gain is associated with beauty, dignity, health, wealth, and excellent treatment from husbands. Other identified cultural views included the high social prestige given to unhealthy fast food and eating out, and the low social status given to fruits, vegetables, legumes, and minimally processed cereals [ 58 ].
Strengths and limitations This study has some limitations that are worth noting. The data collection process involved gathering information from 450 patients who attended eight clinics; however, the generalizability of the study findings is limited to PLWH. The focus was on HIV-positive patients; non-HIV patients were not included in the study as a reference group. It is also important to highlight that information on DM status was based on patients' disclosure and medical records, and the time of DM diagnosis was not recorded, so it could not be determined whether diabetes developed after the HIV diagnosis. Consequently, this approach limits the reliability of the data, as some patients could have both DM and HIV but not yet be diagnosed. Despite the significant contribution of this study, our results cannot be generalized because the study was conducted at only eight facilities. The strengths of the study include the inclusion of traditional risk factors for NCD development, such as tobacco and alcohol use, among others.
Conclusion Diabetes mellitus is increasingly prevalent among PLWH and contributes significantly to the burden of disease experienced by patients with HIV. Our research indicates a high prevalence of DM/HIV comorbidities, especially in urban communities. Adopting an active screening policy for non-communicable diseases (NCDs), particularly type 2 diabetes and hypertension, is highly recommended as a component of standard HIV care. Owing to the severe lack of resources in the majority of SSA care settings, targeted screening based on factors such as age and sex may be suggested. The rapid scaling of treatment delivery in the face of severe human capacity constraints can be made possible by task shifting of HIV care in PHC clinics from doctors to nurses and other less-skilled cadres. Despite these achievements, frontline professional training is recommended to keep up with the changing epidemic and its new issues. The adoption of an integrated health model that comprehensively addresses all chronic diseases in patients at the primary care level may help improve the quality of health services provided to PLWH with comorbidities. Early detection of NCD by opportunistic screening of PLWH will facilitate early management and potentially reduce or delay the onset of cardiovascular disease.
Background Highly active antiretroviral therapy (HAART) has improved the life expectancy of people living with HIV (PLWH) and has increased the risk of chronic non-communicable diseases. Comorbid HIV and diabetes mellitus (DM) significantly increase cardiovascular disease and mortality risk. This study aimed to determine the prevalence of type 2 diabetes mellitus among HIV-positive patients receiving HAART in Zimbabwe and its associated risk factors. Methods This cross-sectional study was conducted at eight primary healthcare facilities in Harare, Zimbabwe, between January 2022 and March 2023. Non-probability convenience sampling was used to recruit adult HIV-positive patients undergoing HAART attending the facilities. Data were captured on clinical history and socio-demographic and behavioral characteristics, and analyzed using descriptive statistics to determine DM prevalence rates. Additionally, bivariate and multivariate logistic regression models were employed to examine factors associated with HIV and DM comorbidity. Results A total of 450 participants were included in this study, of whom 57.6% (n = 259) were female. The majority were married (73.8%) and older than 35 years (80.2%). Most participants had completed high school (87.6%), and 68.9% were employed either formally or self-employed. The prevalence of diabetes mellitus (DM) was 14.9%. HIV/DM comorbidity was more prevalent in patients who were female, self-employed, and smoked (p < 0.05). Multivariate logistic regression analysis revealed that gender, age, education, marital status, employment status, smoking, physical activity, duration of HAART, and diet were associated with DM-HIV comorbidity. Obesity (body mass index > 30 kg/m²), smoking, and alcohol consumption were associated with an increased risk of DM, whereas regular physical activity was associated with a reduced risk of DM. Conclusion A substantial burden of DM was found in PLWH. An intersectoral integration approach is advocated, and active screening for DM is recommended. Gender-specific interventions, customized to the diseases and health behaviors that differ between men and women, are necessary. Supplementary Information The online version contains supplementary material available at 10.1186/s12875-024-02261-3. Keywords
Supplementary Information
Abbreviations HAART: Highly active antiretroviral treatment; HIV: Human immunodeficiency virus; CD: Chronic disease; CI: Confidence intervals; DM: Diabetes mellitus; HIC: High-income countries; LMIC: Low- and middle-income countries; NCD: Non-communicable diseases; NGO: Non-governmental organizations; PCN: Primary Care Nurses; PHC: Primary healthcare; PLWH: People living with HIV; SPSS: Statistical Package for Social Science; SSA: Sub-Saharan Africa; T2DM: Type 2 Diabetes Mellitus; VHW: Village Health Workers; WHO: World Health Organization. Acknowledgements First, the authors want to thank the almighty God for guiding us throughout this journey. The authors acknowledge the University of KwaZulu-Natal for the opportunity to conduct this research, the Medical Research Council of Zimbabwe, and the Harare City Council Department of Health for providing support during data collection. Informed consent All participants were provided with an information sheet that explained the step-by-step procedures of the research, and verbal reassurance and clarification were provided before obtaining written consent. Legally Authorized Representatives of illiterate participants provided informed consent for the study. Assurance of confidentiality was provided through further explanations given to the participants: no patient information is to be shared without patient consent, and patients are to be informed about the kind of information held about them, how and why it might be shared, and with whom it might be shared. Authors' contributions RC contributed to the conceptualization of the paper, data entry, data analysis, and writing of the manuscript, while KN and TM contributed to the writing of the manuscript. RC is the corresponding author. All authors read and approved the final manuscript. Funding This study was self-funded by the authors. Availability of data and materials Data and materials are available. Declarations Ethics approval and consent to participate Ethical approval was granted by the Medical Research Council of Zimbabwe (MRCZ/A/2821) and the University of KwaZulu-Natal Biomedical Research Ethics Committee (BREC/00003160/2021). Participants were assured of both anonymity and strict confidentiality throughout the study. Written informed consent was obtained from all participants before data collection; Legally Authorized Representatives of illiterate participants provided informed consent for the study. Moreover, the participants were advised that their participation was voluntary and that they were free to withdraw from the interview at any point in time if they experienced discomfort. All data were stored in a lockable steel cupboard and on a password-protected laptop accessible only by the study team. All methods were carried out in accordance with relevant guidelines and regulations. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Prim Care. 2024 Jan 15; 25:28
oa_package/09/30/PMC10789024.tar.gz
PMC10789025
38225626
Background Eating disorders (ED) encompass a group of severe psychiatric conditions characterized by dysfunctional eating habits, significantly diminishing an individual's quality of life and often necessitating extended treatments [ 1 – 3 ]. From a transdiagnostic perspective, the psychopathological core of ED revolves around concerns related to body image and dysfunctional emotional regulation, involving behaviors such as binge eating or restraint [ 4 – 6 ]. In fact, weight and eating habits are central to the psychopathology of ED [ 7 ], with nutritional rehabilitation serving as a primary treatment goal across the entire spectrum of ED [ 8 , 9 ]. Despite the crucial role of nutrition in ED treatment, empirical knowledge about meal support remains limited [ 10 , 11 ], especially within inpatient settings [ 12 ], and the focus has often been on anorexia nervosa [ 13 ]. This could be attributed to differences in the rehabilitation of restrictive and binge-purge eating behaviors, which may have guided the research toward more extreme and unstable conditions [ 14 , 15 ]. Patients undergoing ED treatment may have distinct objectives during mealtimes, influenced by the nature of their diagnosis and the specific challenges they face [ 16 ]. For example, individuals with anorexia nervosa may have goals centered around increasing food intake and addressing malnutrition, while those with bulimia nervosa may focus on establishing regular eating patterns and reducing binge-purge cycles [ 17 – 19 ]. Staff-supervised mealtimes in ED treatment serve a crucial purpose, functioning as pivotal moments for patients with diverse ED diagnoses and various types of eating pathology. These structured mealtimes aim to provide a supportive environment where patients can work towards their individualized goals, taking into consideration the nuances of their specific ED diagnosis. For example, dietitians manage the composition and administration of meals, with the support of other professionals such as nurses who can help improve the emotional climate, reduce rituals, minimize compensatory behaviors, and follow the goals established outside the dining area [ 18 , 19 ]. Mealtime in an inpatient environment can be particularly challenging due to its associations with eating, changes in body weight, and the potential for negative affect, fear, and anxiety [ 20 , 21 ]. Previous studies have suggested strategies to create comfortable, safe, and structured environments, encourage regular conversations, and provide opportunities for patients to address their difficulties [ 10 , 22 ]. Other strategies may include avoiding negative feelings or providing comfort to patients [ 13 ]. Through experimental methods, the evaluation of diverse strategies, including vodcasts and music, has revealed that interventions aimed at helping individuals cope with anxiety and preventing mood decline can positively influence increased food consumption [ 23 , 24 ]. However, due to the limited understanding of the best approaches to support patients, strategies can vary [ 12 , 25 ], often leaving it to individual initiative and resulting in diverse approaches and outcomes. Addressing these challenges requires an exploration of holistic and transdiagnostic approaches to eating disorders. This is particularly important due to the possible presence of individuals with different diagnoses in the same space during mealtime. 
Considering shared psychopathological elements, the intervention should focus on reducing restraint, addressing food rituals, challenging dysfunctional beliefs about food, and minimizing the emotional activation triggered by the presentation and consumption of food [ 10 , 16 ]. Music, with its potential as a transdiagnostic intervention, has been gaining recognition in the treatment of various psychiatric conditions [ 26 ]. Music-based interventions have been recognized as effective in reducing stress responses [ 27 ] and decreasing state anxiety [ 28 ], while also modulating the autonomic nervous system [ 29 ]. Recent studies have suggested that music-based interventions can offer benefits that cut across diagnostic boundaries, helping individuals struggling with different subtypes of ED cope with emotional dysregulation, improve their overall psychological well-being, and reduce anxiety [ 30 – 32 ]. A qualitative study of music listening after the meal indicates that music assisted individuals in diverting their attention from the recently consumed meal, taking a break from anxiety, and connecting with their peers. Indeed, music listening seems to increase mindfulness following a stressor [ 33 ]. However, several aspects should be considered in the implementation of music in a specific intervention, from personal preferences to specific characteristics such as rhythm or sound. For example, it seems that slow-tempo music might produce greater relaxation and less tension than faster-tempo music [ 34 – 36 ], but evidence is still limited. Therefore, this article aims to evaluate the influence of music, or its absence, during mealtime within a specialized inpatient service for ED. Additionally, it explores the potential of music as a facilitator for approaching food and for reducing negative affective effects, particularly within a transdiagnostic context. Notably, given the current lack of clear evidence regarding the specific type of music that may be beneficial, the study aims to contribute insights in this regard. Our primary hypothesis is twofold: first, we hypothesize that music during mealtime can help individuals with ED cope by producing positive effects on mood and decreasing anxiety, leading to better emotional responses than those experienced in silence; second, we hypothesize that music during mealtime will yield additional benefits, such as a stronger desire to eat, ultimately aiding participants during meals. The secondary objective is to evaluate the differences between focused/relaxation music and general music in terms of tolerability and other effects on meal rituals or food intake, to identify variations between specific types of music.
Method Participants This study employed a within-subjects experimental design. Fifty-one women with an ED were recruited for this study while receiving inpatient care at Casa di Cura Villa Margherita in Arcugnano, Vicenza, Italy, a specialized ward for psycho-nutritional rehabilitation [ 37 ]. Enrollment was based on presence in the inpatient facility and none of the patients refused to participate. The inclusion criteria mirrored the hospitalization requirements, including (a) an age range between 13 and 60 years and (b) the absence of severe psychiatric conditions such as schizophrenia or bipolar disorder, medical comorbidities, or neurological trauma or disorders. Trained psychiatrists diagnosed ED in all participants, following the criteria outlined in the DSM-5 [ 38 ]. All participants voluntarily consented to participate in the evaluation, and patients under 18 years of age required parental consent. No participant was remunerated for their involvement. This study adhered to the principles of the Declaration of Helsinki and its subsequent amendments and received approval from the Vicenza Ethics Committee. Questionnaires Eating psychopathology within the sample was assessed using the Eating Disorder Examination Questionnaire (EDE-Q), a 28-item self-report questionnaire widely used to evaluate specific concerns [ 39 ]. It comprises four subscales, namely eating restraint, eating concern, shape concern, and weight concern, in addition to producing a global score. The Positive and Negative Affect Schedule (PANAS) is a self-report questionnaire that features two 10-item scales designed to measure positive and negative affect [ 40 ]. Each item was rated on a 5-point Likert scale ranging from 1 (not at all) to 5 (very much). To assess various subjective sensations, specific 10 cm visual analog scales were utilized to measure hunger, satiety, desire to eat, perceived stress, and mealtime difficulty. Participants were asked to indicate the intensity of each sensation by marking on the scales. Following the meal, the volume of the music and its pleasantness were evaluated using an 8-point Likert scale, ranging from 0 (absent) to 7 (very high/very pleasant). For the evaluation of eating rituals, a streamlined checklist adapted from a previous study was used, which contained three elements: cutting food into small pieces, patting the food dry, and focusing on one food item at a time [ 41 ]. The presence or absence of rituals was determined for each meal, with scores ranging from 0 (no rituals) to 1 (rituals present in each meal). Dietitians were trained to standardize the assessment, and two evaluators reached a consensus on the presence or absence of rituals. Finally, the ability to adhere to the correct mealtime, set at 20 min for each course, was assessed. Meal conditions and procedure Over a span of four weeks, from April 2022 to July 2022, this study involved the randomization of three different background music conditions during lunch and dinner, occurring from Monday through Friday. For the first two weeks and the last two, the meal compositions remained consistent and identical for all participants. General meal planning details are provided in Table 1 . Dietitians meticulously managed individual differences in meal compositions, recording these for subsequent comparisons with participants. The three background music conditions included no background music, continuous classical music featuring only a piano, termed 'focus music,' and a preset pop music playlist. 
The randomization, which covered the number of meals and the conditions, was performed using the Excel RAND function (a minimal sketch of such a randomization is given after the statistical analysis description below). All other variables during mealtime remained constant, encompassing ward conditions, timing, meal planning, communal groups, and the presence of both a nurse and a dietitian. Participants were seated in groups of three or four at tables and were encouraged to engage in conversation under all conditions. General meal planning encompassed six distinct mealtimes, including breakfast, lunch, dinner, and three snack sessions, each customized to address specific nutritional rehabilitation needs related to the participant's diagnosis. We focused our evaluation exclusively on lunch and dinner due to their lower interpersonal variability compared with other mealtimes. Additionally, both lunch and dinner were managed uniformly by dietitians in terms of food composition and quantity, ensuring consistency for all patients. The remaining daily meals varied according to the individual's diagnosis and unique nutritional requirements. Throughout mealtime, dietitians meticulously recorded each participant's meal composition and the amount of uneaten food at the meal's conclusion. The remaining food was classified as nothing, a quarter, half, three-quarters, or all. All dietitians were trained equally for this evaluation before the study, had worked together in the ED facility for years, and were supervised by the same dietitian. The nutritional and energy composition was evaluated according to national guidelines [ 42 ] at a separate time, using notes about the quantity of food left by each participant. This operation was necessary for comparing different meals across the weeks. Noise levels were measured using a dedicated cell phone application called 'DecibelX'; placed in the center of the room, the device captured the mean noise level during the meal, with data reported in decibels. The goal was to evaluate the consistency of the music volume across different scenarios. Different dietitians managed background music and meal planning during the specific study weeks; this separation was planned to eliminate any potential associations between meal compositions and music conditions. Before each meal, information on hunger, satiety, the desire to eat, and PANAS scores was collected using a pencil-and-paper approach. After mealtime, participants reported the psychological difficulty of the meal and the music volume and pleasantness, and again provided information on hunger, satiety, the desire to eat, and PANAS. The post-meal questionnaires were filled out 5 min after the end of the meal. Additionally, during meals, a dietitian collected information about eating rituals, the ability to adhere to mealtimes, and details about what the participants consumed or left on their plates. Over the course of the study, all participants took part in ten meals and were exposed to each condition at least three times. Statistical analysis We conducted a power analysis prior to recruitment based on similar studies with community volunteers, which indicated that a minimum of 25 subjects would be required to detect differences in energy intake with α = 0.05 and a power of 80% [ 43 ]. Given the clinical condition of our sample, we opted to double the sample size. All analyses necessitated a nonparametric approach due to the distribution of the data. The Kruskal–Wallis test was used to assess differences among diagnoses for demographic and clinical variables, as well as variations between the different music conditions.
Pairwise comparisons were adjusted using the Bonferroni correction. For each participant, mean responses were assessed both before and after at least three meals under each music condition. A Box-Cox transformation was applied to the variables assessed before and after meals, including hunger, satiety, desire to eat, stress, and PANAS scores. This transformation enabled us to employ repeated-measures ANOVA analyses, with pairwise comparisons adjusted using the Bonferroni correction. Additionally, the interaction between time (pre-post meal) and music condition was assessed using repeated-measures ANOVA. A significance level of p < 0.05 was established for all analyses. We conducted all data analyses using IBM SPSS Statistics 25.0 software (SPSS, Chicago, IL, USA).
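For illustration, the condition randomization referenced in the procedure above can be reproduced in a few lines. This is a minimal sketch under stated assumptions, not the authors' Excel RAND procedure: it assigns the three background-music conditions to the ten evaluated meals so that each condition occurs at least three times, as the design requires.

# Minimal illustration of a balanced random assignment of the three
# background-music conditions across ten meals.
import random

CONDITIONS = ["no_music", "focus_music", "pop_playlist"]

def randomize_meals(n_meals=10, min_per_condition=3, seed=None):
    rng = random.Random(seed)
    # Guarantee the minimum number of meals per condition, then fill the
    # remaining slots at random and shuffle the order.
    schedule = CONDITIONS * min_per_condition
    schedule += [rng.choice(CONDITIONS) for _ in range(n_meals - len(schedule))]
    rng.shuffle(schedule)
    return schedule

print(randomize_meals(seed=1))  # one schedule of 10 meals, each condition >= 3 times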
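A hedged sketch of the analysis pipeline just described, written in Python rather than SPSS purely for illustration: the file name meal_ratings.csv and its column names are hypothetical placeholders, and the +1 shift before the Box-Cox step is an assumption to keep the input strictly positive.

# Sketch of the pre/post analysis: Box-Cox transformation, repeated-measures
# ANOVA with time and music as within-subject factors, and Kruskal-Wallis.
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("meal_ratings.csv")  # one row per participant x time x condition

# Box-Cox requires strictly positive values; shift if zeros are possible.
df["hunger_bc"], lam = stats.boxcox(df["hunger"] + 1.0)

# Repeated-measures ANOVA; replicate meals are averaged per cell.
fit = AnovaRM(df, depvar="hunger_bc", subject="participant",
              within=["time", "music"], aggregate_func="mean").fit()
print(fit)

# Kruskal-Wallis test across the three music conditions for a given measure.
groups = [g["hunger"].values for _, g in df.groupby("music")]
H, p = stats.kruskal(*groups)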
Results Demographic characteristics The included sample comprised 51 women: 31 had a diagnosis of anorexia nervosa, nine had a diagnosis of bulimia nervosa, five had a diagnosis of binge eating disorder, and six were diagnosed with other feeding and eating disorders. All the participants were white cisgender women. The sample had an average age of 25.22 ± 11.33 years and an average BMI of 20.34 ± 7.80 kg/m². For clinical details, see Table 2. Across the different ED diagnoses, no differences emerged in the mean energy intake planned for participants (H = 2.250, p = 0.522) or in the actual mean energy consumed (H = 0.569, p = 0.903) (see Table 2 for details). Composition of meals Table 3 compares the composition of the meals, the rituals, and the timing across conditions. Significant differences emerged for uneaten energy (ηp² = 0.047), rituals (ηp² = 0.064), and pleasantness (ηp² = 0.612), with the no-music condition showing the least favorable results on all three measures. Within-subjects analyses The means and standard deviations of the psychological evaluations under the three music conditions, differentiated between pre- and post-mealtime, are reported in Table 4. Repeated-measures ANOVA conducted on hunger revealed a significant reduction over time (F(1,150) = 97.447, p < 0.001, ηp² = 0.394), but a nonsignificant effect of the type of music (F(2,150) = 0.033, p = 0.968) and a nonsignificant time-by-music interaction (F(2,150) = 0.189, p = 0.828). The same was found for satiety, with a significant increase over time (F(1,150) = 255.630, p < 0.001, ηp² = 0.630), but a nonsignificant effect of the type of music (F(2,150) = 0.036, p = 0.965) and a nonsignificant time-by-music interaction (F(2,150) = 0.129, p = 0.879). For the desire to eat, the pattern was the same: a significant reduction over time (F(1,150) = 104.415, p < 0.001, ηp² = 0.410), a nonsignificant effect of music type (F(2,150) = 0.339, p = 0.713), and a nonsignificant time-by-music interaction (F(2,150) = 0.411, p = 0.664). The same held for stress level, with a significant effect of time (F(1,150) = 183.471, p < 0.001, ηp² = 0.550), a nonsignificant effect of the type of music (F(2,150) = 0.033, p = 0.967), and a nonsignificant time-by-music interaction (F(2,150) = 0.326, p = 0.722). For perceived negative emotions, no significant effect of time (F(1,150) = 0.239, p = 0.626) or music (F(2,150) = 0.384, p = 0.682) was found, nor was the time-by-music interaction significant (F(2,150) = 0.078, p = 0.925). Finally, for perceived positive emotions, we found a significant effect of time (F(1,150) = 27.250, p < 0.001, ηp² = 0.154), a significant effect of the music conditions (F(2,150) = 3.173, p = 0.045, ηp² = 0.041), and a significant time-by-music interaction (F(2,150) = 26.920, p < 0.001, ηp² = 0.264). In pairwise comparisons between music conditions, a significant difference was found between focus music and no music (p = 0.046) and between the playlist and no music (p = 0.022), but no difference was found between the playlist and focus music (p = 0.762). See Fig. 1 for a graphical representation.
Discussion This study adopts an ecological approach, pioneering the examination of background music's impact on individuals with ED during mealtime in an inpatient facility while considering various music choices. Our findings reveal a positive influence of background music during meals, particularly with respect to emotional states: the post-meal deterioration of positive mood was most pronounced in the absence of background music. Although we did not observe significant differences between the types of music, the general benefit of music during mealtime was evident. Recent literature has underscored the importance of incorporating music into treatment protocols, although consensus remains elusive due to methodological variations between studies [ 44 ]. Previous research, which included classical music during mealtime, demonstrated positive results, particularly among individuals with anorexia nervosa and bulimia nervosa [ 23 , 24 ]. Our findings align with this evidence, highlighting the positive impact of background music across the entire spectrum of eating disorders. Participants reported better emotional states and averted post-meal mood deterioration, implying the potential utility of integrating background music into ED treatment settings during mealtime, as previously identified [ 12 ]. However, even though music demonstrated a potential effect in preventing the deterioration of positive mood, we must also consider the absence of other specific effects in our study. Although the literature reports an active role of music in emotional regulation and stress reduction, its application during mealtime appears to be less effective [ 45 – 47 ]. This discrepancy could be attributed to the collection of data close to the end of mealtime, or it might be influenced by the specific context of a particularly challenging moment, in which patients could face difficulty with selective attention [ 45 , 48 ]. While our study is unable to clarify this point, future studies should evaluate these aspects. The ability of music to alleviate negative moods is well established, and music interventions are commonly employed to mitigate negative emotions in various settings, offering both psychological and physiological benefits [ 27 , 49 ]. Furthermore, the existing literature suggests that the style of music does not exert significant moderating effects on mood during mealtime [ 27 ], reinforcing the role of music in general as a potent distraction from food-related concerns. Our findings, which revealed less uneaten food when music was present, are consistent with observations in the general population [ 50 , 51 ]. Overall, our data show an improvement in mealtime outcomes with background music in an inpatient setting. When examining uneaten foods, an interesting observation emerged: a decrease in protein and lipid intake was noted in the silent scenario. In the nutritional rehabilitation of people with ED, an appropriate amount of fat and protein is crucial for several reasons: nutrient density, energy supply, muscle maintenance, hormonal balance, satiety, and brain function [ 52 , 53 ]. Indeed, this finding is intriguing because protein provides the amino acid precursors for the synthesis of serotonin and dopamine, neurotransmitters that play a pivotal role in fostering feelings of positivity, motivation, passion, tranquility, and presence [ 54 ].
Similarly, lipids are crucial for neural development, nerve cell differentiation, and migration, making them vitally important for the proper functioning of the nervous system and for activating reward-related areas in the brain [ 55 , 56 ]. Therefore, the decline in mood may also be exacerbated by the reduction in these specific dietary intakes. The presence of music might help mitigate these effects, assisting people in improving their dietary quality [ 51 ]. However, this aspect should be assessed in future studies to explore its potential effects on eating rehabilitation. Finally, a particularly intriguing result emerged: eating rituals documented during mealtime were reduced when background music was introduced. Psychological and neurobiological data indicate that eating rituals in EDs are associated with obsessive–compulsive traits and negative responses to stressors, which can hinder progress and treatment effectiveness [ 57 , 58 ]; this underscores the need for studies aimed at improving outcomes in this regard. Few studies have explored the impact of exposure to music on obsessive–compulsive symptoms, with limited data suggesting a positive role of music in this specific psychological disorder [ 59 ]. Our findings imply that external elements, such as music, can help patients shift their focus away from negative emotional states that could be linked to rumination and disruptive thoughts [ 59 , 60 ]. Additionally, our results support the potential of music in reducing the degradation of positive emotions, although more research is warranted to delve into this area.

Limitations, strengths, and future directions
One notable limitation of this study is the predominance of participants diagnosed with anorexia nervosa, with fewer individuals diagnosed with bulimia nervosa or binge eating disorder. To mitigate variation between individuals, we employed randomized repetition of exposure to music under all conditions. A notable strength of this study lies in its application of diverse conditions within real-world settings, demonstrating the potential practicality of the results in inpatient facilities. However, for broader generalizability, it may be advisable to replicate the study with outpatient populations. Additionally, we relied exclusively on self-report questionnaires to assess affective changes, and the heterogeneity of the sample was limited. Future research could improve on these aspects by incorporating a variety of methodologies and different populations (i.e., men and gender-diverse individuals) to ensure a more comprehensive evaluation. Finally, future studies could explore mediation effects, particularly whether the observed effects of music on mealtime benefits are mediated by increases in mood and decreases in anxiety. Examining this mediation pathway, from music to mood and subsequently to mealtime benefits, could offer valuable insights and contribute to a deeper understanding of the underlying mechanisms involved.
Conclusion In summary, the findings affirm the role of music as a beneficial environmental distractor during mealtime for people with ED. The introduction of background music may enhance the overall inpatient treatment experience, creating a more supportive and accommodating atmosphere, particularly during stressful moments like mealtime. This approach has the potential to be considered for implementation in all ED inpatient facilities, with the aim of promoting positive affective states that could, in turn, produce positive effects on both psychological and physical symptoms during meals, ultimately contributing to improved overall outcomes.
Background
In rehabilitating eating disorders (ED), mealtimes are critical but often induce stress, both in restrictive and in binge-purge disorders. Although preliminary data indicate a positive effect of music during mealtime, few studies employ an experimental approach. This study examines the influence of background music during mealtime in an inpatient ward setting, offering a real-world perspective.

Methods
Fifty-one women diagnosed with ED participated in this within-subjects study. Over two weeks, during lunch and dinner, they were exposed to three randomized music conditions: no music, focus piano music, and pop music. Self-report questionnaires captured affective states, noise levels, and hunger, while trained dietitians recorded food consumption and eating rituals.

Results
The absence of music led to an increase in uneaten food ( p = 0.001) and in the presence of eating rituals ( p = 0.012) during mealtimes. Notably, only silence during mealtime reduced positive emotional states, whereas background music maintained positive emotions ( p < 0.001). No specific differences emerged between the two types of music (focus piano and pop).

Conclusions
These findings affirm the positive impact of background music during mealtime in real-world settings, enhancing the potential of inpatient eating rehabilitation programs for individuals with ED. More studies are needed to validate and extend these results, particularly in outpatient settings.

Mealtimes can be stressful for people with eating disorders (ED). This study looked at how background music during meals could help. We had 51 women with ED in an inpatient ward. For ten days, they ate lunch and dinner with no music, with calming piano music, or with pop music. We asked them about their feelings, noise, and hunger. Dietitians noted what they ate and any rituals. We found that not having music led to more uneaten food and eating rituals. Surprisingly, complete silence reduced positive emotions. All types of music maintained positive feelings, with no difference between them. Music helped to make mealtimes better for these patients. But more research is needed, especially for patients outside the hospital.

Keywords
Author contributions
PM conceived and designed the study, analyzed and interpreted the results, and contributed to the manuscript's first and subsequent drafts. PT interpreted the results and contributed to the manuscript's first and subsequent drafts. EB and SM conceived the study, collected the data, and contributed to the first draft. AMA, LN, and SS collected the data and contributed to the first draft. All authors critically reviewed and approved the submitted manuscript.

Funding
Open access funding provided by Università degli Studi di Padova. There was no funding associated with this work.

Availability of data and materials
The research team will hold the anonymized data. Data sharing will be considered for researchers who provide a methodologically sound proposal.

Declarations

Ethical approval and consent to participate
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. The study was approved by the Vicenza Ethics Committee.

Informed consent
Informed consent was obtained from all individual participants included in the study.

Competing interests
The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
J Eat Disord. 2024 Jan 15; 12:7
oa_package/2d/64/PMC10789025.tar.gz
PMC10789026
38225608
Background
In the past few decades, China has turned from a low-income country into a middle-income country with nearly one-fifth of the world's population. Increased income not only promotes total meat consumption but also increases the demand for high-quality meat with high nutritional content and good sensory properties [ 1 , 2 ]. Mutton is considered a highly nutritious and valuable food because it is rich in high-biological-value proteins and vitamins and low in cholesterol [ 3 ]. Therefore, during the fattening period of goats, not only feed efficiency and growth rate need to be pursued; the dietary impact on carcass and meat quality should also be considered. Probiotics are regarded as potential alternatives to antibiotics to regulate gastrointestinal microecological balance and improve growth performance and meat quality [ 4 , 5 ]. Clostridium butyricum (CB), a Gram-positive bacterium, has been reported to be a potent feed additive that improves growth performance, feed conversion efficiency and antioxidant capability in monogastric animals [ 6 ]. Previous studies have focused on its role in adjusting intestinal microflora structure because CB can survive at low stomach pH and high bile concentrations [ 7 ]. To date, CB has been widely used as an alternative to antibiotics to improve the growth performance and health of animals, especially poultry [ 8 ] and pigs [ 9 ]. However, research on the effects of CB on growth performance in ruminants is still limited, and the results are inconsistent [ 10 , 11 ]. In addition, dietary CB can also improve the meat quality of broilers and pigs, which may be related to regulating nutrient digestibility, improving muscle amino acid (AA) and fatty acid (FA) profiles, and enhancing antioxidant status [ 12 , 13 ]. Moreover, butyric acid produced by CB metabolism can regulate muscle lipid metabolism [ 14 ]. Furthermore, the inconclusive effects of CB in the literature may be due to differences in the source of the CB strains used, the supplemental dose and the type of diet [ 15 , 16 ]. Zhang et al. [ 17 ] observed an interaction between dietary lipids and CB on lipid-related gene expression in breast muscle of broiler chickens, suggesting that dietary fat may interact with CB in shaping meat quality. Information is scarce on the effects of dietary CB supplementation on the meat quality and FA composition of ruminants fed diets varying in fat content. The close association of intramuscular fat (IMF) with meat quality, tenderness and flavor, as well as with the water-holding capacity of meat, is well documented [ 18 ]. Studies have reported that dietary fat supplementation in ruminants improved animal productivity by increasing dietary energy density, providing essential FA and increasing IMF concentration [ 19 , 20 ]. The FA composition of meat in ruminants is largely influenced by dietary composition; thus, efforts have been made to alter the FA composition and content of animal tissues through inclusion of dietary fat. Therefore, we hypothesized that dietary supplementation with CB and rumen protected fat (RPF) could improve meat quality and nutritional value and promote muscle fat deposition, and that their effects would interact. The objective of this study was to investigate the effects of dietary supplementation with CB and RPF on carcass traits, meat quality, muscular antioxidant capacity, oxidative stability, AA and FA composition, and lipid metabolism of the Longissimus thoracis (LT) muscle in goats.
Methods

Animals, experimental design and treatments
Thirty-two male Saanen goats with an average age of 3 months and an initial body weight (BW) of 20.5 ± 0.82 kg were used in the study. The experiment was a completely randomized block design with a 2 × 2 factorial treatment arrangement: two levels of RPF supplementation (0 vs. 30 g/d) were combined with two levels of CB supplementation (0 vs. 1.0 g/d). The goats were blocked by BW and allocated into 8 blocks of 4 goats. Within each block, goats were randomly assigned to one of the 4 treatments. The RPF product consisted of 48% C16:0, 5% C18:0, 36% C18:1, 9% C18:2 and 2% C14:0, and was provided by Yihai Kerry Food Industry Co., Ltd. (Tianjin, China). The CB product (2 × 10^8 CFU/g) was provided by Greensnow Biological Biotechnology Co., Ltd. (Wuhan, China). The dose of CB was determined based on the manufacturer's recommendation as well as previous studies [ 10 , 11 ], while the dose of RPF was determined according to the report by Behan et al. [ 21 ]. The goats were housed individually with free access to water and provided a total mixed ration (TMR) ad libitum twice daily at 08:00 and 18:00. The daily dose of CB and RPF was mixed with 10 g of ground corn and top-dressed onto the ration at the morning and afternoon feedings. The ingredients and composition of the experimental diets are shown in Table 1. The whole experimental period consisted of 14 d for adaptation and 70 d for data and sample collection.

Data and sample collection

Feed intake and growth performance
Feed offered to and refused by each goat was recorded daily during the sample collection period to calculate the dry matter intake (DMI). The BW of each goat was recorded at the beginning and the end of the collection period before the morning feeding to determine the average daily gain (ADG) and feed conversion ratio (FCR).

Slaughtering, carcass traits and sample collection
The goats were weighed to obtain the live weight before slaughter (LWBS) after 16 h of fasting from solid food on the second day after the end of the experiment and were slaughtered in a commercial abattoir (Changhao, Harbin, China) in the early morning. After removal of the hair, head, viscera and hooves, the hot carcass weight (HCW) was recorded, and the dressing percentage was calculated individually as HCW divided by LWBS × 100. The liver, heart, spleen and kidneys were weighed, and organ indices were calculated as percentages of live weight. Back fat thickness was measured at the midpoint of the LT muscle between the 12th and 13th ribs. A section of LT muscle from the right side of each carcass was frozen at −20 °C for chemical composition, AA and FA analyses. Another LT section was frozen in liquid nitrogen for antioxidant status and gene expression determination.

Laboratory analysis

Chemical analysis
The chemical composition of DM (No. 930.15), crude protein (CP, No. 984.13) and ether extract (No. 920.39) of the feed ingredients, diets and LT muscle was analyzed according to Association of Official Analytical Chemists (AOAC) methods [ 22 ]. The content of heat-stable α-amylase-treated neutral detergent fiber (NDF) was analyzed following the methods of Van Soest et al. [ 23 ], and acid detergent fiber (ADF, No. 973.18) was determined according to AOAC [ 22 ].
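To make the performance and carcass calculations defined above explicit (ADG, FCR, dressing percentage, organ index), here is a minimal Python sketch; all numeric values are invented for illustration and are not data from this trial.

```python
def average_daily_gain(initial_bw_kg: float, final_bw_kg: float, days: int) -> float:
    """ADG (kg/d) over the collection period."""
    return (final_bw_kg - initial_bw_kg) / days

def feed_conversion_ratio(dmi_kg_per_d: float, adg_kg_per_d: float) -> float:
    """FCR = dry matter intake per unit of body weight gain."""
    return dmi_kg_per_d / adg_kg_per_d

def dressing_percentage(hcw_kg: float, lwbs_kg: float) -> float:
    """Hot carcass weight as a percentage of live weight before slaughter."""
    return hcw_kg / lwbs_kg * 100

def organ_index(organ_g: float, live_weight_kg: float) -> float:
    """Organ weight as a percentage of live weight."""
    return organ_g / (live_weight_kg * 1000) * 100

adg = average_daily_gain(20.5, 35.2, days=70)  # illustrative weights only
print(f"ADG: {adg:.3f} kg/d, FCR: {feed_conversion_ratio(1.4, adg):.2f}")
print(f"Dressing: {dressing_percentage(16.8, 34.5):.1f}%")
print(f"Spleen index: {organ_index(65.0, 34.5):.3f}%")
```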
Meat quality measurement
The pH of the LT muscle at 45 min and 24 h after slaughter (stored in air at 4 °C for 24 h) was determined by inserting the probe of a portable pH meter (HI9125; Hanna Instruments, Padova, Italy) with temperature compensation directly into the muscle. The pH meter was calibrated at two points with standard buffers (pH 6.86 and 4.01) before measurement. The meat color parameters L* (lightness), a* (redness) and b* (yellowness) were determined using a portable chromameter (CR-400, Minolta, Osaka, Japan) under a D65 light source, with a 10° standard observer, an 8-mm-diameter measuring area and a 50-mm-diameter illumination area (meat samples were stored in vacuum bags, taken out before measurement and allowed to bloom for 30 min at 4 °C). Approximately 12 g of meat was trimmed into regular pieces (2 cm × 2 cm × 2 cm) and weighed. The sample was then suspended at 4 °C for 24 h, blotted dry on filter paper and reweighed. Drip loss was calculated as the weight difference before and after dripping, expressed as a percentage of the original weight. The cooking loss and shear force of the 32 goat muscle samples were measured on the same specimens. Cooking loss was assessed as the weight difference before and after cooking. Briefly, approximately 25 g of meat (4 cm × 2 cm × 2 cm) was weighed, wrapped in a sealed bag, and heated in a water bath until the core temperature reached 70 °C. After cooling and drying at room temperature, the cooked sample was reweighed. Following the cooking loss determination, the same meat samples were used to evaluate shear force according to the method of Garba et al. [ 24 ]. The meat was cut into cuboids of 1 cm × 1 cm × 2 cm along the direction of the muscle fibers and then sheared perpendicular to the fibers with a tenderness analyzer (C-LM3B, Northeast Agricultural University, Harbin, China) fitted with a 15-kg load transducer, at a crosshead speed of 200 mm/min, with a shearing action similar to a Warner–Bratzler shear device. The samples were cut parallel to the longitudinal orientation of the myofibers. Each sample was measured 6 times, and the average was calculated.

Muscle antioxidative status
The activities of total antioxidant capacity (T-AOC, No. A015), superoxide dismutase (SOD, No. A001), glutathione peroxidase (GPX, No. A005) and catalase (CAT, No. A007), and the content of malondialdehyde (MDA, No. A003) in muscle were determined using assay kits according to the manufacturer's instructions (Nanjing Jiancheng Bioengineering Institute, Nanjing, China).

Amino acid analysis
The AA profiles of LT muscle were analyzed using standard method GB 5009.124–2016 [ 25 ]. Freeze-dried muscle samples (1.5 g) were mixed with 10 mL of 6 mol/L HCl solution and hydrolyzed at 110 °C for 22 h after flushing with nitrogen. The solutions were centrifuged to remove the precipitate and dried using an evaporator. The obtained residue was then dissolved in 2 mL of sodium citrate buffer solution and analyzed for AA composition using an automatic amino acid analyzer (LA 8080, HITACHI, Tokyo, Japan) after filtration through a 0.22-μm membrane.

Fatty acid analysis
Lipids of the freeze-dried muscle and feed samples were extracted using a chloroform–methanol mixture (2:1, v/v) according to the procedures of Folch et al. [ 26 ]. Total FA from LT muscle were transesterified into FA methyl esters (FAME) with boron trifluoride–methanol reagent, according to He et al. [ 27 ].
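The water-holding and tenderness measurements above reduce to simple weight ratios and replicate averaging; the following sketch makes the formulas explicit, with invented example values.

```python
def percent_loss(weight_before_g: float, weight_after_g: float) -> float:
    """Drip or cooking loss as a percentage of the initial sample weight."""
    return (weight_before_g - weight_after_g) / weight_before_g * 100

drip = percent_loss(12.0, 11.6)   # 12-g piece suspended 24 h at 4 degrees C
cook = percent_loss(25.0, 17.9)   # 25-g piece cooked to a 70-degree C core
print(f"drip loss: {drip:.1f}%, cooking loss: {cook:.1f}%")

# Shear force: six replicate cuts per sample, reported as the mean
shear_replicates_n = [38.2, 41.5, 36.9, 40.1, 39.4, 37.8]
print(f"shear force: {sum(shear_replicates_n) / len(shear_replicates_n):.1f} N")
```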
The FAME were analyzed using an Agilent 6890N gas chromatograph equipped with a flame ionization detector (Agilent Technologies) and a CD-2560 capillary column (100 m × 0.25 mm × 0.20 μm). The gas chromatography program had an initial temperature of 75 °C held for 30 s, which was increased to 175 °C at 20 °C/min and held for 25 min, then increased from 175 to 215 °C at 10 °C/min and finally held at 215 °C for 40 min. The injector and detector temperatures were 235 °C and 250 °C, respectively. Individual FAME were identified by comparing retention times with commercial standard mixtures (37-component FAME mix). The conjugated linoleic acid isomers and trans- and cis-octadecenoic acids were identified with reference to previous reports [ 28 ]. FAME were quantified using an internal standard: nonadecanoic acid (C19:0) methyl ester was added to each sample prior to methylation. The concentration of FA in the samples was calculated following Le et al. [ 29 ]. Each FA in muscle was expressed as mg/100 g of total FA content.

Quantitative real-time PCR
Total RNA was extracted from LT muscle (100 mg) using Trizol reagent (Vazyme, Nanjing, China) according to the manufacturer's instructions. RNA integrity was assessed using 1% agarose gel electrophoresis. The quality and quantity of RNA samples were determined using a spectrophotometer (DeNovix, USA) at 260 and 280 nm. RNA samples were converted into complementary DNA (cDNA) using a reverse transcription kit (BL699A, Biosharp, Hefei, China) according to the manufacturer's instructions. The reverse transcription system included 1 μg of total RNA, 4 μL of 5× RT MasterMix, 1 μL of 20× Oligo dT & Random Primer, and RNase-free H2O to a final volume of 20 μL. Quantitative real-time PCR was performed using a 2× Fast qPCR Master Mixture (Green) kit (DiNing, Beijing, China). Primers specific for acetyl-CoA carboxylase ( ACC ), fatty acid synthase ( FAS ), stearoyl-CoA desaturase ( SCD ), sterol regulatory element-binding transcription factor 1 ( SREBP-1 ), lipoprotein lipase ( LPL ), CCAAT/enhancer binding protein alpha ( C/EBPα ), hormone-sensitive lipase ( HSL ), carnitine palmitoyltransferase-1B ( CPT1B ) and peroxisome proliferator-activated receptor γ ( PPARγ ) were designed using Primer 5.0 software (Table 2) and synthesized by Sangon Biotech Co., Ltd. (Shanghai, China). A portion (1 μL) of each cDNA sample was amplified in a 20-μL PCR reaction containing 0.5 μL each of upstream and downstream primers, 10 μL of 2× Fast qPCR Master Mixture (Green), and ddH2O to a final volume of 20 μL. The Line-Gene 9600 Plus real-time PCR system (Bioer, Hangzhou, China) was run as follows: 94 °C for 2 min, followed by 40 cycles of 94 °C for 15 s, 60 °C for 15 s and 72 °C for 30 s. All samples were assessed in triplicate. β-Actin was selected as the reference gene to normalize the mRNA expression of target genes. The relative expression of target genes was evaluated using the 2^(−ΔΔCT) method [ 30 , 31 ].

Statistical analysis
Data were analyzed using the MIXED procedure (SAS 9.4, SAS Institute Inc., Cary, NC, USA). The model included the fixed effects of CB, RPF and the CB × RPF interaction, and the random effect of goat. The initial BW was also included in the model as a covariate. Tukey's multiple comparison test was used to examine significance among treatments when the interaction was significant. Results are reported as least squares means.
Effects at P < 0.05 were considered statistically significant, and effects at 0.05 < P ≤ 0.10 were considered trends.
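As an illustration of the 2^(−ΔΔCT) calculation named above: each target-gene Ct is normalized to β-actin and then to a calibrator group. A minimal Python sketch with invented Ct values follows; using the no-CB, no-RPF group as calibrator is an assumption for illustration only.

```python
def relative_expression(ct_target: float, ct_ref: float,
                        ct_target_cal: float, ct_ref_cal: float) -> float:
    """2^-ddCT for one sample against the calibrator (control) group."""
    d_ct = ct_target - ct_ref              # normalize target gene to beta-actin
    d_ct_cal = ct_target_cal - ct_ref_cal  # calibrator dCT (assumed control group)
    return 2.0 ** -(d_ct - d_ct_cal)

# e.g., LPL in a CB-supplemented goat vs. the control calibrator (invented Ct values)
fold = relative_expression(23.1, 17.4, ct_target_cal=24.3, ct_ref_cal=17.5)
print(f"LPL fold change: {fold:.2f}")  # > 1 indicates upregulation
```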
Results

Growth performance and carcass traits
As shown in Table 3, DMI ( P = 0.005) and ADG (trend; P = 0.058) were increased by RPF supplementation. There was an interaction between CB and RPF for LWBS ( P = 0.041): CB supplementation increased ( P < 0.05) LWBS when RPF was not added. The HCW was increased both by dietary CB supplementation (trend; P = 0.093) and by RPF addition ( P = 0.046); similarly, back fat was increased by supplementation with CB (trend; P = 0.100) and RPF ( P = 0.006). Dietary CB supplementation increased the spleen weight ( P = 0.011) and spleen index ( P = 0.045) of goats.

Meat quality
The results for meat quality are presented in Table 4. There were interactions between CB and RPF for shear force ( P = 0.015) and IMF ( P = 0.049). CB supplementation reduced ( P < 0.05) shear force in the absence of RPF supplementation, whereas IMF was increased ( P < 0.05) by the combination of CB and RPF supplementation. Furthermore, pH 24h ( P = 0.009) and a* ( P = 0.007) were increased, and L* ( P < 0.001) and drip loss ( P = 0.005) were decreased, by CB supplementation. Overall, RPF supplementation did not affect meat quality, except that it reduced ( P < 0.001) shear force in the absence of CB and increased ( P < 0.001) IMF regardless of CB addition.

Antioxidative status
The antioxidant parameters of the LT muscle are shown in Table 5. Dietary CB supplementation increased the activities of T-AOC ( P = 0.050) and GPX ( P = 0.006) and decreased the muscle MDA content ( P = 0.044). The antioxidant activity of the LT muscle was not affected by dietary RPF inclusion.

Amino acid composition
As shown in Table 6, an interaction ( P = 0.003) between CB and RPF was observed only for the content of lysine (Lys): CB supplementation increased ( P < 0.05) the Lys content in the absence of RPF. The contents of essential AA (EAA) ( P = 0.027), flavor AA (FAA) ( P = 0.010) and total AA (TAA) ( P = 0.024) were increased by CB supplementation. CB supplementation also increased the contents of the EAA arginine (Arg) ( P = 0.013), histidine (His) ( P = 0.035) and threonine (Thr) ( P = 0.026), and of the non-essential AA (NEAA) serine (Ser) ( P = 0.018), aspartic acid (Asp) ( P = 0.023) and glutamic acid (Glu) ( P = 0.047). Dietary RPF supplementation increased the contents of isoleucine (Ile) ( P = 0.049) and tyrosine (Tyr) ( P = 0.044) and decreased ( P = 0.003) the Lys content.

Fatty acid composition
Dietary CB supplementation increased the contents of 18:3 n-3 ( P < 0.001), 20:5 n-3 ( P = 0.003) and polyunsaturated FA (PUFA) ( P = 0.048) but decreased the content of 16:0 ( P = 0.013), without altering the profiles of other FA (Table 7). In addition, dietary RPF supplementation increased the contents of total FA ( P = 0.003), 16:0 ( P < 0.001), c9-18:1 ( P = 0.002), 20:2 n-6 ( P = 0.014), SFA ( P = 0.031) and MUFA ( P = 0.004).

Lipid-metabolic gene expression
As shown in Fig. 1, interactions between CB and RPF were observed for SCD ( P = 0.001) and PPARγ ( P = 0.025). Dietary CB supplementation did not change the expression of SCD when combined with RPF, but it downregulated ( P < 0.05) SCD expression without RPF supplementation. Dietary CB supplementation upregulated ( P < 0.05) the expression of PPARγ when RPF was not added to the diet. Moreover, CB supplementation upregulated ( P = 0.034) LPL expression in LT muscle.
Dietary RPF supplementation upregulated the expression of ACC ( P = 0.003), FAS ( P = 0.038), and SREBP-1 ( P = 0.008).
Discussion

Growth performance and carcass traits
To date, information regarding the effect of CB supplementation on the growth performance of ruminants is still scarce. Li et al. [ 32 ] observed that the growth performance of Holstein heifers was improved by dietary CB inclusion. However, Zhang et al. [ 10 ] reported that supplementation of CB in goat diets did not change fattening performance, which is consistent with the present study. Moreover, Cai et al. [ 11 ] found that supplementation of CB alone or combined with Saccharomyces cerevisiae improved the growth performance of heat-stressed goats. Thus, the beneficial effects of CB supplementation on animal growth performance appear to vary between studies with the animal species used (dairy cattle vs. goats) or the environmental conditions (e.g., heat stress). It has been reported that the amount and composition of fat supplemented in diets are key factors affecting the DMI of ruminants [ 33 – 35 ]. In the present study, supplementation of RPF increased DMI, which is consistent with the study by De Souza et al. [ 36 ], who found an increase in the DMI of dairy cows fed a C16:0-enriched diet. Similarly, Bai et al. [ 35 ] reported that supplemental rumen bypass fat (87% C16:0 + 10% C18:0) increased the DMI of Angus bulls. A possible explanation could be the C18:0 content (5%) of the RPF: as speculated by Rico et al. [ 37 ], C18:0 inhibits the secretion of hypophagic gut peptides, such as glucagon-like peptide 1 and cholecystokinin, that would otherwise promote a sense of fullness, consequently allowing a greater DMI. In addition, previous studies reported that supplementation of RPF increased the digestibility of nutrients such as protein, lipid and fiber in steers and ewes [ 38 , 39 ]. Therefore, the greater ADG with RPF supplementation could be explained by the greater DMI and nutrient digestibility. In this study, CB supplementation increased the LWBS, but this effect interacted with RPF; the effects of CB and RPF on LWBS appeared to be independent rather than additive. The trend toward increased HCW with CB is consistent with the increased LWBS. The increase in LWBS with CB may reflect the fact that CB can provide metabolites, especially short-chain FA, as an energy source to the digestive enzyme system, thus increasing BW [ 15 ]. The spleen is the largest immune organ in the animal body; it improves the body's resistance and minimizes invasion by pathogenic bacteria, and its development is therefore believed to be directly associated with the immune function of the body. The increased spleen weight and its percentage of LWBS suggest that feeding CB to goats may improve immune function. Dietary energy level is closely associated with carcass traits, and the dressing percentage is considered a key indicator of carcass merit. In this study, the increased HCW with RPF addition was in agreement with the increased LWBS, while the increased backfat thickness with RPF supplementation could be explained by increased fat deposition resulting from the FA supplied by RPF. The increased LWBS, HCW and backfat thickness without any change in dressing percentage with RPF addition agree with Awawdeh et al. [ 40 ], who reported that dietary fat supplementation improved the live weight and backfat thickness of lambs without changing the dressing percentage.
This suggests that added dietary fat is directed especially toward backfat deposition, with minimal impact on dressing percentage.

Meat quality
Muscle pH is primarily related to shear force, water retention capacity and meat color. Glycolysis is a major metabolic pathway in the postmortem period, resulting in accumulation of lactic acid, which leads to a rapid decline in muscle pH [ 41 ]. In this study, although pH 45min was not changed, the increased pH 24h with CB addition suggests a reduction in the muscle glycolysis rate. Notably, the drip loss of meat is related to the ultimate pH and the speed of the pH drop, with lower drip loss resulting from a higher ultimate pH, as reported by Di Luca et al. [ 42 ]. Results from the current study demonstrated that the addition of CB decreased drip loss, in accordance with the results of Liu et al. [ 12 ]. Studies have reported that the antioxidant capacity of meat is inversely correlated with drip loss and that damage to cell membrane integrity caused by lipid oxidation is associated with increased drip loss [ 43 ]. Therefore, the decrease in drip loss in our study indirectly reflects the inhibitory effect of CB on oxidative damage and its positive effect on the water-holding capacity of muscle. Meat color is one of the most used criteria in assessing meat quality and is the single most important factor driving a consumer's decision to purchase meat [ 44 ]. Meat color depends mainly on oxidation and light reflection. Consumers prefer bright red meat because they associate a red color with freshness. The content of myoglobin is closely related to meat color values: when oxymyoglobin is oxidized to metmyoglobin, the color of meat changes from bright red to reddish brown [ 45 ]. Studies have found that supplementation with exogenous antioxidants can prevent lipid oxidation, thus stabilizing the oxymyoglobin content [ 46 ]. In this study, CB supplementation would be expected to improve oxidative stability given the increased activities of T-AOC and GPX and the decreased content of MDA in LT muscle, which was reflected in meat color changes with an increase in a* values and a decrease in L* values. In a similar study, Cai et al. [ 11 ] found that dietary addition of CB increased the a* value and decreased the L* value in the muscle of pigs. Tenderness is considered the most important palatability characteristic of meat and can be evaluated by shear force. In this study, supplementation of CB decreased the shear force of LT muscle. This may be attributed to the decrease in drip loss, which is positively correlated with meat tenderness. In addition, the decrease in shear force when goats were fed RPF can be attributed to the higher IMF, as reported in beef cattle [ 47 ]. However, although RPF influenced shear force, it had no effect on pH, meat color or water retention capacity in the present study. In support of our findings, Parente et al. [ 19 ] showed that diets supplemented with oil containing a mixture of FA decreased shear force but had no effect on pH, meat color or other physical traits of meat. IMF positively influences sensory quality traits of meat, including tenderness, juiciness and flavor [ 48 ]. In this study, although supplementation of CB alone did not change the IMF content, it increased the IMF content when RPF was added to the diet, suggesting an additive effect between CB and RPF on IMF content.
We speculate that adding CB to the diet may facilitate the deposition of FA in muscle when a high dietary FA content is available.

Oxidative stability of the longissimus thoracis muscle
Lipid oxidation has a negative impact on meat quality and shelf life and can lead to deterioration of the flavor, color and nutritional value of meat [ 49 ]. Reducing lipid peroxidation or improving antioxidant status is an effective way to increase the quality and shelf life of meat products. It has been reported that CB supplementation in broiler diets increased SOD activity in liver tissue while decreasing MDA concentrations in serum and liver [ 6 ]. Our findings demonstrate for the first time that dietary CB supplementation in finishing goats beneficially increased T-AOC and GPX activities and decreased the lipid oxidation product MDA in LT muscle. The antioxidant activity of muscle is also closely related to meat quality. Zhang et al. [ 50 ] reported that increased antioxidant activity in meat can inhibit oxidative stress, maintain meat color stability and reduce drip loss. Therefore, the current results suggest that supplementation with CB has a positive effect on regulating the redox state of the goat LT muscle and improving meat quality. The effect of CB on antioxidant capacity may be partly attributed to the beneficial effects of butyric acid and H2 produced by its metabolism. Butyric acid can mitigate oxidative damage by reducing reactive oxygen species and increasing antioxidant enzyme levels [ 51 ], and H2 mediates selective scavenging of harmful substances such as hydroxyl radicals and oxygen free radicals [ 52 ].

Amino acid profile of longissimus thoracis muscle
The flavor and nutritional value of meat are closely related to the profile and content of AA. Specific AA are thought to contribute to its desired flavors: Arg and Phe provide a bitter taste, Glu and Asp an umami flavor, and Gly, Ala and Ser a sweet taste [ 43 ]. The EAA are essential to meet human AA requirements and play a critical role in growth, regulation of immune function and maintenance of normal metabolism. In the present study, dietary CB supplementation increased the concentrations of EAA (+6.5%), FAA (+6.5%) and TAA (+5.5%), as well as the concentrations of individual AA (from +6.1% to +16.6%), including Arg, His, Lys, Thr, Ser, Asp and Glu. It is worth mentioning that Arg, Asp and Glu belong to the FAA, and Glu is the most important FAA, playing an important role in the freshness of meat and in buffering sour and salty tastes. Therefore, our results suggest that supplementation with CB could improve the flavor and nutritional value of goat meat. In contrast, dietary RPF supplementation appeared to have a limited effect on the AA profile and content of meat. The increased contents of Ile and Tyr may be due to improved AA digestibility, as dietary fat has been reported to improve AA digestibility [ 53 ]. However, the reduction of the Lys content by RPF is not well understood, and further research is needed to clarify the mechanism by which RPF influences AA composition.

Fatty acid profile of longissimus thoracis muscle
It is well documented that FA are important indicators of meat quality and nutritional value, as well as the basis of the characteristic flavor of meat.
In recent years, researchers have increasingly focused on the regulation of FA profiles in meat because cardiovascular heart disease is considered closely related to diets high in SFA, specifically C16:0 and myristic acid (C14:0) [ 54 ]. The MUFA and PUFA play important roles in protecting the heart, lowering blood lipids and regulating blood sugar. Therefore, a decrease in SFA content and an increase in UFA content can improve the nutritional value of goat meat. The current findings showed that CB supplementation decreased the content of C16:0 but increased that of C18:3, C20:5n-3 and PUFA in the LT muscle. α-Linolenic acid (C18:3n-3) is a precursor for the synthesis of EPA (C20:5n-3) and DHA (C22:6n-3), to which it is converted via elongation and desaturation enzymes located in the liver. EPA and DHA play important regulatory roles in human health: they can limit the synthesis of lipoproteins in the liver, improve cardiovascular function and modulate the inflammatory immune function of the body. Thus, it can be inferred that CB has the potential to improve the FA profile of muscle and enhance the nutritional value and flavor of mutton. The mechanism by which CB regulates muscle FA composition and the content of muscle PUFA is unclear. We speculate that butyric acid, a metabolic product of CB, may play a key role in maintaining intestinal health and in regulating lipid metabolism and epithelial barrier function, which may benefit the digestion and absorption of PUFA in the gastrointestinal tract. In addition, studies have suggested that increased PUFA concentrations in meat may be due to the protective effects of dietary antioxidants [ 55 ], which is consistent with the increased muscle antioxidant capacity conferred by CB in the present study. Manipulating the diet is an effective way to affect the FA composition of ruminant meat. Oleic acid (c9-18:1) is reported to be the most abundant UFA in mutton and has cholesterol-lowering and blood lipid-regulating effects [ 36 ]. The higher proportions of c9-18:1 and C16:0 in LT muscle with RPF supplementation in the current study may be explained by the large amounts of C16:0 and C18:1 supplied by the RPF. These findings are consistent with Ladeira et al. [ 56 ], who reported that dietary supplementation with RPF increased the concentrations of C16:0, C18:1 and MUFA in the LT muscle of bulls.

Relative mRNA expression in longissimus thoracis muscle
Muscle lipid accumulation is generally the result of a balance between lipid availability (via circulatory lipid uptake or de novo lipogenesis) and lipid disposal (via FA oxidation). The process of lipid accumulation involves many key enzymes and transcription factors [ 35 ]. LPL is a rate-limiting enzyme that hydrolyzes lipoproteins, chylomicrons and low-density lipoproteins. The products of LPL-catalyzed reactions, fatty acids and monoglycerides, are partially absorbed by adipose tissue and skeletal muscle and stored in the form of neutral lipids. In this study, CB addition increased the expression of LPL . Overexpression of LPL has been related to increased triacylglycerol accumulation and fat deposition in mammalian muscle [ 57 ]. Previous studies have reported that the expression of SCD is negatively correlated with PUFA, EPA and DHA in beef cattle [ 58 ], which is consistent with our results showing that the expression of SCD in the LT muscle of goats was decreased, while the contents of PUFA and EPA were increased, by dietary CB supplementation.
Furthermore, SCD catalyzes the dehydrogenation of SFA to MUFA, in particular converting C16:0 and C18:0 into C16:1 and C18:1, respectively, and is closely related to the differentiation of preadipocytes. In this study, we found increased expression of SCD when goats were fed RPF diets, which may explain why RPF increased the content of C18:1 in LT muscle, indicating that RPF can change the expression of SCD and thereby affect the FA composition of muscle tissue. PPARγ , a member of the nuclear receptor superfamily, can induce adipocyte differentiation and regulate the expression of ACC , FAS and LPL to induce the accumulation of lipid droplets in skeletal muscle, thereby increasing the IMF content [ 59 ]. In our study, dietary CB supplementation increased the expression of PPARγ , indicating that CB could promote fat deposition. The increased expression of PPARγ may be partly related to the butyric acid produced by CB metabolism, as butyric acid has been reported to affect lipogenesis by regulating the PPARγ signaling pathway [ 60 ]. In addition, the mRNA expression of PPARγ in LT muscle was also increased by RPF in this study. Similarly, Li et al. [ 61 ] found that oleic acid increased the mRNA expression of PPARγ in bovine muscle satellite cells. SREBP-1 is a nuclear transcription regulator that controls the expression of many downstream target genes involved in lipid metabolism. ACC is the rate-limiting enzyme in de novo FA synthesis, catalyzing the synthesis of malonyl-CoA for the subsequent biosynthesis of long-chain FA. FAS is considered a determinant of the maximal capacity of a tissue to synthesize fatty acids by the de novo pathway and catalyzes the last step of the FA biosynthesis pathway. Previous studies have reported that the expression of ACC and FAS in the LT muscle of Korean steers is positively correlated with IMF content [ 62 ]. In our study, dietary addition of RPF increased the expression of ACC , FAS and SREBP-1 in LT muscle, indicating that RPF can contribute to fat synthesis. Yang et al. [ 63 ] reported that the expression of ACC and FAS in LT muscle was enhanced with increasing dietary energy levels. RPF can improve the energy density of the diet, and a higher dietary energy supply means that cells can take up more energy, increasing the expression of these lipogenic genes, promoting lipid metabolism and resulting in fat deposition in the tissue.
Conclusion
In conclusion, CB supplementation of the goat diet improved meat quality by enhancing antioxidant capacity, color and pH and by improving the AA and FA composition of LT muscle. Specifically, dietary CB supplementation increased the IMF content when RPF was also supplemented, suggesting an additive effect between CB and RPF on meat quality and composition. In addition, RPF supplementation of the goat diet improved growth performance, carcass traits and FA profiles by increasing the concentrations of 16:0 and c9-18:1. The expression of ACC , FAS , SREBP-1 and PPARγ was also increased by RPF supplementation, consequently increasing the intramuscular fat content. Overall, supplementation of the goat diet with CB and RPF has beneficial effects on carcass traits and meat quality and promotes fat deposition by upregulating the expression of lipogenic genes in LT muscle.
Background
Clostridium butyricum (CB) is a probiotic that can regulate intestinal microbial composition and improve meat quality. Rumen protected fat (RPF) has been shown to increase dietary energy density and provide essential fatty acids. However, it is still unknown whether dietary supplementation with CB and RPF exerts beneficial effects on the growth performance and nutritional value of goat meat. This study aimed to investigate the effects of dietary CB and RPF supplementation on the growth performance, meat quality, oxidative stability and meat nutritional value of finishing goats. Thirty-two goats (initial body weight, 20.5 ± 0.82 kg) were used in a completely randomized block design with a 2 × 2 factorial treatment arrangement: RPF supplementation (0 vs. 30 g/d) × CB supplementation (0 vs. 1.0 g/d). The experiment included a 14-d adaptation period and a 70-d data and sample collection period. The goats were fed a diet consisting of 400 g/kg peanut seedling and 600 g/kg corn-based concentrate (dry matter basis).

Results
Interactions between CB and RPF were rarely observed for the variables measured, except that shear force was reduced ( P < 0.05) by adding CB or RPF alone or in combination, and the increase in intramuscular fat (IMF) content with RPF addition was more pronounced ( P < 0.05) with than without CB. The pH 24h ( P = 0.009), a* values ( P = 0.007), total antioxidant capacity ( P = 0.050), glutathione peroxidase activity ( P = 0.006), and concentrations of 18:3 ( P < 0.001), 20:5 ( P = 0.003) and total polyunsaturated fatty acids ( P = 0.048) were increased, whereas the L* values ( P < 0.001), shear force ( P = 0.050) and malondialdehyde content ( P = 0.044) were decreased, by adding CB. Furthermore, CB supplementation increased the essential amino acid ( P = 0.027), flavor amino acid ( P = 0.010) and total amino acid ( P = 0.024) contents, upregulated the expression of lipoprotein lipase ( P = 0.034) and peroxisome proliferator-activated receptor γ ( PPARγ ) ( P = 0.012), and downregulated the expression of stearoyl-CoA desaturase ( SCD ) ( P = 0.034). RPF supplementation increased dry matter intake ( P = 0.005), average daily gain (trend, P = 0.058), hot carcass weight ( P = 0.046), backfat thickness ( P = 0.006), and the concentrations of 16:0 ( P < 0.001) and c9-18:1 ( P = 0.002), and decreased the shear force ( P < 0.001) and the isoleucine ( P = 0.049) and lysine ( P = 0.003) contents of meat. In addition, the expression of acetyl-CoA carboxylase ( P = 0.003), fatty acid synthase ( P = 0.038), SCD ( P < 0.001) and PPARγ ( P = 0.022) was upregulated by RPF supplementation, resulting in a higher ( P < 0.001) IMF content.

Conclusions
CB and RPF could be fed to goats to improve growth performance, carcass traits and meat quality, and to promote fat deposition by upregulating the expression of lipogenic genes in the Longissimus thoracis muscle.

Keywords
Abbreviations
AA: Amino acid; ACC: Acetyl-CoA carboxylase α; BW: Body weight; CAT: Catalase; CB: Clostridium butyricum; C/EBPα: CCAAT/enhancer binding protein alpha; CPT1B: Carnitine palmitoyltransferase-1B; FA: Fatty acids; FAME: Fatty acid methyl esters; FAS: Fatty acid synthase; GPX: Glutathione peroxidase; HCW: Hot carcass weight; HSL: Hormone-sensitive lipase; IMF: Intramuscular fat; LPL: Lipoprotein lipase; LT: Longissimus thoracis; LWBS: Live weight before slaughter; MDA: Malondialdehyde; MUFA: Monounsaturated fatty acids; PPARγ: Peroxisome proliferator-activated receptor γ; PUFA: Polyunsaturated fatty acids; RPF: Rumen protected fat; SCD: Stearoyl-CoA desaturase; SFA: Saturated fatty acids; SOD: Superoxide dismutase; SREBP-1: Sterol regulatory element-binding transcription factor 1; T-AOC: Total antioxidant capacity

Acknowledgements
The authors thank Northeast Agricultural University for providing the supporting facilities.

Authors' contributions
MMZ, XLX and PXJ designed and conceived the experiments. MMZ, ZYZ, XLZ, CML, XTL and XYY carried out the experiments. MMZ and PXJ drafted the manuscript. WZY, HSX, MBN and XYL supervised the work and revised the final version of the manuscript. All authors read and approved the final manuscript.

Funding
This research was supported by the National Key Research and Development Program of China (2022YFD1301105), the earmarked fund for CARS (CARS-36), the Natural Science Foundation of Heilongjiang Province (YQ2021C018), the Postdoctoral Foundation of Heilongjiang Province (LBH-Z21100), and the Open Project Program of International Joint Research Laboratory in Universities of Jiangsu Province of China for Domestic Animal Germplasm Resources and Genetic Improvement (IJRLD-KF202204).

Availability of data and materials
The datasets produced and/or analyzed during the current study are available from the corresponding author on reasonable request.

Declarations

Ethics approval and consent to participate
The animal protocol in this study was approved by the Animal Care and Use Committee of Northeast Agricultural University (Permit number NEAU-[2011]-9).

Consent for publication
Not applicable.

Competing interests
The authors declare that they have no conflicts of interest.
CC BY
no
2024-01-16 23:45:34
J Anim Sci Biotechnol. 2024 Jan 15; 15:3
oa_package/66/5a/PMC10789026.tar.gz
PMC10789027
38225640
Introduction

Background and rationale {6a}
Creation of distal surgical anastomoses can be difficult in patients with severe complex or diffuse coronary artery disease (CAD) undergoing coronary artery bypass grafting (CABG). Incomplete revascularization can affect patient survival and quality of life [ 1 ]. Coronary endarterectomy (CE) was first introduced in 1957 by Bailey et al. and is now performed as an adjunct to CABG (CE-CABG) in patients with diffuse CAD to achieve complete revascularization [ 2 ]. Although long-term survival is comparable between CE-CABG and CABG alone, early surgical outcomes related to postoperative myocardial infarction (POMI) and mortality are worse in those undergoing CE-CABG [ 3 – 5 ]. These patients mainly experience graft occlusion owing to thrombosis in endarterectomized regions. During endarterectomy, the vascular intima is carefully dissected and removed, which exposes the subendothelial tissue to blood flow, causing fibrin–platelet mural thrombus and activation of the coagulation cascade [ 6 – 8 ]. Dual antiplatelet therapy (DAPT) and postoperative anticoagulation may play a role in avoiding POMI after CE-CABG. Current clinical guidelines recommend DAPT as soon as feasible after CABG and continuation for at least 12 months [ 9 ]. However, the use of antithrombotic medication within 24 h of CE-CABG is controversial. Reported rates of hemorrhage, POMI, and short-term mortality after CE-CABG vary substantially between studies [ 10 – 12 ]. Some centers have begun to use a heparin infusion 4 h after surgery in patients undergoing CE if the chest tube output is less than 50 to 100 mL/h. However, the incidence of POMI appears to be relatively high even with early heparin infusion [ 10 , 11 ]. Tirofiban is a small-molecule, nonpeptide agent that selectively inhibits fibrinogen–platelet GP IIb/IIIa binding and reduces ischemic events in patients with acute coronary syndrome (ACS) undergoing percutaneous coronary intervention [ 13 , 14 ]. Since perioperative use of GP IIb/IIIa inhibitors does not increase surgical bleeding after CABG [ 15 – 17 ], we hypothesized that early tirofiban administration before initiation of DAPT reduces the incidence of thrombosis and POMI after CE-CABG without increasing bleeding incidence or severity.

Objectives {7}
The primary goal of the THACE-CABG trial is to demonstrate the safety of early tirofiban administration after CE-CABG (phase I). If safety is demonstrated, its efficacy in reducing the incidence of major cardiovascular and cerebrovascular events (MACCEs) will be evaluated in comparison to early heparin administration (phase II).

Trial design {8}
The THACE-CABG trial is a multicenter prospective randomized controlled trial. Patients will be randomized into one of two parallel groups at a 1:1 allocation ratio: an experimental group (tirofiban) and a control group (heparin). The safety portion of the study is a noninferiority trial with a primary endpoint of cumulative chest tube drainage in the first 24 h after surgery. The efficacy portion of the study is a superiority trial that will evaluate MACCEs in the 30 days after surgery as the primary endpoint. MACCEs include all-cause death, nonfatal MI, nonfatal acute ischemic stroke, and early revascularization.
Methods: participants, interventions, and outcomes

Study setting {9}
Beijing Anzhen Hospital, Capital Medical University, Beijing, China. Beijing Tiantan Hospital, Capital Medical University, Beijing, China. Peking University Shenzhen Hospital, Shenzhen, Guangdong Province, China. The First Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang Province, China.

Eligibility criteria {10}

Inclusion criteria:
1. Age 18–80 years.
2. Primary diagnosis of non-ST elevation ACS or stable ischemic heart disease, and appropriate candidate for isolated CABG.
3. Ability to understand the nature of the study and the study-related procedures and to comply with them.
4. Undergoing CE-CABG.
5. Agreeable to provide written informed consent for surgery and study participation.

Exclusion criteria:
1. Severe congestive heart failure (New York Heart Association class IV) or left ventricular ejection fraction ≤ 35%.
2. Tumor or suspected tumor.
3. Severe renal insufficiency (creatinine clearance < 30 mL/min) and/or chronic hemodialysis.
4. Cirrhosis or positive serum HBsAg/HBeAg, HCV-RNA, or anti-HCV antibodies.
5. History of heparin/tirofiban allergy or thrombocytopenia after heparin/tirofiban.
6. Coagulation dysfunction, or history of platelet abnormality or thrombocytopenia (preoperative platelet count < 150,000/mm³).
7. History of active internal hemorrhage, intracranial hemorrhage, intracranial tumor, arteriovenous malformation, or aneurysm.
8. History of gastrointestinal or urogenital bleeding in the 6 months prior to randomization.
9. Stroke (any cause) or transient cerebral ischemia in the 6 months prior to randomization.
10. Pregnancy or breastfeeding.
11. Use of additional anticoagulation therapy, intra-aortic balloon pump, extracorporeal membrane oxygenation, or other circulatory support before, during, or in the first 2 h after surgery (individuals who experience excessive bleeding or require mechanical circulatory support and excessive anticoagulant medication within 2 h have intrinsic bleeding risks and are not candidates for further antithrombotic therapy).
12. CE-CABG failure, defined as follows: graft flow and pulsatility index (PI) are measured at the end of the operation before protamine administration; if PI ≥ 5, the graft requires re-anastomosis, and if PI remains ≥ 5 after the influence of operative technique has been excluded, the operation is considered to have failed.
13. Chest tube bleeding > 150 mL in the first 2 h after surgery.
14. New-onset abnormal Q wave on the ECG within 2 h after surgery (indicating POMI; tirofiban or heparin is started 2 h after surgery to reduce POMI, so individuals who already have POMI within 2 h will be excluded).

Who will take informed consent? {26a}
A member of the study team will check eligibility and obtain written informed consent after explaining the study and answering any questions during a visit to the research site for the baseline assessment. No study procedures will take place until consent is provided.

Additional consent provisions for collection and use of participant data and biological specimens {26b}
Individuals who choose to participate in the study will be asked to give permission for long-term monitoring of their electronic medical records (with no additional patient contact). Permission will be obtained to utilize study data in other ways, such as for pooled analyses of anonymized data, natural history studies, meta-analyses, and health outcomes research.
Discussion
The THACE-CABG trial is a multicenter randomized controlled trial designed to assess the safety and efficacy of tirofiban infusion followed by DAPT in patients undergoing CE-CABG. In non-inferiority trials, the null and alternative hypotheses are reversed compared to superiority trials: the null hypothesis supposes a difference between the compared treatments, while the alternative hypothesis supposes that they do not differ or that one is non-inferior to the other. In phase I of the THACE-CABG trial, the null (H0) and alternative (H1) hypotheses are as follows:

H0: N − C > Δ (chest tube drainage in the first 24 h after CE in the tirofiban group (N) exceeds that in the heparin group (C) by more than the pre-defined noninferiority margin Δ = 200 mL; non-inferiority is not shown).

H1: N − C ≤ Δ (chest tube drainage in the first 24 h after CE in the tirofiban group is non-inferior to that in the heparin group).

In a non-inferiority study, five trial outcomes are possible (Fig. 3). For non-inferiority to be demonstrated (cases i, ii, and iii), the point estimate and 95% confidence interval of the difference in chest tube bleeding between the tirofiban and heparin groups must lie below the non-inferiority margin of 200 mL. If non-inferiority is shown, three situations are possible: (1) non-inferiority and superiority, in which the point estimate and the entire confidence interval lie below zero (case i); (2) non-inferiority and non-superiority, in which the point estimate lies below 200 mL and the confidence interval includes zero but not 200 mL (case ii); and (3) non-inferiority and inferiority, in which the point estimate and the entire confidence interval lie between zero and 200 mL (case iii). Non-inferiority will not be shown if the confidence interval includes both the 200-mL non-inferiority margin and the zero line (case iv). Finally, inferiority will be established if the point estimate and the entire confidence interval lie above the 200-mL line (case v). For the primary efficacy outcome, the hypothesis to be tested is whether the incidence of MACCEs after CE-CABG is lower with tirofiban than with heparin. The study will be considered positive if statistical significance at the 0.05 level (two-tailed) is achieved. Surgical risk is high in patients with an ejection fraction < 35%, and bleeding events in these patients may be life-threatening; therefore, such patients will not be included until the safety of tirofiban has been confirmed in others. Although this approach may introduce bias, it is warranted from a patient safety perspective. In previous research, PI was proposed as a significant predictor of graft quality, reflecting the distal resistance of the graft vasculature, with a suggested cutoff value of 5 [ 28 ]. A high PI suggests serious anastomotic stenosis, competitive flow, or severe distal coronary stenosis. All of these factors can contribute to thrombosis and graft failure. This thrombosis mechanism, however, differs from the one driven by endothelial removal and blood stasis after CE. To avoid conflating these mechanisms, patients with CE-CABG failure will be excluded. Our cohort is therefore relatively selected, which could be a source of bias. In conclusion, this is the first randomized controlled trial designed to compare safety and MACCEs between tirofiban and heparin administration after CE-CABG. In addition, we plan to use the trial's database to conduct follow-up studies that evaluate long-term CE-CABG outcomes.
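To make the phase I decision rule concrete, the sketch below computes the mean difference in 24-h chest tube drainage with a pooled-variance 95% confidence interval and classifies the outcome against the 200-mL margin. It is illustrative only: all data are simulated, and the trial's actual analysis plan may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tirofiban = rng.normal(480, 150, 133)  # hypothetical 24-h drainage volumes, mL
heparin = rng.normal(470, 150, 133)    # 133 per arm matches phase I's 266 patients

n1, n2 = len(tirofiban), len(heparin)
diff = tirofiban.mean() - heparin.mean()
# pooled-variance standard error of the difference in means
sp2 = ((n1 - 1) * tirofiban.var(ddof=1) + (n2 - 1) * heparin.var(ddof=1)) / (n1 + n2 - 2)
se = (sp2 * (1 / n1 + 1 / n2)) ** 0.5
t_crit = stats.t.ppf(0.975, n1 + n2 - 2)
lo, hi = diff - t_crit * se, diff + t_crit * se

MARGIN = 200.0  # pre-defined non-inferiority margin, mL
if hi < MARGIN:
    verdict = "non-inferior" + (" and superior" if hi < 0 else "")
elif lo > MARGIN:
    verdict = "inferior (case v)"
else:
    verdict = "not shown (CI crosses the margin, case iv)"
print(f"difference {diff:.0f} mL, 95% CI ({lo:.0f}, {hi:.0f}) -> {verdict}")
```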
Trial status The study began recruitment on September 1, 2022, and is expected to end in late 2026. Current protocol: version 20220620.
Background For complete revascularization, patients with diffuse coronary artery disease may require coronary endarterectomy combined with coronary artery bypass grafting (CE-CABG). Unfortunately, CE leaves the vessel without endothelium, which raises the risk of thrombotic events. Although dual antiplatelet therapy (DAPT) has been shown to reduce thrombotic events, the risk of perioperative thrombotic events is high during the high-risk period immediately after CE-CABG, and there is no consistent protocol for bridging to DAPT. This trial aims to compare safety and efficacy between tirofiban and heparin as DAPT bridging strategies after CE-CABG. Methods In phase I, 266 patients undergoing CE-CABG will be randomly assigned to tirofiban and heparin treatment groups to compare the two treatments in terms of the primary safety endpoint, chest tube drainage in the first 24 h. If the phase I trial shows tirofiban non-inferiority, phase II will commence, in which an additional 464 patients will be randomly assigned. All 730 patients will be studied to compare major cardiovascular and cerebrovascular events (MACCEs) between the groups in the first 30 days after surgery. Discussion Given the possible benefits of tirofiban administration after CE-CABG, this trial has the potential to advance the field of adult coronary heart surgery. Trial registration chictr.org.cn, ChiCTR2200055697. Registered 6 January 2022. https://www.chictr.org.cn/com/25/showproj.aspx?proj=149451 . Current version: 20220620.
Administrative information Note: the numbers in curly brackets in this protocol refer to SPIRIT checklist item numbers. The order of the items has been modified to group similar items (see http://www.equator-network.org/reporting-guidelines/spirit-2013-statement-defining-standard-protocol-items-for-clinical-trials/ ). Interventions Explanation for the choice of comparators {6b} After CE, the lack of endothelium leads to activation of the coagulation cascade [ 18 – 20 ]. Therefore, antithrombotic treatment is required [ 20 ]; however, no standard protocol exists. Heparin infusion followed by warfarin for several months has been recommended by several authors [ 21 , 22 ]. Pre- or intraoperative clopidogrel dosing followed by postoperative aspirin and clopidogrel administration is another anticoagulation scheme that has been used [ 22 ]. In our hospital, we have used a postoperative heparin infusion followed by DAPT (aspirin and clopidogrel or ticagrelor) for many years. Tirofiban infusion followed by DAPT is another potential regimen, which is purely antiplatelet in nature. Intervention description {11a} Standard procedure before randomization Before surgery, patients hospitalized for isolated CABG will receive the following standard antithrombotic regimen: aspirin will be administered up to the day of the procedure; clopidogrel, ticagrelor, and prasugrel will be stopped at least 5, 5, and 7 days before, respectively. CE-CABG will be performed by experienced surgeons via a median sternotomy utilizing either an on- or off-pump grafting technique. CE will be indicated for patients with (1) a long, diffusely diseased coronary artery segment or severely calcified coronary artery, or (2) severe long-segment in-stent restenosis with or without major side branch occlusion [ 23 ]. CE is occasionally unanticipated prior to CABG surgery and is typically the last option for obtaining full revascularization. Patients who do not undergo CE have a better prognosis than those who do when antithrombotic medication is used according to current recommendations; therefore, they do not require tirofiban therapy and will not be included. Graft flow and PI will be assessed at the end of the operation before protamine administration using the VeriQ Flowmeter System (MediStim ASA, Horten, Norway), which is based on an ultrasonic transit time flow meter. If the PI is more than 5, the graft must be anastomosed again. If it remains above 5, the surgery is deemed a failure and the patient will be removed from the study. After the procedure, protamine will be administered to reverse the heparin (100 IU heparin: 1 mg protamine) and the activated coagulation time will be corrected to within 5% of the pre-heparinization value. Chest tube blood loss will be evaluated 2 h after surgery; if < 150 mL, the patient will be randomized. Randomization and intervention Eligible patients will be randomized to either the heparin or tirofiban group in a 1:1 ratio utilizing an interactive online response system. Using the biased-coin minimization approach, a static randomization list will be created and applied taking three strata into consideration: research site, patient age group (< 70 and ≥ 70 years), and gender. Figure 1 depicts the enrollment and randomization procedures. Two hours after randomization, heparin (100 IU/kg) will be administered intravenously in 5 mL of 0.9% sodium chloride every 4 h in the heparin group.
In the tirofiban group, tirofiban (0.05 μg/kg/min) will be infused over 22 h using a micropump starting within 30 min of randomization. To maintain treatment blinding, a 5-mL volume of 0.9% sodium chloride will be administered to patients in the tirofiban group, and patients in the heparin group will receive an infusion of 0.9% sodium chloride via micropump. Only the nurses who prepare the fluids will know which are genuine medications. Once the heparin or tirofiban infusion stops at 24 h, DAPT will be initiated on the day after surgery and continued daily (ticagrelor 90 mg twice daily plus aspirin 100 mg daily in patients with platelet count ≥ 90 × 10⁹ cells/L; aspirin alone daily in patients with platelet count < 90 × 10⁹ cells/L). For patients who cannot be extubated within 24 h, heparin or tirofiban will be continued until the patient is extubated, at which point DAPT will start. The timeline of antithrombotic drugs is shown in Fig. 2 . To ensure patient safety, heparin and tirofiban doses will be adjusted according to routine blood and coagulation parameters obtained 4 and 10 h after treatment initiation. If the platelet count is < 90,000/mm³ on a repeat test, the tirofiban infusion will be discontinued. If the activated partial thromboplastin time increases to 2.5 to 3 times the normal upper limit, the heparin or tirofiban dose will be halved; if it increases to more than 3 times, the infusion will be discontinued. If the fibrinogen level decreases to < 1.0 g/L, heparin and tirofiban infusions will be discontinued and fresh frozen plasma will be administered. Criteria for discontinuing or modifying allocated interventions {11b} Participants may leave the trial at any time for any reason, as provided for by the Declaration of Helsinki. Although there are no particular withdrawal requirements, researchers can withdraw patients as necessary and note the cause. All participants will be followed for clinical outcomes unless permission is expressly revoked. Pregnant women will be excluded. Menstrual history and human chorionic gonadotropin level will be examined in non-menopausal women. Pregnancy after enrollment will not be regarded as an adverse event or significant adverse event (SAE) in and of itself and will not be considered grounds for study exclusion; however, the patient will be given the opportunity to withdraw if she wishes. During each study visit, patient safety will be addressed. Participants will receive information regarding whom to contact in case of a SAE. Those who experience a SAE will be counseled to withdraw. Strategies to improve adherence to interventions {11c} Three strategies to improve subject adherence to interventions will be implemented. First, a portion of the individuals’ health care and travel expenditures will be reimbursed. Second, physicians will offer free consultations. Finally, the researchers will communicate with the subjects at least once a week. Relevant concomitant care permitted or prohibited during the trial {11d} Standard secondary CAD prevention measures will be implemented. Oral beta-blockers will be used before and after surgery to maintain heart rate between 60 and 80 beats per minute. To maintain low-density lipoprotein concentration below 1.88 mmol/L, an oral statin or a combination of lipid-lowering medications will be prescribed. All episodes of care will be recorded in an electronic case report form (eCRF).
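The laboratory-based dose-adjustment rules above (platelet count, aPTT, fibrinogen at the 4-h and 10-h checks) reduce to a small decision function. This is a hedged Python sketch of that logic with our own function and argument names; it is not a clinical tool.

def adjust_infusion(platelets_per_mm3: int, aptt_ratio: float, fibrinogen_g_l: float) -> str:
    """Return the protocol action for an ongoing heparin/tirofiban infusion.
    aptt_ratio is the activated partial thromboplastin time divided by the
    normal upper limit; thresholds are those stated in the protocol text."""
    if fibrinogen_g_l < 1.0:
        return "stop infusion and give fresh frozen plasma"
    if platelets_per_mm3 < 90_000:
        return "repeat count; if confirmed, stop tirofiban"
    if aptt_ratio > 3.0:
        return "stop infusion"
    if aptt_ratio >= 2.5:
        return "halve the infusion dose"
    return "continue current dose"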
Provisions for post-trial care {30} Patients will be contacted every week during the first 30 days and once a year after the trial to monitor for adverse effects, provide encouragement and motivation, answer questions, and assist with any problems. Outcomes {12} Primary outcomes The primary safety outcome (phase I) is total chest tube drainage output over 24 h. The primary efficacy outcome (phase II) is the incidence of MACCEs in the first 30 days after surgery. Secondary outcomes The secondary outcomes of the phase I trial are as follows: (1) the incidence of universal definition of perioperative bleeding (UDPB) in adult cardiac surgery class 3 or higher in the first 24 h after surgery [ 24 ] and (2) the incidence of Thrombolysis in Myocardial Infarction (TIMI) major bleeding in the first 30 days [ 25 ]. Table 1 shows the various classes of the universal definition of perioperative bleeding in adult cardiac surgery. TIMI major bleeding is defined as intracranial hemorrhage, a ≥ 5 g/dL decrease in hemoglobin concentration, or a ≥ 15% absolute decrease in hematocrit. The secondary outcomes of the phase II trial are as follows: (1) the incidence rates of the four different events comprising MACCEs and (2) the incidence of postoperative acute kidney injury. Both CABG and tirofiban have been associated with a high risk of such injury. Definitions All-cause death is defined as death from any cause. POMI is defined as MI within 48 h of CABG. MI is defined according to the fourth universal definition of MI: cardiac troponin concentration > 10 times the 99th percentile upper reference limit or > 20% increase in cardiac troponin concentration, plus new pathologic Q waves on electrocardiography, graft occlusion on angiography, or new abnormal segmental ventricular wall motion on echocardiography. MI more than 48 h after surgery is defined as cardiac troponin concentration above the 99th percentile upper reference limit and at least one of the following: (1) new ischemia or new pathologic Q waves on electrocardiography, (2) imaging evidence of new loss of viable myocardium or new regional wall motion abnormality in a pattern consistent with an ischemic cause, or (3) identification of a coronary thrombus on angiography [ 26 ]. All patients with stroke symptoms will be examined to rule out conditions that mimic stroke (e.g., seizure, conversion or somatoform disorder, migraine headache, and hypoglycemia) and then undergo urgent computed tomography and/or magnetic resonance imaging to confirm the diagnosis. Acute ischemic stroke is defined as a new neurologic deficit lasting more than 24 h in conjunction with imaging findings of cerebral infarction. Early revascularization is defined as any percutaneous or surgical revascularization performed for graft failure or ACS resulting from a lesion inside or adjacent to a graft within 30 days. Acute postoperative kidney injury is defined according to the Society of Thoracic Surgeons as follows: (1) serum creatinine ≥ 2.0 mg/dL or an increase to twice the preoperative baseline concentration or (2) a new requirement for dialysis [ 27 ]. Participant timeline {13} The participant timeline is shown in Table 2 . Sample size {14} We computed a sample size of 266 patients (133 per group) for the phase I trial, assuming a one-sided significance level of 2.5%, test power of 90%, chest tube drainage volume standard deviation of 500 mL, and non-inferiority limit of 200 mL.
Thus, non-inferiority will be demonstrated if the upper boundary of the 97.5% confidence interval for the mean difference is lower than 200 mL. The non-inferiority margin of 200 mL was arrived at by consensus among the investigators based on their clinical judgment and the data available at the time of trial design. Based on data from the Anzhen Hospital database from 2020 to 2021, we estimate a 30-day MACCE incidence of approximately 40% in the control group (heparin). Using a 25% relative risk reduction (from 40 to 30%) as the anticipated intervention effect of tirofiban, an acceptable alpha of 5%, and an acceptable beta of 20%, a total of 712 participants (356 per group) are required. To allow for loss to follow-up, 730 participants will be recruited; therefore, after recruitment of the phase I trial participants, an additional 464 participants (232 per group) are required to satisfy the requirements for phase II. Recruitment {15} The average number of annual CE-CABG procedures over the past three years at Anzhen Hospital was approximately 600. Combined with the other three centers, achieving the required number of participants is feasible. Assignment of interventions: allocation Sequence generation {16a} All eligible subjects will be allocated by clinical staff to either the heparin group or the tirofiban group in a 1:1 ratio at 2 h after CE-CABG using an interactive web response system (IWRS). A static randomization list will be generated and implemented within the system, accounting for three strata: study site, patient age group (< 70, ≥ 70 years), and gender group (male, female), using the biased-coin minimization method. Each center will have a designated user of the interactive system. This user will generate a random sequence for patients who meet the criteria for enrollment and communicate the allocation to the research nurse. Concealment mechanism {16b} The random sequence will be immediately sent to a designated nurse who prepares the fluids and medications. For the tirofiban group, a 50-mL volume of tirofiban fluid and a 5-mL volume of 0.9% sodium chloride (labeled as heparin) will be prepared. For the heparin group, a heparin fluid and a 50-mL volume of 0.9% sodium chloride (labeled as tirofiban) will be prepared. Implementation {16c} Doctors and nurses working at the bedside will not know which fluid is the genuine medication. These fluids will be infused according to their label names. Thus, all subjects, site staff, sponsor staff, assessors, and data analysts will be blinded to randomization outcomes. Assignment of interventions: blinding Who will be blinded {17a} All subjects, site staff, sponsor staff (with exceptions as indicated below), assessors, and data analysts will be blinded to randomization outcomes until the database is unlocked. Exceptions will include (1) nurses who prepare the fluids and medications (heparin, tirofiban, or just 0.9% sodium chloride), (2) computer programmers involved in randomization and drug management processes, and (3) the biostatistician creating reports for the Data and Safety Monitoring Board (DSMB). Procedure for unblinding if needed {17b} The DSMB will periodically examine all safety and outcomes data. Appropriate information regarding adverse occurrences will be systematically gathered and presented to regulatory authorities. Emergency unblinding will be performed only when necessary for subject safety. Only those individuals who are required to know the treatment allocation will be given this information in such an event.
If medically appropriate, all subjects will resume the study treatment after recovery and continue to do so until the end of the study. Data collection and management Plans for assessment and collection of outcomes {18a} Data will be collected using a study-specific eCRF (Digital Health China Technologies Co., LTD, Beijing, China) and also recorded on paper case report forms. Throughout the trial, data management personnel will validate the data. Data fields will be checked, and any missing data or inconsistencies will be corrected before downloading into the study database. All identifying information will be hidden to protect patient confidentiality. All study personnel will have 24-h access to the study coordinating center. A data monitoring committee will monitor data accuracy and completeness, evaluate adverse events, and make the final decision to terminate the trial. When all data management and statistical data validation procedures have been completed, the database will be unlocked to perform the final analysis. Plans to promote participant retention and complete follow-up {18b} There are no further plans for intervention engagement tactics beyond those that have already been described. Data management {19} The DSMB will consist of a chair, co-chair, and members with recognized expertise in clinical trials, cardiovascular disease, and biostatistics who are not involved in the routine conduct of the THACE-CABG trial. The DSMB will be responsible for two main tasks. First, it will regularly assess the safety and effectiveness of the drugs under investigation by reviewing periodic updates provided during the course of the study. Second, it will provide recommendations to the trial’s chair and vice-chair regarding the appropriate course of action, which may involve continuing, modifying, or terminating the study. The DSMB’s primary responsibility is study oversight. Subject risks and benefits will be determined after an unblinded evaluation of aggregate data. The DSMB charter and meeting schedule will be finalized during the first meeting. A biostatistician, who will not be blinded for report preparation, will attend meetings to provide support. Meeting attendees will be able to hear updates and ask questions of the trial’s chair and/or principal investigators during the open section of the meeting. Confidentiality {27} Only the lead investigator and necessary staff will have access to the collected data. To protect patient confidentiality, each subject’s laboratory samples, completed forms, reports, and other data will be identified by a unique participant ID number. Plans for collection, laboratory evaluation, and storage of biological specimens for genetic or molecular analysis in this trial/future use {33} Not applicable: no biological specimens will be collected. Statistical methods Statistical methods for primary and secondary outcomes {20a} Patient characteristics will be compared using the two-sample t-test or Wilcoxon rank sum test for continuous variables and the chi-square test or Fisher’s exact test for categorical variables. Variables that differ significantly between groups will be used as covariates in the efficacy analysis. Both intention-to-treat and per-protocol analyses will be conducted. The primary safety outcome will be compared between groups using a one-sided non-inferiority test. The primary efficacy outcome will be evaluated using the chi-square test or Fisher’s exact test.
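The sample-size assumptions in {14} and the one-sided non-inferiority test for the primary safety outcome can both be reproduced with standard normal- and t-approximation formulas. The Python sketch below is our reconstruction under the stated assumptions, not the investigators' code; rounding explains 132 vs. the protocol's 133 per group in phase I, and the Welch-type interval is our choice since the protocol does not specify the variance assumption.

import math
import numpy as np
from scipy import stats

def n_noninferiority_means(sd=500.0, margin=200.0, alpha=0.025, power=0.90):
    """Per-group n for non-inferiority of means (one-sided alpha)."""
    z = stats.norm.ppf(1 - alpha) + stats.norm.ppf(power)
    return math.ceil(2 * (z * sd / margin) ** 2)        # ~132; protocol uses 133

def n_two_proportions(p1=0.40, p2=0.30, alpha=0.05, power=0.80):
    """Per-group n for a two-sided two-proportion comparison (pooled variance)."""
    pb = (p1 + p2) / 2
    num = (stats.norm.ppf(1 - alpha / 2) * math.sqrt(2 * pb * (1 - pb))
           + stats.norm.ppf(power) * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
    return math.ceil((num / (p1 - p2)) ** 2)            # 356 per group, 712 in total

def noninferiority_upper_bound(tirofiban_ml, heparin_ml, margin=200.0):
    """Mean 24-h drainage difference (tirofiban minus heparin) and its upper
    97.5% confidence bound; non-inferiority is declared if the bound < margin."""
    t, h = np.asarray(tirofiban_ml, float), np.asarray(heparin_ml, float)
    diff = t.mean() - h.mean()
    v_t, v_h = t.var(ddof=1) / t.size, h.var(ddof=1) / h.size
    se = math.sqrt(v_t + v_h)
    df = (v_t + v_h) ** 2 / (v_t ** 2 / (t.size - 1) + v_h ** 2 / (h.size - 1))
    upper = diff + stats.t.ppf(0.975, df) * se
    return diff, upper, bool(upper < margin)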
Interim analyses {21b} Interim assessments and safety and efficacy monitoring will be conducted by the DSMB, which will review unblinded event rates. An outside statistician will perform interim data analyses independently for the DSMB. A formal interim analysis is planned after the expected number of accumulated primary safety outcome events accrues (phase I). If the interim analyses show clear and consistent safety in both treatment arms, the DSMB may recommend commencement of the phase II trial. If the conditional probability of failing to demonstrate non-inferiority appears unacceptably high, the DSMB may consider postponing or stopping the phase II trial. Methods for additional analyses (e.g., subgroup analyses) {20b} A subgroup analysis of primary outcomes will be performed to compare CE in the left anterior descending coronary artery with CE in other coronary arteries. Subgroup analyses according to age, gender, diabetes, left ventricular ejection fraction, type of CE (open or closed), and graft type (venous or arterial) will also be performed. Patient-reported outcomes and an analysis of cost-effectiveness will be reported at the end of the study. Projected changes in economic costs and health outcomes from broad use of tirofiban will be quantified. Market prices or shadow prices will be used to value costs. The costs of delivering the intervention will be determined by counting and valuing the resources used. Methods in analysis to handle protocol non-adherence and any statistical methods to handle missing data {20c} The analyses will not include participants who were randomly assigned but later determined to be ineligible. Additional types of non-compliance that can be identified from the research data, such as visits outside of predetermined times, will be evaluated and classified as significant or minor. Similarly, protocol violations will be rated as significant or minor. Sensitivity analyses will be conducted after participants with significant protocol violations are excluded. Missing data will not be imputed for these analyses. Plans to give access to the full protocol, participant-level data, and statistical code {31c} The corresponding author will provide trial datasets and algorithms upon reasonable request. Oversight and monitoring Composition of the coordinating center and trial steering committee {5d} The trial steering committee comprises: Prof. Yang Yu (Chair) Prof. Ran Dong (Vice-chair) Prof. Dong Xu (Chief Investigator) Prof. Zhen Han (Chief Investigator) Prof. Bao-Dong Xie (Chief Investigator) Prof. Rong Han (Study Statistician) Prof. Xin Du (Study Project Manager) Composition of the data monitoring committee, its role and reporting structure {21a} The Heart Health Research Center will serve as the data monitoring committee and receive all trial data. Data will be securely transmitted to the center for data entry and verification in accordance with its normal operating procedures. Adverse event reporting and harms {22} Complete information regarding all SAEs (nature of the event, start and end dates, severity, relationship to the trial and/or trial procedures, and outcome) shall be noted in the medical record and eCRF within 24 h of occurrence. Such incidents will be followed up until satisfactorily resolved and stabilized. With reference to the study protocol and Reference Safety Information, the site principal investigator will utilize medical judgment to assess seriousness, causation, severity, and expectedness.
The site team and principal investigator will determine relatedness and expectedness. According to the study protocol, the sponsor will validate data collection and SAEs. The sponsor will report safety information to the chief investigator or a delegate for ongoing risk/benefit assessment and collaborate with the chief investigator to submit an annual safety report to the research ethics committee until the end of the study, when an end-of-study form will be submitted. The trial steering committee will assess safety in line with a predefined charter, reviewing recruitment and the overall status of the trial on a regular basis; it will also interact with the sponsor regarding safety problems. A relevant unexpected SAE will be reported to the sponsor, who will then notify the ethics committee. The study data center will offer an eCRF for central data collection of adverse events and SAEs, which will be submitted to the trial steering committee every 6 months until the study is completed. Frequency and plans for auditing trial conduct {23} An audit of all entries into the eCRF and the number of participants will be conducted annually. Plans for communicating important protocol amendments to relevant parties (e.g., trial participants, ethical committees) {25} Any protocol amendments will be authorized by the local ethics committee before being implemented and will be updated in the ChiCTR.org registration. All investigators and study participants will be informed. Deviations from the protocol will be thoroughly documented using a report form. Dissemination plans {31a} The findings of the study will be communicated largely through patient and public forums, conference presentations, and scientific articles. The final manuscript will be reviewed and approved by all authors.
Abbreviations CE: Coronary endarterectomy; CABG: Coronary artery bypass grafting; CE-CABG: Coronary endarterectomy combined with coronary artery bypass grafting; DAPT: Dual antiplatelet therapy; MACCE: Major cardiovascular and cerebrovascular event; MI: Myocardial infarction; AIS: Acute ischemic stroke; CAD: Coronary artery disease; POMI: Postoperative myocardial infarction; PCI: Percutaneous coronary intervention; PI: Pulsatility index; ACT: Activated coagulation time; TTFM: Ultrasonic transit time flow meter; UDPB: Universal definition for perioperative bleeding; TIMI: Thrombolysis in Myocardial Infarction; AE: Adverse event; SAE: Significant adverse event; eCRF: Electronic case report form; AKI: Acute kidney injury; Echo: Echocardiography; IWRS: Interactive web response system; DMC: Data Monitoring Committee; HHRC: The Heart Health Research Centre; RUSAE: Relevant Unexpected Serious Adverse Event; NSTE-ACS: Non-ST elevation acute coronary syndrome; SIHD: Stable ischemic heart disease; NYHA: New York Heart Association; HBsAg: Hepatitis B surface antigen; HBeAg: Hepatitis B e antigen; HCV: Hepatitis C virus; TCI: Transient cerebral ischemia; IABP: Intra-aortic balloon pump; ECMO: Extracorporeal membrane oxygenation; ECG: Electrocardiogram; PC: Platelet concentrates; PCC: Prothrombin complex concentrates; rFVIIa: Recombinant activated factor VII. Acknowledgements We gratefully acknowledge the research staff who are supporting this project and the participants. We also thank Shan Yan and Hou-Jian Zhao (Digital Health China Technologies Co., LTD) for providing technical support, as well as the members of the Heart Health Research Center. We thank Liwen Bianji (Edanz) ( https://www.liwenbianji.cn ) for editing the English text of a draft of this manuscript. Authors’ contributions {31b} Liang Chen, Ming-Xin Gao, and other doctors authorized by the chair and vice-chair will be eligible for authorship of the final report. Professional writers will not be engaged, although the articles may require language polishing. Funding {4} The THACE-CABG trial is supported by grants from the Capital Health Research and Development of Special Fund (No. 2020–1-2061), the Beijing Hospitals Authority’s Ascent Plan (Code: DFL20220605), and the Beijing Municipal Natural Science Foundation (No. 7214222). Availability of data and materials {29} Trial data will be available upon reasonable request. No researcher will have direct access to export patient data from the system; researchers will be able to obtain data only with the agreement of the head investigator. Others involved will be able to view data from other centers only with the agreement of the project managers. Declarations Ethics approval and consent to participate {24} The THACE-CABG research project was approved by the institutional review boards of Beijing Anzhen Hospital, Beijing Tiantan Hospital, Peking University Shenzhen Hospital, and the First Affiliated Hospital of Harbin Medical University in August 2022 (Ethics Number: KS2022051). Informed consent will be obtained from all study participants. Consent for publication {32} Consent for publication forms are available from the corresponding author on request. No identifying images or other personal or clinical details of participants are presented here or will be presented in reports of the trial results. Informed consent materials are attached as supplementary materials. Competing interests {28} The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
Trials. 2024 Jan 15; 25:52
oa_package/1e/10/PMC10789027.tar.gz
PMC10789028
38225575
Background Migraine aura is a neurological condition that precedes or accompanies the onset of headache in one-third of migraine patients. Aura symptoms include transient sensory (mainly visual) or motor disturbances. Several lines of evidence suggest that spreading depolarization (SD), a self-propagating wave of massive neuroglial depolarization, underlies the aura symptoms and may be a potential activator of downstream pain pathways in migraine with aura patients [ 1 – 3 ]. Intracranial recordings from experimental animals and patients with acute brain injury have revealed two reliable electrographic markers of SD: (1) a large-amplitude negative shift of direct current (dc) potential produced by massive, near-complete cellular depolarization in the affected tissue and (2) a transient depression of ongoing electrical activity [ 4 ] resulting from reversible disruption of neuronal signaling [ 5 ]. The temporary silencing of cortical activity sweeping across the cortex is thought to underlie the spreading negative symptoms of migraine aura [ 2 , 3 , 6 ]. However, surface electroencephalographic (EEG) recordings in migraine patients have failed to demonstrate consistent electrographic abnormalities during migraine attacks [ 7 , 8 ]. The routine EEG technique is unable to detect dc potential shifts, the gold-standard hallmark of cortical SD. The failure to reveal suppression of the EEG signal during migraine aura has been attributed to the insufficient sensitivity of standard clinical EEG for detecting spatially/temporally restricted depression of cortical activity during SD [ 2 ]. Evidence of electrocorticographic (ECoG) depression during SD has mainly been obtained in anesthetized animals or sedated patients with traumatic and ischemic brain injury [ 4 , 5 , 9 – 11 ]. EEG recordings during migraine attacks are usually performed in conscious humans, but experimental studies of SD in awake animals are scarce. Commonly used anesthetics have been shown to affect the susceptibility of neuronal tissue to SD [ 12 – 14 ], but their effect on SD-induced ECoG depression has never been studied in detail. The role of anesthesia in the suppressive effects of SD is usually neglected, although general anesthesia is known to change the functional state of cortical tissue and brain network activity, which can potentially modify SD effects on spontaneous cortical activity. Some experimental studies did not find significant differences in the ECoG effects of SD between awake and anesthetized rodents [ 15 , 16 ], while others mentioned incomplete (partial) suppression of spontaneous cortical oscillations during SD in awake animals [ 17 , 18 ]. Migraine aura/SD occurs in the undamaged cortex of migraine patients, while most experimental approaches to SD induction involve direct mechanical or chemical stimulation of the cortex. To exclude potential confounding effects of direct cortical stimulation on cortical activity [ 19 ], we initiated SD extracortically, by a pinprick of the amygdala, which is connected with the cortex by a gray matter bridge allowing slow non-synaptic propagation of SD [ 18 , 20 , 21 ]. SD induced in the amygdala cannot reach the cortex via a direct pathway because SD is unable to cross the thick layer of myelinated fibers separating the cortex from subcortical structures. Therefore, SD propagates from the amygdala to the cortex via long (12–15 mm) devious paths, subsequently invading the striatum, frontal pole, and temporal cortex [ 22 ].
The amygdala, which plays an important role in pain processing, is attracting growing attention due to its potential role in migraine pathogenesis [ 23 – 26 ]. SDs involving subcortical structures (basal ganglia, thalamus, amygdala) have been proposed as a plausible mechanism for some aura symptoms in migraine with aura patients [ 3 , 6 , 14 , 27 ]. In awake rats, cortical SD has been shown to occur in association with thalamic SD [ 14 ]. Mice with familial hemiplegic migraine (FHM) mutations exhibit enhanced susceptibility to subcortical SD and facilitated cortico-subcortical propagation of SD [ 27 ]. We hypothesized that the influence of SD on cortical activity depends on the vigilance state and that some SD-induced changes may be revealed only in the conscious brain. To test this hypothesis, we studied the spectral and spatiotemporal features of ECoG alterations induced by a single unilateral SD in awake, freely behaving rats and after introduction of anesthesia. Spontaneous activity of the occipital and frontal cortices was analyzed. Transient dysfunction of the occipital cortex is suggested to underlie visual aura, the most common aura type in migraine patients. Changes in the activity of the frontal cortex may be involved in the generation of motor and language impairments. Given the extensive connections of the frontal cortex with arousal- and pain-modulating subcortical nuclei [ 28 , 29 ], the activity of this cortical region may be important for pain perception and consciousness. Our findings show that in the conscious brain, SD elicits a depression of cortical activity with characteristics that are partly absent in anesthetized animals and that may underlie several clinical symptoms of migraine aura.
Materials and methods Subjects Adult male Wistar rats (350–450 g, Scientific Center for Biomedical Technologies of the Federal Medical and Biological Agency, Russia) were housed in a temperature-controlled vivarium (22 °C ± 2 °C, a 12-h light/dark cycle, lights on at 08.00 h) with food and water ad libitum. All experimental procedures were conducted in accordance with the ARRIVE guidelines and Directive 2010/63/EU for animal experiments. The study protocol was approved by the Ethics Committee of the IHNA RAS (protocol N1 from 01.02.2022). Every effort was made to minimize animal suffering and to ensure reliability of the results. Stereotaxic surgery Under isoflurane anesthesia, rats were bilaterally implanted with electrodes for SD/ECoG recording and guide cannulas for SD induction (Fig. 1 ). Recording electrodes (insulated silver or nichrome wire, diameter 0.25–0.30 mm) were positioned in the frontal (AP: + 1.2, ML: ± 2.3 mm, DV: − 1.8 mm) and occipital (AP: − 5.88, ML: ± 3.5 mm, DV: − 1.5 mm) cortices [ 30 ]. The reference electrode was placed over the cerebellum. Stainless steel guide cannulas (23 gauge) were aimed at the basolateral nuclei of the amygdala (AP: − 2.76, ML: − 4.8 mm, DV: − 7.5 mm) of the left and right hemispheres. The guide cannulas, recording electrodes, and pin connector were fixed to the skull with acrylic dental plastic. A 30-gauge stylus of the same length as the guide cannula was inserted into it to prevent clogging. During the three to four days before the start of experiments, all animals were pre-handled and habituated to stylus removal. Initiation of SD and recording of cortical activity Experiments started two weeks after the surgery. In each rat, three tests with a one-week interval were performed: the first and second tests under wakefulness and the third test after introduction of urethane anesthesia (1.5 g/kg, i.p.). In each test, rats were individually placed in a shielded chamber, the implanted connector was attached to the recording cable, and spontaneous cortical activity was recorded before (baseline) and after bilateral pinprick of the amygdala as described previously [ 18 , 21 ]. Briefly, the needle was inserted into the guide cannula and extended 1.0 mm from its tip, thus producing a small, standardized damage of the neuronal tissue (Supplementary Fig. S 1 ). As reported previously [ 21 ], the local injury of the amygdala triggered SD with about 60% probability. In the present study, we analyzed recordings obtained in rats with histologically verified damage to the basolateral amygdala (BLA), which exhibited maximal susceptibility to SD [ 20 , 21 ]. At a week interval, the BLA pinprick induced SD with similar probability in test 1 (59%, 19/32) and test 2 (72%, 23/32) in awake rats ( p = 0.43, Fisher exact test) and with reduced probability (19%, 6/32 in test 3) in anesthetized rats ( p < 0.001). Due to the probabilistic nature of SD occurrence after amygdala damage, simultaneous bilateral microinjury of the amygdala produced three outcomes: a single bilateral SD, a single unilateral SD, or no SD. Most rats (14/16) exhibited variable responses in the three repeated tests (bilateral/unilateral/no SD). In the present study, only artifact-free LFP recordings with induction of a unilateral SD were analyzed. Tests with initiation of a bilateral SD and with lesions localized outside the BLA were excluded from analysis. Recordings from tests in which the bilateral BLA pinprick failed to trigger SD were used as sham controls.
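The SD-induction probabilities reported above can be checked with the Fisher exact test. A minimal sketch in Python with scipy follows; the counts are taken from the text, but the paper does not specify which awake test was contrasted with the anesthetized test, so test 2 vs. test 3 is shown only as an illustration.

from scipy.stats import fisher_exact

# 2x2 tables, rows = [SD induced, no SD] out of 32 BLA pinpricks per test
test1_vs_test2 = [[19, 13], [23, 9]]   # awake vs. awake
test2_vs_test3 = [[23, 9], [6, 26]]    # awake vs. urethane anesthesia

print(fisher_exact(test1_vs_test2))    # p ~ 0.4: similar susceptibility when awake
print(fisher_exact(test2_vs_test3))    # p << 0.05: reduced susceptibility under anesthesia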
Full-band cortical activity (0–100 Hz, 1 kHz sampling rate) was recorded using a four-channel, high-input-impedance (1 GΩ) dc amplifier and A/D converter (E14-440, L-Card, Russia) with simultaneous video monitoring of behavior. Cortical activity was recorded for 15 min before (baseline activity) and 15 min after the amygdala stimulation. In off-line analysis of cortical activity, recordings of local field potential (LFP) were filtered with bandpass filters of 0–50 Hz (direct current, dc) and 1–50 Hz (ECoG). SD was identified by the occurrence of a high-amplitude dc potential shift, the most reliable electrophysiological manifestation of SD. Data processing For spectral analysis, artifact-free 600-s epochs of LFP recordings of baseline activity and after induction of a single unilateral SD ( n = 13) or no SD (sham stimulation, n = 6) were used. The segments were filtered with highpass (1 Hz cutoff) and bandstop (48 Hz lowcut and 52 Hz highcut) Butterworth digital filters using the scipy package (all calculations here and below were performed in the Python language). The 600-s epochs were then divided into 10-s intervals, and the mean power for each interval in each frequency band was evaluated without overlapping using the fft function from the numpy package. Spectral power was computed using a Fast Fourier Transform (FFT) routine for five frequency bands: delta (1–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (12–25 Hz), and gamma (25–50 Hz). The amplitude of ECoG depression during SD was measured as a percentage of the average power during the depolarization phase of SD relative to the baseline level. Data processing was performed by T.M., who was blinded to SD presence during the analyzed period. Spectrograms were obtained using the specgram function from the matplotlib package with 2048 data points (approximately 2 s) used in each block for the FFT and an overlap of 90%. Histology For histological verification of the amygdala injury and localization of the recording electrodes, animals were euthanized and perfused intracardially with 0.9% saline at the end of the experiments. The brains were removed, stored in 10% formalin for 48 h, sectioned into coronal 50-μm slices, and stained with 0.1% cresyl violet. Statistical analysis Statistical analysis was performed using Statistica software 12.0 (StatSoft). Significant differences in spectral power dynamics between baseline and post-SD periods were assessed using ANOVA for repeated measures with SD as a between-subject factor and time (10-s intervals) as a within-subject factor. One-way ANOVA was used for post-hoc comparison of spectral power dynamics during the baseline and post-SD periods. Inter-regional differences in ECoG power magnitudes were estimated with the Wilcoxon signed-rank test. The ECoG power magnitudes in awake and anesthetized animals were compared using the Mann–Whitney test. The Fisher exact test was used to compare behavioral changes in rats with SD and sham-treated animals. Data are expressed as mean ± S.E.M. Significance was set at p < 0.05.
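The spectral pipeline described above (1 Hz highpass, 48–52 Hz bandstop, non-overlapping 10-s windows, FFT power in five bands, depression expressed as a percentage of baseline) can be sketched as follows in the authors' stated toolchain (numpy/scipy). The filter order and helper names are our assumptions and are not taken from the paper.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # Hz, sampling rate used in the study
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 25), "gamma": (25, 50)}

def preprocess(lfp):
    """1 Hz highpass plus 48-52 Hz bandstop Butterworth filtering (order 4 assumed)."""
    lfp = np.asarray(lfp, float)
    b, a = butter(4, 1.0, btype="highpass", fs=FS)
    x = filtfilt(b, a, lfp)
    b, a = butter(4, (48.0, 52.0), btype="bandstop", fs=FS)
    return filtfilt(b, a, x)

def band_power(lfp, win_s=10):
    """Mean FFT power per band in consecutive non-overlapping windows."""
    lfp = np.asarray(lfp, float)
    n = int(win_s * FS)
    epochs = lfp[: len(lfp) // n * n].reshape(-1, n)
    freqs = np.fft.rfftfreq(n, d=1.0 / FS)
    psd = np.abs(np.fft.rfft(epochs, axis=1)) ** 2
    return {band: psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
            for band, (lo, hi) in BANDS.items()}

def depression_percent(power_during_sd, power_baseline):
    """ECoG depression as average power during the dc shift relative to baseline."""
    return 100.0 * np.mean(power_during_sd) / np.mean(power_baseline)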
Results Propagation of SD to the cortex and SD-induced ECoG depression in awake conditions and after introduction of anesthesia Local microinjury of the amygdala triggered a single SD wave that propagated non-synaptically to the frontal and occipital regions of the cortex via the gray matter bridges connecting the amygdala and cortex (Fig. 1 ) [ 18 , 20 , 22 ]. SD induced in the amygdala can reach the cortical regions via the piriform cortex [ 20 ] (Fig. 1 A) and through the striatum and frontal pole (Fig. 1 B). Amygdalar SD always spreads to the striatum [ 18 ] and dies out at the boundaries with the corpus callosum. However, as shown previously [ 22 ], SD can leave the striatum and penetrate the frontal cortex via a rostral pathway. By sequentially invading the adjacent temporal lobe and striatum, SD reached the frontal and occipital cortices about three minutes post-injury, irrespective of the vigilance state (Table 1 ). During the first two minutes, i.e., before arrival at the frontal and occipital cortices, SD traveled over deep brain regions, including the striatum. When SD invaded the striatum (40–100 s after the amygdala pinprick), rats exhibited several episodes of forced circling, a reliable behavioral marker of striatal SD [ 18 ]. Introduction of anesthesia slightly increased the latencies of SD appearance in the cortex and the durations of SD-associated dc potential shifts (Table 1 ). As mentioned above, here we analyzed the effects of a unilateral SD induced by a bilateral BLA pinprick. Tests in which the damage failed to trigger SD were used as sham controls. Electrographic manifestations of a single unilateral SD arriving at the occipital cortex, recorded in the same rat under awake and anesthetized conditions, are shown in Figs. 2 and 3 , respectively (the traces were obtained immediately after the BLA microinjury). In awake conditions, SD appeared in the cortex 150 s after its initiation in the amygdala (Fig. 2 A). Visual inspection of the ECoG recording and spectrogram showed that SD transiently reduced the amplitude of ipsilateral cortical activity without changes in the contralateral cortex (Fig. 2 B, C). Under urethane anesthesia, SD appeared in the cortex slightly later, at 220 s post-injury (Fig. 3 A), and produced pronounced ipsilateral ECoG depression (Fig. 3 B, C) that corresponded well to the pattern previously described in anesthetized animals [ 4 , 5 , 10 ]. Effects of SD on spectral and spatiotemporal characteristics of cortical oscillations in awake conditions and after induction of anesthesia Arrival of SD at the cortex produced a pronounced drop in cortical activity power (Figs. 4 and 5 ). In the ipsilateral cortex, significant effects of SD on the dynamics of oscillation power were found across all frequency bands in both awake and anesthetized conditions ( p < 0.001, Table S 1 ). In the unaffected contralateral cortex, SD significantly affected gamma oscillation power only in awake rats ( p < 0.001, Fig. 4 , Table S 1 ). Sham stimulation, i.e., identical amygdala damage without SD induction, did not change cortical activity (Supplementary Fig. S 2 ), indicating that the ECoG depression was produced by the injury-induced SD and not by the injury per se. As seen in Figs. 4 and 5 , the power drop peaked during the depolarization phase of SD (marked by gray vertical areas). Under awake conditions, slow delta (1–4 Hz) oscillations did not reduce their power during SD and even overshot afterwards in the occipital cortex (Fig. 4 ).
In contrast, fast gamma (25–50 Hz) activity showed pronounced depression, especially in the frontal cortex, where it started long before SD arrival (Fig. 4 ). After introduction of anesthesia, the early pre-SD depression disappeared and the decline in gamma power started near the onset of the dc potential shift, while depression of oscillations in other frequency bands began to outlast the termination of the dc shift, especially in the occipital cortex (Fig. 5 ). The degrees of SD-induced depression in the different frequency bands, expressed as percentages of the average power during the dc potential shift relative to the respective baseline levels within each band, are compared in Figs. 6 and 7 . Under awake conditions (Fig. 6 ), the power of delta oscillations did not change significantly; theta (4–8 Hz), alpha (8–12 Hz), and beta (12–25 Hz) oscillation power showed a two-fold reduction in the ipsilateral cortex ( p < 0.05, Wilcoxon test); and high-frequency gamma (25–50 Hz) activity exhibited the maximal decrease, involving both the ipsi- and contralateral cortices (to about 40% and 60% of the baseline level, respectively). After introduction of anesthesia, SD elicited wideband depression of ipsilateral cortical activity ( p < 0.05, Wilcoxon test, Fig. 7 ). In anesthetized animals, the strongest (four- to fivefold) reduction was found in the slow delta range (to a mean of 20% of baseline); faster cortical rhythms were less depressed, and minimal changes were observed in the high-frequency gamma band (to a mean of 50% of baseline). Comparison of awake and anesthetized rats showed that under wakefulness SD produced significantly weaker suppression of delta, theta, alpha, and beta oscillations but stronger depression of fast gamma activity than in anesthetized rats ( p < 0.05, Fig. 6 ). Although the maximal drop in cortical activity power was time-locked to the depolarization phase of SD, the ECoG depression usually lasted two- to threefold longer (Table 2 , Figs. 4 and 5 ). In wakefulness, the longest suppression was observed in the high-frequency gamma (25–50 Hz) range: up to 250 s, i.e., four- to sixfold longer than the dc shift. Anesthesia shortened the gamma depression but prolonged the silencing of cortical oscillations in other frequency bands due to slow post-SD recovery, especially in the occipital cortex. Thus, slow and fast cortical oscillations exhibited a pronounced difference in their vulnerability to the suppressive effect of SD that strongly depended on the vigilance state. Slow delta oscillations were not depressed by SD and were even augmented afterwards in awake rats but were maximally reduced during SD in anesthetized animals. In contrast, fast gamma activity showed the strongest and longest power decline during SD in awake animals but was minimally affected by SD under anesthetized conditions. Remote suppressive effects of SD on fast cortical oscillations in the awake state As mentioned above, unilateral SD induced in awake rats exerted a significant effect on contralateral gamma oscillations (Table S 1 ), eliciting bilateral gamma depression (Figs. 4 and 6 ). In both the frontal and occipital regions of the unaffected contralateral cortex, the power of gamma oscillations showed a significant, though milder, decline similar to that observed in the cortex ipsilateral to SD: a brief drop in the occipital cortex and a prolonged, building-up suppression in the frontal cortex.
Gamma power in the frontal cortex started to decline immediately after induction of SD in the distant subcortical region, progressively dropped until SD arrival at the recording cortical site, peaked during the dc shift, and recovered with its termination. The alpha and beta bands also showed early-onset depression, but it was shorter than that of gamma and occurred only in the ipsilateral frontal cortex. During the early period of depressed fast cortical activity (the first two minutes post-injury), SD propagated over the remote subcortical sites (amygdala, striatum) and distant cortical (prefrontal and temporal) regions, as shown in Fig. 1 and described previously [ 18 , 20 – 22 ]. To clarify whether the early frontal gamma depression was produced by the mechanical stimulation per se or by the SD triggered by the stimulation, we compared changes in frontal gamma power following an amygdala pinprick that induced a single unilateral SD and sham stimulation that failed to initiate SD (Fig. 8 ). As can be seen, sham stimulation did not change the power of gamma oscillations, while identical stimulation triggering SD elicited strong gamma depression. Thus, the suppression of high-frequency gamma oscillations preceding SD arrival at the cortex of awake rats is likely to reflect remote effects of SD traveling over the distant subcortical and cortical regions. Behavior during propagation of SD from the amygdala to the cortex To identify behavioral patterns associated with SD, we compared video-monitoring data obtained after BLA pinpricks that induced no SD (sham-treated animals) or a unilateral SD. During the 15-min observation period, rats of both the sham and SD groups exhibited exploratory behavior with sniffing and rearing, and periods of grooming, quiet standing, and lying. All rats with SD (7/7) showed forced circling when SD invaded the striatum (40–100 s) and recurrent episodes of freezing that started to appear immediately after the BLA pinprick and repeatedly occurred during the subsequent SD propagation from the amygdala to the cortex (4–5 min). Most animals with SD (6/7) also exhibited repeated bouts of purposeless masticatory jaw movements during SD traveling (1–4 min after pinprick) and head shakes/wet dog shakes following grooming behavior during the late post-SD period (5–15 min). In the sham group ( n = 6), rats never expressed circling behavior; two animals showed several episodes of freezing and masticatory movements, and one rat exhibited wet dog shakes after late grooming. Compared with sham-treated animals, rats with SD more frequently expressed circling behavior ( p < 0.005), episodes of freezing ( p < 0.05), mastication ( p < 0.05), and head/wet dog shakes ( p < 0.05, Fisher test).
Discussion Suppressive effect of SD on spontaneous cortical activity of awake and anesthetized animals The present study shows that the vulnerability of slow and fast cortical oscillations to the suppressive effects of SD differs profoundly, especially in the awake state. In freely behaving rats, SD is accompanied by strong cessation of fast cortical oscillations, particularly pronounced in the gamma (25–50 Hz) range, while the slowest (delta, 1–4 Hz) activity is not depressed during SD and even increases afterwards. This pattern of partial ECoG depression may explain the incomplete cessation of cortical activity during SD previously reported in awake rats and rabbits [ 17 , 18 ] and the failure of most clinical studies to detect clear EEG depression during migraine aura in conscious patients [ 7 , 8 ]. In line with our experimental findings, MEG/EEG studies of migraine patients have reported (1) ipsilateral suppression of high-frequency (alpha and gamma) cortical activity during visual aura, which was suggested to contribute to inhibition of visual function and phosphene generation [ 31 ], and (2) increased delta power in the occipital cortex (posterior slow waves) during typical migraine aura [ 32 ] and FHM aura [ 33 ]. Introduction of anesthesia, despite its rather mild effect on the parameters of the dc shifts associated with SD, significantly modifies the pattern of SD-induced ECoG depression, weakening the suppression of gamma oscillations and intensifying the depression of cortical activity in other frequency bands. Under anesthesia, SD is accompanied by wideband suppression of cortical activity with the strongest power drop in slow delta activity and milder reduction of faster oscillations. This result is in line with clinical data obtained in sedated patients with traumatic brain injury, in whom EEG/ECoG suppression during SD was mainly determined by suppression of slow cortical activity in the delta frequency band (reduction to 47%), while high-frequency oscillations were less depressed [ 9 ]. The intense wideband depression found during SD under anesthesia in our study corresponds well to the SD-induced complete ECoG depression previously described in anesthetized rabbits, rats, mice, and pigs [ 4 , 5 , 10 ]. In both awake and anesthetized states, the drop in the power of ipsilateral cortical oscillations always peaks during dc shifts, confirming the well-established idea that the most prominent deactivation of the cortex occurs during the depolarization phase of SD as a result of a depolarization block of neuronal activity [ 4 ]. However, multiple lines of preclinical evidence show that SD-induced suppression of spontaneous cortical activity lasts significantly longer than the depolarization phase of SD (e.g., [ 5 ]). Our results are in line with these data and show that the duration of the ECoG depression may depend on the vigilance state, the cortical region, and the type of cortical oscillations. Remote effects of SD on high-frequency cortical activity of the awake brain A striking feature of SD effects on the cortical activity of awake animals was the bilateral depression of high-frequency gamma oscillations induced by unilateral SD. A decrease in gamma power was observed both in the cortex invaded by SD and in the unaffected contralateral cortex, with region-specific time courses: short-lasting depression in the occipital cortex and long-lasting, early-onset decline in the frontal cortex. Bilateral suppression of alpha-band (8–11 Hz) cortical oscillations has been described following KCl-induced unilateral cortical SD and was interpreted as a manifestation of diaschisis [ 34 ].
Our study shows that the SD diaschisis selectively involves fast cortical oscillations and exhibits state- and region-dependent features. Recently, we reported that in awake rats a single unilateral cortical SD elicited a transient loss of interhemispheric functional interactions, especially pronounced in the beta-gamma frequency bands [ 35 ]. This functional decoupling may underlie the ECoG depression produced by contralateral SD. In the frontal cortex of awake rats, beta- and gamma-band power began to decrease soon after SD initiation in the BLA and progressively diminished as SD approached the cortex (during the first two minutes post-injury). A similar depression of beta cortical activity starting long before SD appearance at the recording sites was reported in patients with traumatic brain injury, in whom SD occurrence was closely associated with reduced beta-band power [ 36 ]. Given that the gamma depression preceding cortical SD was absent after sham stimulation not triggering SD, and that during the early period SD traveled over the remote subcortical and cortical regions [ 18 , 20 – 22 ] (see Fig. 1 ), we conclude that the early-onset cessation of fast activity was produced by network effects of SD invading distant brain regions. Previously, it has been shown that neuronal (unit) activity and sensory evoked responses of the cerebral cortex are reduced during subcortical (striatal and thalamic) SDs [ 37 , 38 ]. That is, subcortical SD can alter cortical function through transient elimination of afferent inputs to the cortex and functional disconnection of the cortex from deep brain regions during the depolarization phase of subcortical SD [ 6 , 16 ]. However, recent experimental evidence indicates that the distant effects of SD may be more complex. In awake mice, cortical SD has been reported to elicit transient neuronal activation of the ipsilateral thalamus [ 39 ]. The frequency-, state- and region-specific character of the remote effects of SD may explain why they have usually remained unnoticed in experimental studies. Also, in most studies SD was initiated within the rodent cortex, the small size of which hampers detection of the distant effects of SD. A role of the localization of the initiation site (the parieto-occipital cortex in most studies and the amygdala in the present work) cannot be excluded. Our experimental design, with initiation of SD in a remote extracortical region and a significant time lag between SD induction and its arrival at the recording points, better mimics SD traveling over long distances such as those observed in the human cortex. It remains unclear why the remote effects of SD are so strongly expressed in the frontal cortex. In this cortical region, gamma-band depression preceded SD arrival and involved both the affected and unaffected hemispheres (Fig. 4 ). Urethane anesthesia abolished the early pre-SD and contralateral gamma depression (Fig. 5 ). Similarly, thalamic activation during cortical SD was eliminated by anesthesia [ 39 ]. Anesthetics are known to diminish the activity of brainstem arousal nuclei and to affect bidirectional communication across the brainstem, thalamus, and cortex. The frontal cortex, which receives robust ascending projections from arousal- and pain-modulating brainstem nuclei [ 28 , 29 ], may be particularly sensitive to these changes in cortico-subcortical interactions. Also, the vulnerability of the frontal cortex may result from its contiguity to the subcortical pathways of SD traveling from the amygdala (Fig.
1 ), which implies the existence of spatial limits for the expression of the remote effects. Finally, the anatomical/functional connections between the sites driving the remote SD effects and the two cortical regions may differ. The frontal cortex is the most important recipient of direct input from the periaqueductal gray matter (PAG), while the occipital cortex receives only a minor PAG projection [ 28 ]. Migraine is a disorder of cortico-subcortical interactions. It is thought that activation of subcortical structures drives the symptomatology of the premonitory and headache phases of the migraine attack, while cortico-thalamic events are accepted to determine the sensory manifestations of the aura phase [ 1 , 2 , 14 , 26 ]. Cortical SD has been shown to invade the visual domain of the thalamic reticular nucleus [ 14 ] and to activate the thalamic ventral posteromedial nucleus [ 37 ], both of which are relevant to sensory information processing. Aberrant activity of brainstem arousal and nociceptive networks during the premonitory period is suggested to initiate migraine attacks [ 1 , 2 , 26 ]. Hyperexcitation of ascending subcortico-cortical pathways can trigger cortical SD in awake animals [ 40 , 41 ]. On the other hand, SD involving subcortical structures has also been proposed as a plausible mechanism for some aura symptoms in patients [ 3 , 6 , 14 ]. The present study shows that in awake conditions SD exerts remote effects on fast cortical activity and that this effect is abolished by urethane anesthesia. This suggests that SD occurring in the conscious brain of migraine patients not only exerts direct local ECoG depression in the affected tissue but may also produce indirect suppression of high-frequency gamma oscillations in distant brain regions. Effect of a single unilateral SD induced in the amygdala on spontaneous behavior The present study shows that SD traveling from the BLA to the cortex is reliably accompanied by episodes of forced circling, freezing behavior, and “chewing” movements. As shown previously, circling behavior is time-locked to SD invasion of the striatum [ 18 ]. Its reproducible occurrence soon after the BLA pinprick indicates regular propagation of SD initiated in the amygdala to the striatum. An association of cortical SD with freezing behavior has been reported previously [ 14 , 16 , 18 ]. It has been suggested that the mechanisms of SD-related freezing involve the amygdala, which plays a critical role in the expression of fear and anxiety behavior [ 16 ]. Our findings support this idea and show that recurrent episodes of freezing appear immediately after SD initiation in the amygdala. In the present study, a new behavioral pattern associated with SD, recurrent masticatory jaw movements, was identified. Given that this behavior is generated by trigeminal circuits controlling orofacial motor function [ 42 ], the SD-associated “chewing” movements may indicate activation of downstream nociceptive pathways during SD propagation in the brain. Relevance to the pathogenesis of migraine aura Migraine aura is a neurologic condition characterized by transient visual, somatosensory, and language symptoms that develop before the headache phase of a migraine attack. Cortical SD induced in experimental animals represents a highly translational model of this acute neurological deficit. Though experimental SD recapitulates many characteristics of migraine aura in human subjects, some features of SD do not match the clinical pattern of aura well [ 3 ].
Our study suggests that this mismatch may be related, at least in part, to the fact that the main body of our knowledge about the electrographic characteristics of SD has been obtained in anesthetized animals. Here, we found that SD elicits more complex changes in cortical activity in the awake state than those observed under anesthesia. Some of the changes detected only in the awake condition may underlie several unexplained features of migraine aura. First, bilateral aura symptoms are frequently observed in migraine patients, but the mechanisms of this aura pattern remain unclear given the properties of unilateral SD described in anesthetized animals, where depression is confined to the cortex affected by SD. Multiple experimental studies, including the present one, showed that under anesthesia the suppressive effect of unilateral SD is confined to the ipsilateral cortex. Here, we found for the first time that in awake conditions the contralateral cortex unaffected by SD also shows transient depression of cortical gamma oscillations. It is known that high-frequency cortical activity plays a critical role in the processing of sensory information, and impaired regulation of this activity is regarded as a hallmark of neurological dysfunction. The ability of unilateral SD to produce reversible bilateral depression of gamma oscillations in the awake brain may potentially underlie bilateral sensory disturbances during migraine aura. Second, visual and somatosensory aura symptoms can appear in rapid succession or simultaneously. Such symptomatology cannot be explained by SD traveling directly over the human cortex, given the long distance between the visual and somatosensory cortical regions. Moreover, functional imaging studies did not find such propagation patterns in patients and showed that the event underlying visual aura propagates along a single gyrus or sulcus [ 43 ]. Based on these clinical data, multifocal triggering of cortical SD during aura has been suggested [ 2 ]. Our study revealed that in wakefulness beta-gamma depression spreads beyond a spatially limited SD event and produces ECoG depression in broader cortical areas without invading them. Restricted traveling of SD along a gyrus/sulcus can thus drive visual aura and exert a distant effect on the activity of the somatosensory cortex, yielding several sensory symptoms simultaneously. Given the important role of high-frequency gamma oscillations in the frontal cortex in network-level computations, their prolonged depression produced by SD in the awake brain may underlie cognitive impairments during migraine attacks. Finally, the majority of migraine patients exhibit positive sensory symptoms, which remain unexplained by the mainly suppressive effect of SD on cortical activity. Previously, we showed that in awake rats cortical SD is followed by transient hyperexcitation of the ipsilateral cortex [ 35 ]. The present study confirmed this finding and showed that in the awake state SD is followed by increased delta power in the occipital cortex. It can be speculated that this post-SD activation of the visual cortex may be perceived as positive aura symptoms. A strength of the present study was the reliable induction and recording of SD in freely behaving animals, which better mimics the conditions of migraine aura in patients. Further, the detailed investigation of the temporal evolution of cortical activity following SD is an important advantage of the study.
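The band-specific time courses described above can be made concrete with a short analysis sketch. The following Python snippet is a minimal illustration, not the authors' actual pipeline: it derives a time-resolved power estimate per frequency band from a single ECoG channel via a spectrogram, so that the depth and duration of post-SD depression can be read off per band. The sampling rate, window length, and band edges are illustrative assumptions.

```python
# Minimal sketch (not the authors' analysis pipeline): time-resolved
# band power of one ECoG channel, to illustrate how band-specific
# depression after SD can be quantified. Sampling rate, window length
# and band edges are illustrative assumptions.
import numpy as np
from scipy.signal import spectrogram

FS = 1000                                                   # assumed sampling rate, Hz
BANDS = {"delta": (1, 4), "beta": (13, 30), "gamma": (30, 80)}  # assumed edges

def band_power_timecourse(ecog, fs=FS, win_s=2.0, overlap=0.5):
    """Return spectrogram time axis and per-band power time series."""
    nper = int(win_s * fs)
    f, t, Sxx = spectrogram(ecog, fs=fs, nperseg=nper,
                            noverlap=int(nper * overlap))
    out = {}
    for name, (lo, hi) in BANDS.items():
        idx = (f >= lo) & (f < hi)
        # integrate spectral density over the band at each time bin
        out[name] = np.trapz(Sxx[idx, :], f[idx], axis=0)
    return t, out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake = rng.standard_normal(60 * FS)   # 60 s of surrogate signal
    t, power = band_power_timecourse(fake)
    print({k: float(v.mean()) for k, v in power.items()})
```

Normalizing each band's time course to a pre-SD baseline would then quantify how much longer gamma stays depressed than delta.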
Limitations include the small groups of animals, which resulted from the difficulty of obtaining long artifact-free ECoG recordings in freely behaving rats, and the low spatial coverage of SD propagation. The lack of direct electrographic evidence of SD occurrence during migraine attacks in patients complicates translation of the experimental results to humans. Pathways of the non-synaptic propagation of SD over the lissencephalic cortex of rodents may differ from those in the gyrencephalic cortex of humans. The non-uniform velocity of SD propagation in gyri and sulci [ 44 ] is distinct from the constant rate of SD expansion across the lissencephalic cortex of rats. Complex spatiotemporal patterns of SD spread, including spiral and reverberating waves, seem to be more common in the gyrencephalic cortex [ 45 ]. To sum up, our study shows that slow and fast cortical oscillations exhibit a pronounced difference in their vulnerability to the suppressive effect of SD. In the conscious, drug-free brain, high-frequency gamma oscillations involved in sensory and pain processing are particularly sensitive to SD influence and show spatially broad, long-lasting cessation. Why gamma activity, which plays a critical role in the function of the conscious brain and in pain perception, is more vulnerable to the suppressive effects of SD in awake conditions remains unclear and needs further investigation. The state-dependent features of the transient cortical dysrhythmia induced by SD should be considered when translating experimental data to the clinical management of migraine and to the understanding of the pathophysiological mechanisms of migraine aura.
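The interhemispheric functional decoupling invoked above (the transient loss of interhemispheric interactions, most pronounced in the beta-gamma bands [ 35 ]) can likewise be indexed with standard tools. Below is a hedged sketch, not the authors' method: magnitude-squared coherence between homotopic left/right channels, averaged over an assumed beta-gamma band; the sampling rate and band edges are assumptions.

```python
# Minimal sketch (illustrative, not the authors' method): magnitude-
# squared coherence between homotopic left/right ECoG channels, one way
# to index the interhemispheric coupling whose transient loss after SD
# is discussed above. Sampling rate and band edges are assumptions.
import numpy as np
from scipy.signal import coherence

FS = 1000  # assumed sampling rate, Hz

def band_coherence(left, right, lo, hi, fs=FS, win_s=2.0):
    """Mean magnitude-squared coherence of two signals within [lo, hi) Hz."""
    f, cxy = coherence(left, right, fs=fs, nperseg=int(win_s * fs))
    idx = (f >= lo) & (f < hi)
    return float(cxy[idx].mean())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    shared = rng.standard_normal(30 * FS)            # common drive
    left = shared + 0.5 * rng.standard_normal(30 * FS)
    right = shared + 0.5 * rng.standard_normal(30 * FS)
    print("beta-gamma coherence:", band_coherence(left, right, 13, 80))
```

Tracking this coherence in sliding windows around SD onset would show the transient interhemispheric decoupling as a dip relative to baseline.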
Background Spreading depolarization (SD), the underlying mechanism of migraine aura and a potential activator of pain pathways, is known to elicit transient local silencing of cortical activity. Sweeping across the cortex, this electrocorticographic depression is supposed to underlie the spreading negative symptoms of migraine aura. Most information about the suppressive effect of SD on cortical oscillations was obtained in anesthetized animals, while ictal recordings in conscious patients failed to detect EEG depression during migraine aura. Here, we investigate the suppressive effect of SD on spontaneous cortical activity in awake animals and examine whether anesthesia modifies the SD effect. Methods Spectral and spatiotemporal characteristics of spontaneous cortical activity following a single unilateral SD elicited by amygdala pinprick were analyzed in awake freely behaving rats and after induction of urethane anesthesia. Results In wakefulness, SD transiently suppressed cortical oscillations in all frequency bands except delta. Slow delta activity did not lose power during SD and even increased it afterwards; high-frequency gamma oscillations showed the strongest and longest depression under awake conditions. Unexpectedly, gamma power decreased not only during SD invasion of the recording cortical sites but also while SD occupied distant subcortical/cortical areas. The contralateral cortex, not invaded by SD, also showed transient depression of gamma activity in awake animals. Introduction of general anesthesia modified the pattern of SD-induced depression: SD evoked the strongest cessation of slow delta activity, milder suppression of fast oscillations, and no distant changes in gamma activity. Conclusion Slow and fast cortical oscillations differ in their vulnerability to SD influence, especially in wakefulness. In the conscious brain, SD produces stronger and spatially broader depression of fast cortical oscillations than of slow ones. The frequency-specific effects of SD on the cortical activity of the awake brain may underlie some previously unexplained clinical features of migraine aura. Supplementary Information The online version contains supplementary material available at 10.1186/s10194-023-01706-x. Keywords
Supplementary Information
Acknowledgements Not applicable. Authors’ contributions T.M.: analysis and interpretation of data, design of the work, writing the manuscript and preparation of Figs. 2 , 3 , 4 , 5 , 6 , 7 and 8 ; M.S.: data acquisition, histological analysis; preparation of Fig. 1 ; I.P.: data acquisition; L.V.: conception and design of the work, acquisition and interpretation of data, writing the manuscript. All authors read and approved the final manuscript. Funding This work was supported by Russian Science Foundation, grant number 22–15-00327. Availability of data and materials The datasets used and analyzed during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate The study protocol was approved by the Ethics Committee of the IHNA RAS (protocol N1 from 01.02.2022). Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
J Headache Pain. 2024 Jan 15; 25(1):8
oa_package/d7/97/PMC10789028.tar.gz
PMC10789029
38221606
Introduction There is steadily increasing interest in monoclonal antibodies (mAbs) to prevent and treat infectious diseases, both as endemic and as imported conditions. This review focuses exclusively on applications in travel medicine, and not on potential use in disease-endemic settings. A few years ago, only two mAbs were registered; in 2023, more than ten mAbs are registered or have been granted emergency use authorization [ 1 ]. Not least due to the coronavirus disease 2019 (COVID-19) pandemic, mAbs have been put into the spotlight, although multiple phase 1 studies were already underway in 2019 for other infectious diseases, such as malaria and yellow fever [ 2 – 4 ]. Monoclonal antibodies (i) could be applied prophylactically before traveling abroad (e.g., for the prevention of malaria), which is called passive immunization (in contrast to the active immunization achieved by vaccination); (ii) could be used as post-exposure prophylaxis to prevent active disease (e.g., rabies); or (iii) could be used to treat manifest travel-acquired infections (dengue fever, yellow fever). The use of mAbs in travel medicine might have benefits under specific circumstances when compared to standard vaccination and prophylaxis strategies. For example, using mAbs to prevent a Plasmodium falciparum infection, as recently demonstrated, would require only a single intravenous or intramuscular administration before departure, inducing protective immunity lasting for at least 12 weeks without significant adverse effects, as compared to daily or weekly oral drug intake with gastro-intestinal or psychiatric adverse effects [ 2 ]. Other examples would be the prophylactic use of single-dose mAbs against hepatitis A or yellow fever for immunocompromised travelers, who might be (though not necessarily) unable to generate an adequate antibody response, or who should not be given live-attenuated vaccines (e.g., the yellow fever vaccine) [ 5 , 6 ]. Furthermore, successful effort has been put into the treatment of diseases with high mortality and morbidity, such as Ebola virus disease (EVD) and yellow fever, using mAbs [ 3 , 7 ]; and newer therapeutic options are being developed for rabies and dengue fever [ 8 ] (NCT04273217/NCT03883620). This review discusses the prospects of using mAbs for the prevention (pre- and post-exposure) and treatment of (‘tropical’) infectious diseases seen in travelers, and provides an update on the mAbs currently being developed against other infectious diseases, which could potentially be of interest for the field of travel medicine. Immunoglobulins administered for the prevention and treatment of infectious diseases Immunoglobulins have been used for decades as primary prophylaxis, as post-exposure prophylaxis (PEP), and as treatment of fulminant infections, severe toxin-mediated or auto-immune-mediated post-infectious complications, or chronic infections (Table 1 ). In travel medicine, it is rather common to administer hyper-immune globulins against hepatitis A virus (anti-HAV) or hepatitis B virus (anti-HBV), derived from human convalescent plasma, for passive immunization when a traveler is unable to produce immunoglobulins due to an immunodeficiency, when there is not sufficient time to become fully vaccinated before departure, or when the traveler is too young to be vaccinated (children < 6 months of age). Post-exposure prophylaxis (PEP) with HRIG (human rabies immunoglobulin), a form of convalescent plasma therapy (CPT) against rabies, is well known and widely used.
Depending on the severity of the contact with the suspected rabid animal and the vaccination status of the patient prior to the bite, HRIG is advised by the World Health Organization (WHO) as PEP and should be given within a short time frame to prevent infection [ 9 ]. In the past, there have been cases where convalescent plasma against rabies and EVD was administered to prevent mortality, with mixed results [ 10 , 11 ]. However, over the past decades, synthetically derived mAbs have proven successful in targeting infectious diseases, and they could potentially replace the human- and/or animal-derived hyper-immune globulins or hyper-immune sera. An introduction to monoclonal antibodies Structure and function Human antibodies are molecules generated by plasma cells or stimulated memory B cells following infection with a pathogen, or in response to vaccination. Immunoglobulins (Ig) are structured as Y-shaped heterodimers composed of two light chains of 25 kDa each and two heavy chains of at least 50 kDa, depending on the Ig isotype. The heavy and light chains are linked by multiple disulfide bridges and non-covalent interactions, and vary in both the number of bridges and interactions [ 12 ]. Functionally, the two antigen-binding fragment (Fab) domains can bind and neutralize pathogens; they are linked to the crystallizable fragment (Fc) domain by a hinge region that gives them more flexibility, thereby enabling them to interact strongly with antigen. The Fc domain is able to mediate effector functions (antibody-dependent cellular toxicity, complement-dependent cytotoxicity, and antibody-dependent phagocytosis) on various immune cells and complement protein C1q. It is able to bind to other proteins such as the Fcγ receptors (FcγRs). The Ig isotypes vary depending on which gene segments (alpha, mu, gamma, epsilon or delta) recombine with the variable region, whereby each subclass specializes in the elimination of different types of pathogens. The IgG class is the main isotype in blood and extracellular fluid, and the IgG1 isotype has been used most often as the basis for the development of therapeutic mAbs against infectious diseases [ 12 , 13 ]. Strategies to identify human therapeutic mAbs for infectious diseases can be classified as either targeted, whereby mAbs that bind a specific antigen are directly isolated, or agnostic, in which functional assays are performed on secreted immunoglobulins obtained from the supernatant of single-cell cultures. More details on the function of mAbs and the strategies to develop them are described in the review by Pantaleo et al. [ 12 ]. Monoclonal antibodies and their clinical use Synthetically derived mAbs (from mouse or human cell lines) were first described in 1975 by Köhler and Milstein, targeting sheep red blood cells [ 14 , 15 ]. The first mAb registered in connection with an infectious disease was palivizumab (Synagis®, AstraZeneca) in 1998, which was developed as a prophylactic agent against RSV infection in premature infants and infants with bronchopulmonary dysplasia [ 16 ]. Although multiple clinical trials of newer mAbs had already started prior to the COVID-19 pandemic, the number of registered mAbs for infectious diseases has since grown exponentially (Fig. 1 ). The advantages of neutralizing mAbs compared to convalescent plasma therapy are numerous.
Because they are synthetically derived, there is no risk of blood-borne infection; the time to development of detectable high-affinity antibodies is shorter; the molecules per unit are identical; availability does not depend on patient material or the number of patients available; and there is no risk of low antibody titers resulting in inadequate pathogen neutralization. Furthermore, there is less chance of anaphylaxis (no relation to selective IgA deficiency) or prion transmission. Lastly, due to molecular engineering, the half-life of mAbs can be prolonged compared to convalescent plasma therapy, and the potential risk of antibody-dependent enhancement (ADE) can be reduced by administering large amounts of pathogen-specific antibodies and using plasma with high-affinity neutralizing antibodies [ 4 , 17 ]. A potential disadvantage of mAbs could be the risk of loss of efficacy, as mAbs target a single specific epitope, unlike convalescent plasma therapy, which can be derived from multiple donors and is therefore polyclonal. The latter, however, could be overcome by combining mAbs targeting different epitopes in order to create synergistic or additive effects [ 18 ]. Other disadvantages could be the risk of anaphylaxis or sensitization (which could be seen as an occupational hazard during drug handling). Of note, the cost of producing mAbs exceeds that of producing vaccines, making them routinely available only in high-income countries [ 19 ]. Moreover, fermentation-tank production capacity is limited, thus rendering mass production difficult to envisage, if not impossible. For illustration, whereas mAbs are usually applied in microgram amounts per patient for non-infectious disease indications, up to 10 g of mAbs might be needed to treat an Ebola patient successfully [ 20 , 21 ]. Below, we summarize relevant novel mAbs developed for infectious diseases and discuss their potential as primary prophylaxis, PEP and therapeutic options for travel medicine applications.
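The extended-half-life advantage mentioned above can be illustrated with simple decay arithmetic. The sketch below assumes single-exponential elimination, which is a simplification (real mAb kinetics are multiphasic), and it reuses figures quoted later in this review for the West Nile virus mAb MGAWN1 (Cmax of 953 μg/mL, a protective target level 28-fold lower, half-life of 26.7 days).

```python
# Back-of-the-envelope sketch: how long a mAb stays above a protective
# threshold under simple single-exponential decay. Real mAb kinetics are
# multiphasic, so this is only an illustration. Numbers reuse the MGAWN1
# figures quoted later in this review (Cmax 953 ug/mL, target level
# 28-fold lower, half-life 26.7 days).
import math

def days_above_threshold(c_max, c_target, t_half_days):
    """Time (days) until concentration decays from c_max to c_target."""
    return t_half_days * math.log2(c_max / c_target)

c_max = 953.0            # ug/mL, reported maximum serum concentration
c_target = c_max / 28.0  # protective target, 28-fold below Cmax
print(days_above_threshold(c_max, c_target, 26.7))  # ~128 days
```

The same arithmetic shows why extending a half-life from three weeks to several months, as was done for some malaria mAbs discussed below, multiplies the protected interval proportionally.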
Conclusion and future perspective The use of immunoglobulins as a (preventive) treatment strategy against infectious diseases has a long-standing history. Development of mAbs for (non-infectious and) infectious disease applications has evolved into one of the most dynamic fields in therapeutics development today. In the field of infectious diseases, at least since the beginning of the COVID-19 pandemic, the pharmaceutical industry seems to be putting all its effort into the (pre)clinical development of these mAbs, with no expenses spared [ 18 ]. The increasing use of mAbs for preventive and curative purposes will put more pressure on healthcare systems and, especially, raise costs. Ethical questions arise as to whether asymmetrical use, as a luxury affordable only for travelers from non-endemic areas while patients in endemic areas are deprived of the potential benefits mainly for cost reasons, is desirable, or whether resources should be devoted entirely to fighting infectious (tropical) diseases on a global scale. One could argue that this is comparing apples and oranges, and that both, developing treatment strategies for travelers and concurrently working on the eradication of diseases with a high burden in endemic countries, could go hand in hand. Using both preventive and therapeutic mAbs targeting infectious diseases in endemic areas would greatly reduce the burden (see the examples of mAbs created against P. falciparum malaria). However, understanding the various barriers in healthcare systems that prevent patients from getting the medicines they need is critical to establishing a global operations strategy for these mAbs [ 89 ]. Barriers such as product pricing, patient insurance, regulatory approval delays, prescribing practices, funding uncertainty and inefficient supply chains could prevent patients from receiving reliable access to monoclonal antibodies, especially in low- and middle-income countries (LMIC). Operational goals for essential medicines are informed by the WHO: such medicines should be affordable, available, and accessible. In the past, leveraging economies of scale has been key to greatly expanding the global affordability, accessibility and availability of life-saving vaccines and antiretroviral small-molecule drugs. Successful introduction of mAbs will require a similar high-volume, low-cost operations strategy before implementation. For example, it was calculated that seasonal administration of extended half-life mAb immunoprophylaxis targeting RSV at birth in children from Mali would prevent 1300 hospitalizations and 31 deaths, and avert 878 disability-adjusted life-years (DALYs), for children through the first three years of life. Using these extended half-life mAbs as part of the preventive strategy was shown to be the optimal next-generation strategy for the prevention of RSV lower respiratory tract infection (LRTI) in Mali, provided the product were priced similarly to routine pediatric vaccines, which depends on many factors [ 90 ]. Process and operations strategies to enable global access to antibody therapies have been reviewed in detail by Kelley et al. [ 89 ]. When mAbs are used as a therapeutic option for travelers, the cost–benefit ratio could be more favorable, as these mAbs mostly target life-threatening or severely debilitating diseases such as rabies, yellow fever and EVD, and, when administered in a timely manner, could lead to significant reductions in patient mortality and in costs by cutting down the duration of hospitalizations.
For travelers, the use of a single dose of an extended half-life mAb against malaria, preventing disease for three consecutive months, would be preferable to a daily dose of malaria chemoprophylaxis if it also outweighed the costs. The costs of mAbs in high-income countries often depend on price agreements negotiated by governmental bodies with pharmaceutical companies and are therefore difficult to determine up front. Although vaccines as a preventive treatment strategy are most likely less costly than mAbs for the immunocompetent traveler, this group of travelers has much to gain from mAbs, similar to infants receiving the RSV mAb as immunoprophylaxis at birth, when the immune system has not yet fully developed [ 90 ]. Luckily, the production efficiency of mAbs has increased dramatically over recent decades, and cell-culture expression levels around 4 g/L or even higher are common [ 91 ]. Recent estimates of production costs, which, depending on process and volume, range from US$20/g to US$80/g, could render mAb product pricing more affordable across settings and applications [ 92 ]. If affordable, a wide range of mAb applications could be deployed to fight ‘tropical’ infectious diseases, or better, infectious diseases in low-, middle- and high-income countries alike; applications in returning travelers should pave the way for ubiquitous access, where indicated, to roll out mAbs to fight infectious diseases globally.
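A back-of-the-envelope calculation using the figures quoted in this review makes the affordability point concrete: at US$20–80 per gram of produced mAb, a 10 g Ebola treatment costs roughly US$200–800 to manufacture, and a 5 mg/kg malaria prophylaxis dose far less. The 70 kg body weight below is an assumption, and production cost is of course not the eventual market price.

```python
# Rough production-cost sketch using figures quoted in this review:
# US$20-80 per gram of mAb, up to 10 g per Ebola treatment, and a
# 5 mg/kg malaria prophylaxis dose. The 70 kg body weight is an
# assumption; production cost is far below the market price.
COST_PER_GRAM = (20.0, 80.0)  # US$/g, recent estimate range

def production_cost(dose_g):
    """Return (low, high) US$ production cost for a dose in grams."""
    lo, hi = COST_PER_GRAM
    return dose_g * lo, dose_g * hi

print(production_cost(10.0))          # Ebola treatment: $200-$800
print(production_cost(0.005 * 70.0))  # 5 mg/kg x 70 kg = 0.35 g: $7-$28
```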
For decades, immunoglobulin preparations have been used to prevent or treat infectious diseases. In just the past few years, monoclonal antibody (mAb) applications have taken flight and are increasingly dominating this field. In 2014, only two mAbs were registered; by the end of October 2023, more than ten mAbs were registered or had been granted emergency use authorization, and many more are in (pre)clinical phases. The COVID-19 pandemic in particular has generated this surge in licensed monoclonal antibodies, although multiple phase 1 studies were already underway in 2019 for other infectious diseases such as malaria and yellow fever. Monoclonal antibodies could function as prophylaxis (e.g., for the prevention of malaria) or could be used to treat (tropical) infections (e.g., rabies, dengue fever, yellow fever). This review discusses the prospects of, and obstacles to, using mAbs in the prevention and treatment of (tropical) infectious diseases seen in the returning traveler, and provides an update on the mAbs currently being developed for infectious diseases, which could potentially be of interest for travelers. Supplementary Information The online version contains supplementary material available at 10.1186/s40794-023-00212-x. Keywords
Approach For this scoping review, articles discussing mAbs with regard to the treatment of infectious diseases were searched and downloaded from the publicly available databases PubMed and Google Scholar. Registered immunoglobulin preparations for the prevention and treatment of infectious diseases (Table 1 ) were identified on the publicly available websites of the European Medicines Agency (EMA) and the U.S. Food and Drug Administration (FDA), or via public databases or the websites of the pharmaceutical companies producing the mAbs. Furthermore, articles on (pre)clinical trials of unregistered mAbs targeting infectious diseases (Table 2 ) were searched and downloaded from PubMed using the key search terms: [diseases] AND [monoclonal antibody therapy]. As shown in Table 2 , we focused our analysis on the infectious diseases found among the top-10 diseases seen in travelers returning to Europe over the past two decades, as reported earlier [ 22 ] (see the first column of Table 2 for the full list), excluding diseases with a predominantly self-limiting clinical course, such as travelers’ diarrhea caused by viral infections [ 22 ]. Furthermore, in the section labeled ‘other’, some diagnoses, such as typhoid fever and leptospirosis, have been added, as these diseases are also frequently seen in a travel clinic and therefore seem relevant to this review. Articles published between 2013 and October 21st, 2023, and deemed relevant to our focused topic were included in this review. All relevant literature, including original studies and clinical trials, was considered as long as its topic fell within the scope. Articles older than ten years, non-English abstracts, and preclinical studies with in vitro data only (without in vivo experiments) were excluded from this review. Regarding clinical trials involving mAbs, the registry clinicaltrials.gov was searched by the authors (October 21st, 2023), and mAbs undergoing phase 1, 2, 3, and 4 clinical trials (Table 2 ) were included in this review. Literature review on the development of monoclonal antibodies with potential travel medicine applications Diarrheal disease Acute diarrheal disease Acute diarrheal disease is quite common among travelers, both during and shortly after their return, and was diagnosed in 9.3% of the evaluated ill travelers presenting with symptoms to a EuroTravNet clinic between 1998 and 2018 [ 22 ]. Most disease courses are generally mild and self-limiting and most often do not necessitate the use of prescription drugs such as antibiotics, although in some cases the condition can progress to dysentery and even toxic megacolon. Bacteria are regarded as the most predominant enteropathogens and account for most of the cases seen in travel clinics. Common pathogens cultured from or found via PCR in the stool of travelers are non-typhoid Salmonella (S.) spp. , Shigella spp ., Yersinia enterocolitica, Campylobacter jejuni, enterotoxigenic Escherichia (E.) coli , and in rare cases Vibrio (V.) cholerae . Over the past years, there have been some publications on mAbs targeting these bacteria, especially S. typhimurium and V. cholerae , but none of these have entered the clinical trial phase thus far (Table 2 ). Viruses such as astrovirus, norovirus and rotavirus are also known to cause acute travelers’ diarrhea but are generally self-limiting in adults. Acute diarrheal disease can also be caused by protozoal parasites such as Entamoeba histolytica and Cryptosporidium spp., although only the latter has mAbs in the preclinical phase ( Supplementary file ).
Chronic or persistent diarrheal disease Persistent or chronic diarrhea is also among the top-10 diagnoses seen in travelers or migrants presenting with symptoms to a travel clinic [ 22 ]. Parasites are most often isolated from these patients, although some bacteria, such as enteroaggregative or enteropathogenic E. coli or Clostridioides (C.) difficile , are known to cause persistent symptoms. For the latter, bezlotoxumab, a fully human mAb which binds C. difficile toxin B, is used as pre-exposure prophylaxis (PrEP) for patients with recurrent C. difficile infections, but it is generally not used in the travel medicine setting (Table 1 ). The risk of a traveler acquiring a protozoal rather than a bacterial infection increases with the duration of symptoms. Giardia is the most likely parasitic pathogen to cause persistent symptoms, which may last for months if left untreated. Other protozoal pathogens such as Cryptosporidium spp ., Cyclospora , and Entamoeba histolytica are also found via PCR in the stool of these patients. However, almost none of the abovementioned pathogens have targeted mAbs in the clinical stages thus far. Acute viral syndromes Most of the currently licensed mAbs used as (preventive) treatment strategies target viral infectious diseases (Table 1 ). Since the emergence of COVID-19 in 2019, six mAbs targeting SARS-CoV-2 have been licensed. Before COVID-19, there were only four licensed mAbs, targeting a variety of viral infections including RSV, HIV-1, rabies, and EVD [ 18 ]. Viral syndromes were also among the top-three diagnoses seen in returning travelers presenting with illness [ 22 ]. When searching for mAbs targeting viral infections, a wealth of (pre)clinical studies was identified, mainly targeting viruses that yield the highest disease burden due to their virulence (e.g., EVD, rabies), high prevalence (e.g., hepatitis B and C) or high incidence (e.g., dengue, Zika, chikungunya) (Table 2 ). Table 2 presents the number of published preclinical studies that include in vivo data; the corresponding PMID identifiers can be found in the Supplementary file . Due to the wealth of studies including in vitro data only (especially on finding conserved epitope binding sites with potentially high immunity) without an evident clinical perspective, only in vivo (human and/or animal) studies have been included in Table 2 . For diseases such as tick-borne encephalitis, Rift Valley fever, Lassa fever, Marburg virus disease, Crimean-Congo hemorrhagic fever, hantavirus disease, hepatitis A, hepatitis E and Mpox, only preclinical studies could be found, and none of the potential mAbs has progressed into a clinical trial trajectory. All mAbs targeting a viral disease and undergoing phase 1, 2, 3 or 4 clinical trials, however, are reviewed below. Dengue Dengue is a (sub)tropical arboviral disease with an exponentially increasing incidence worldwide [ 23 ], with estimates running up to 50% of the global population at risk, and dengue now features among the most frequently established diagnoses in travelers returning with a febrile condition from endemic areas [ 22 ]. Most people experience only mild symptoms when infected with the dengue virus, although in some cases patients can develop hemorrhagic disease or shock syndrome.
A risk factor for the development of severe disease is pre-existing immunity against a different serotype (heterologous antibodies), through a mechanism called antibody-dependent enhancement (ADE). As the incidence rises, the risk for travelers of becoming infected with a different serotype also increases. Currently, no specific treatment exists for dengue, although dengue vaccine development has lately made quantum-leap progress, with several vaccines entering late stages of development and registration [ 24 , 25 ]. ADE, a feared complication of dengue vaccination seen in earlier vaccine trials, continues to be a matter of concern. The most recently registered dengue vaccine, TAK-003 (Qdenga®), which has been marketed since spring 2023, has not yet shown any important safety risks and is registered for the indication of prevention of (secondary) dengue in travelers [ 26 ]. Although this is very promising, the currently FDA/EMA-licensed vaccines are live-attenuated and cannot be administered to pregnant or immunocompromised individuals. Due to the high incidence and the potential progression to severe disease, research on broadly protective antibodies, for instance targeting the flavivirus NS1 protein, is underway [ 27 ] (Table 2 and Supplementary file ). When targeting the NS1 binding site, the risk of ADE is reduced, as ADE is mainly seen when targeting the E protein, and the highly conserved NS1 epitope can achieve flavivirus (dengue virus serotypes 1 to 4, yellow fever virus, Zika virus, West Nile virus) cross-protection [ 28 ]. Two phase 1 studies with mAbs targeting dengue (AV-1 and Dengushield) have been completed, but at the time of writing, results had not yet been reported in the peer-reviewed literature or in the clinical trials registry (NCT04273217/NCT03883620). Zika From 2015 onwards, Zika virus disease (ZVD), moving eastwards through the peri-equatorial Pacific region, swept through the Americas, naturally also with implications for travelers [ 29 , 30 ]. Although the risk of chronic morbidity was low and, relative to overall patient numbers, few deaths in adults were reported, the biggest threats arose from an increase in babies born with microcephaly during this epidemic (mainly in Brazil) due to mothers being infected especially during early pregnancy, a surge in Guillain-Barré syndrome case numbers, an extremely rare but life-threatening immune-induced thrombocytopenia and, overall, a risk of sexual transmission in the viremic phase [ 31 , 32 ]. As there is no vaccine or treatment available, mAbs neutralizing Zika virus would be of great interest, especially for pregnant women traveling to an endemic area. Two phase 1 studies have been registered to study the safety and tolerability of Tyzivumab, a single-IV-infusion mAb. One study was completed in 2018 but has not yet been published. The other phase 1 trial was withdrawn due to the decline in Zika virus cases (NCT03443830/NCT03776695). Furthermore, a phase 1 trial has been set up to evaluate the safety, tolerability and pharmacokinetic profile of DMAb-ZK190 in humans (NCT03831503). Synthetic DNA-encoded monoclonal antibodies (DMAbs) are an approach enabling in vivo delivery of the DNA of highly potent mAbs to control infections via direct in vivo host-generated mAbs.
DMAb-ZK190 encodes the neutralizing mAb ZK190, which targets the ZIKV E protein DIII domain; when delivered in vivo, it achieved expression levels persisting > 10 weeks in mice and > 3 weeks in non-human primates and protected against Zika virus infectious challenge [ 33 ]. As discussed earlier, mAbs targeting the NS1 epitope also seem to protect against Zika virus replication in preclinical studies [ 28 ]. Chikungunya Chikungunya virus (CHIKV), which is now prevalent in 110 countries worldwide, is an RNA virus in the alphavirus genus of the family Togaviridae and is transmitted by mosquitoes. Since 2004, outbreaks of chikungunya have become more frequent and widespread, and the incidence of chikungunya in returning travelers has also increased [ 22 ]. CHIKV can cause a mild disease with fever, rash and arthralgia, but may also lead to a chronic polyarthritis, for which no cure exists, in 50% of cases [ 34 ]. Preclinical studies investigating mAbs in in vivo animal models seem promising ( Supplementary file ), for example in reducing the severity of CHIKV infection when administered to rhesus macaques [ 35 ]. In addition, another preclinical study showed that the use of CTLA4-Ig (abatacept (Orencia®), registered for rheumatoid arthritis) provided partial clinical improvement (abolished swelling and markedly reduced levels of chemokines, pro-inflammatory cytokines, and infiltrating leukocytes) in a mouse model [ 36 ]. A phase 1 trial published in 2021 reported on the first mRNA-encoded mAb (mRNA-1944), showing in vivo expression and detectable ex vivo neutralizing activity against CHIKV in a clinical trial, and this may offer a potential treatment option for CHIKV infection [ 37 ]. mRNA-1944 is a lipid nanoparticle-encapsulated messenger RNA encoding the heavy and light chains of a CHIKV-specific monoclonal neutralizing antibody; when administered intravenously, it rapidly generated neutralizing antibody levels by 12 h at all doses tested, which peaked within 48 h, with a measured mean half-life of approximately 69 days. The high antibody levels achieved 36–48 h after infusion exceeded the target protective CHIKV neutralizing antibody level of 1 μg mL −1 , which has previously been shown to be associated with protection from both symptomatic chikungunya infection and subclinical seroconversion. No major safety issues have been reported, and this mRNA technology for protein production may reduce the need to deliver the high doses of antibodies typically required for therapeutic antibodies. Further studies are needed to determine the duration of protection and the efficacy of mRNA-1944. Another phase 1 trial, studying the mAb SAR440894 in a single-dose escalation design, is currently underway (NCT04441905). Japanese encephalitis Japanese encephalitis virus (JEV) causes a vaccine-preventable febrile disease with an encephalitic picture in Asia and the western Pacific [ 38 ]. Especially during flooding, the incidence increases, and more people in endemic areas should be (re-)vaccinated. Several highly effective vaccines have been brought to market over the past decades, classified into four classes: inactivated mouse brain-derived vaccines, inactivated Vero cell-derived vaccines, live-attenuated vaccines, and live recombinant (chimeric) vaccines [ 39 ]. As the risk of infection for travelers is low, the vaccine is only given to travelers under specific circumstances (e.g., longer stays or time spent in rural areas).
For those patients who develop neurological symptoms, no specific treatment is available, and the use of antibodies would be desirable. No clinical studies on the use of mAbs have been registered. There has been only one randomized double-blind placebo-controlled phase 2 clinical trial, with IVIG containing anti-JEV neutralizing antibodies (ImmunoRel®; 400 mg/kg/day for 5 days) given to a limited number of children with suspected JE in Nepal [ 40 ]. Although the proportion of patients fully recovering (without any sequelae) was similar between the groups at discharge and slightly higher among patients in the IVIG group at follow-up, this difference was not significant in the intention-to-treat analysis. As the number of patients included was low, the efficacy of ImmunoRel® can only be established in a full phase 3 randomized placebo-controlled trial. West Nile virus West Nile virus (WNV) is a mosquito-borne flavivirus with a bird–mosquito–bird transmission cycle in which humans are a dead-end host. As WNV has spread rapidly over many continents, including Europe and North America, it is now one of the most widely distributed arboviruses worldwide [ 41 ]. Similar to JEV, in most cases infection with WNV is subclinical. Only in a small percentage does it lead to encephalitis or meningitis with a potentially devastating outcome. Furthermore, long-term sequelae after infection with WNV have been reported, such as muscle weakness, memory loss, and difficulties with activities of daily living, which could be a risk for travelers [ 41 ]. Currently, no vaccine is registered, but as the incidence is increasing, therapeutic options for when severe (neurologic) symptoms do occur would be most welcome. Not many preclinical studies have been published (Table 2 and Supplementary file ). In humans, the safety and pharmacokinetics of a single dose of the intravenously administered MGAWN1, a novel mAb targeting the E protein of WNV, have been studied in a phase 1 trial [ 42 ]. A single iv infusion of saline or of MGAWN1 at escalating doses (0.3, 1, 3, 10, or 30 mg/kg of body weight) was administered to 40 healthy volunteers (30 receiving MGAWN1; 10 receiving placebo); it was well tolerated, and no major safety concerns were reported. MGAWN1 had a half-life of 26.7 days and a maximum concentration in serum (Cmax) of 953 μg/mL, which exceeds by 28-fold the target serum level, estimated from hamster studies, that is expected to yield neutralizing activity and penetration across the blood–brain barrier. A phase 2 study with MGAWN1 was started but was terminated early due to the inability to enroll subjects (only 13 of the estimated 120 subjects were enrolled) (NCT00927953). Yellow fever Yellow fever is a primarily mosquito-transmitted disease affecting humans and non-human primates in tropical areas of Africa and South America. Due to the wildlife reservoir, eradication is almost impossible, but large-scale mass vaccination activities in Africa during the 1940s to 1960s reduced yellow fever incidence for several decades [ 43 ]. The yellow fever virus is known to cause an acute viral hemorrhagic disease with a mortality of up to 20 to 50%, especially when liver failure occurs. Imported cases in travelers are few, but devastating [ 22 , 44 ]. The live-attenuated vaccine gives a high protection rate but is contraindicated in infants, pregnant women, people aged > 60 years, and more severely immunocompromised hosts due to the risk of vaccine-induced viscerotropic and neurotropic serious adverse events [ 5 ].
Since there is no antiviral therapy or cure once this disease manifests, studies of mAb therapy are ongoing ( Supplementary file ). The first phase 1 trial studying the safety, side-effect profile, and pharmacokinetics of TY014, a fully human IgG1 anti-yellow fever virus mAb, was published in 2020 [ 3 ]. The half-life of TY014 ranged from 6.5 to 17.5 days among individual participants across the five dose cohorts (0.5–40 mg/kg), and no major safety concerns were reported. Both groups (placebo vs TY014-infused) received the YF17D live-attenuated vaccine as a challenge virus. The subjects who received TY014 (2.0 mg/kg iv) were able to curb viremia and had a reduced incidence of vaccine-induced symptoms. TY014 also prevented the induction of innate immunity and pro-inflammatory response genes, whose expression is associated with a more severe outcome in yellow fever patients. Although no real infection challenge could be performed, these findings suggest that the mAb could interrupt yellow fever pathogenesis, and further studies are necessary to examine the prophylactic and post-exposure treatment potential of TY014. Ebola virus disease Ebola virus disease (EVD) is caused by various Ebola viruses (EBOV) within the genus Ebolavirus , with the closely related Marburg virus (genus Marburgvirus ) causing a very similar disease in a comparable outbreak pattern [ 45 ]. EVD is known for its high mortality (case fatality rate of 50%) and may present as a hemorrhagic fever affecting both humans and other primates. The virus can be contracted via blood, secretions, organs and other bodily fluids of infected people, and is transmitted by wild animals such as fruit bats, porcupines and non-human primates. The risk of infection for travelers is low, as most infections occur in remote areas of sub-Saharan Africa, although during the 2014–2016 West African EVD outbreak there was a serious threat for people traveling to endemic areas (especially health care workers) of becoming exposed to the virus [ 46 ]. There are currently two vaccines licensed by the EMA and FDA, rVSV-ZEBOV (Ervebo®) and Ad26.ZEBOV/MVA-BN-Filo (Zabdeno®/Mvabea®), which both target only (Z)EBOV ( Zaire ebolavirus ), while the most recent outbreak in Uganda was caused by the Sudan strain (Sudan virus or SUDV) [ 47 ]. Monoclonal antibody treatment of EVD, for which three mAbs have been licensed by the EMA and/or FDA (one of which has already been withdrawn) (Table 1 ), is methodologically well established and technically among the most advanced in the field. However, mass application in a large-scale outbreak will remain difficult due to production logistics and cost, and due to the risk that current mAbs might not be best suited to the then-outbreak-causative ZEBOV strain, let alone an outbreak caused by a non-ZEBOV ebolavirus. The origins of ‘antibody therapy’ of Ebola in the broadest sense lie in the administration of convalescent plasma and full blood to Ebola patients. Very few anecdotal clinical data and some supporting animal data from the era prior to the 2013–2016 West African outbreak suggested that antibodies contained in convalescent full blood and plasma (all risks of transmitting infectious diseases taken into account) have the potential to prevent death and facilitate recovery of Ebola patients [ 48 – 50 ]. Further data on CPT are limited to very few cases, reviewed by Sullivan and Roback [ 51 ].
Even before the large West African EVD outbreak, more than twenty mAbs for the treatment of EVD had been identified and characterized, of which several were found promising enough to progress to testing in non-human primate models, as single antibodies or in combination [ 21 ]; in the meantime, studies amounting to several hundred have described in detail the particular structure of mAbs targeting Ebola virus glycoprotein (GP) structures in relation to the specificities of the GP target [ 52 ] ( Supplementary file ). In principle, mAbs bind to the GP, which governs virus attachment and host membrane fusion [ 20 ]. Fausther-Bovendo and Kobinger, as well as Pantaleo and colleagues, recently reviewed the pre-clinical and clinical development of Ebola antibodies in much detail [ 12 , 53 ]. In essence, the first key clinical trial, including patients recruited in all three afflicted West African countries, was a randomized controlled trial of the ZMapp mAb cocktail plus the (symptomatic treatment) standard of care versus standard of care alone during the West African outbreak. ZMapp contains three chimeric antibodies (13C6, 4G7 and 2G4), combined from the earlier experimental combinations MB-003 and ZMab [ 20 ]. In the PREVAIL II trial, deaths occurred in 8/36 (22%) of cases in the intervention group versus 13/35 (37%) in the standard-of-care-alone comparator group, with a post-hoc observed probability of superiority of the ZMapp arm of 91% and an absolute difference in mortality of −15% in frequentist analyses (CI −36 to 7); although ZMapp appeared to be beneficial, the pre-specified statistical efficacy threshold of 97.5% was not met [ 54 ]. The PREVAIL II results informed the study design of the PALM trial. In the PALM trial, conducted in the East Kivu outbreak which began in 2018, 681 patients were randomly assigned in a 1:1:1:1 ratio to four investigational regimens: ZMapp as control, remdesivir, MAb114 as a single mAb, and the REGN-EB3 triple-mAb cocktail consisting of the three human mAbs REGN3470, -3479 and -3471. At day 28, the percentage of patients who had died was lower in the MAb114 group and in the REGN-EB3 group than in the ZMapp group, which led to the withdrawal of ZMapp as standard treatment [ 55 ]. The PALM trial results are to date considered decisive with regard to the current standard of care in (Z)EBOV outbreaks [ 56 ]; however, in the most recent SUDV outbreak in Uganda, the REGN-EB3 cocktail (Inmazeb®) as well as MAb114 (Ebanga®) were naturally ineffective. Clinical studies looking at other mAbs which could be used for the emergency prevention of Ebola virus disease are currently registered but have not been published (NCT03428347/NCT04717830). Of note, mAbs (MAb114 or REGN-EB3) administered to high- and intermediate-risk contacts of EVD patients appear to be promising candidates to protect these contacts [ 57 ]. Regarding the closely related Marburg virus disease, with an increasing number of very small outbreaks that have up to now usually come to an early end, none of the candidate mAbs (Table 2 ) has yet been put to the test in the field. Hepatitis B Hepatitis B virus (HBV) is currently the main cause of chronic hepatitis worldwide and is most commonly transmitted vertically (from mother to child during birth and delivery), through contact with blood or other body fluids during sex with an infected partner, or through unsafe injections or exposure to sharp instruments.
Although the vaccine has a 100% protection rate, most people are not aware that they carry HBV and could infect unvaccinated people. The disease can be suppressed with antiviral therapy, but not cured [ 58 ]. When left untreated, chronic HBV infection leads to end-stage liver cirrhosis and/or hepatocellular carcinoma (HCC). For pre- and post-exposure applications, several immunoglobulin preparations targeting HBsAg (anti-HBs/HBIG) may be used (Table 1 ), and research suggests they could also be used for the treatment of HBV [ 59 ]. Many preclinical studies of mAbs targeting different epitopes of HBV in several mouse models have been published (Table 2 and Supplementary file ) [ 60 ]. Multiple clinical phase 1 studies of mAbs targeting HBV have been registered with clinicaltrials.gov (HH-006 (NCT05275465); HH-003 (NCT05542979); HepB mAb19 (NCT05856890); IMC-I109V (NCT05867056); HepeX-B (NCT00228592)), although only one study has been published in the literature [ 61 ]. Lenvervimab is a recombinant human immunoglobulin used for the treatment of chronic HBV. HBV patients with persistently positive serum HBsAg for at least six months were recruited for this open-label, dose-escalation phase 1 trial, in which patients were given a single or weekly intravenous infusion of lenvervimab (doses ranging from 80,000 to 240,000 IU) for four weeks. The primary endpoint, a decrease in HBsAg to less than the limit of quantitation without any rebound within one month, was reached in only two out of nine patients (22.2%) in the highest-dose group. No safety issues or dose-related toxicity were reported. The authors suggest that this mAb might, in combination with a nucleoside analogue, lead to sustained clearance of HBsAg in patients with chronic HBV infection, and that it is less allogenic and less costly than plasma-derived HBIG. As mentioned in the article, a phase 2 study is underway, which should lead to a better understanding of how lenvervimab works in combination with antivirals. Other phase 2 trials are reported on clinicaltrials.gov but have not yet been published (envafolimab (NCT0446589), FG-3019 (NCT01217632), cetrelimab (NCT05242445), HLX-10 (NCT04133259), HH003 (NCT05861674/NCT05839639/NCT05734807/NCT05674448)). Hepatitis C Hepatitis C virus (HCV) is a blood-borne virus with a high global burden; most infections occur through exposure to infected blood via unsafe injection practices, unscreened blood transfusions, injection drug use, and sexual practices. In travel clinics, chronic disease is most often diagnosed in migrants during routine screening activities rather than during acute illness episodes [ 62 ]. Although HCV can, if untreated, lead to liver fibrosis and end-stage liver cirrhosis, in contrast to hepatitis B there is a cure. A sustained virological response (SVR) is seen in 98% of patients with chronic HCV when treated with an oral direct-acting antiviral (DAA) combination regimen for 8–12 weeks [ 63 ]. Unfortunately, prevention of disease is not possible, as there is no vaccine, and the antivirals have not been tested for use as PrEP. As with HBV, there is considerable interest in studies of mAbs, for HCV specifically targeting the HCV envelope, for cure and prevention of disease (Table 2 ). The challenge is to develop mAbs that are either at least as effective as the DAAs but with fewer adverse effects, or that, when combined with antiviral drugs, can circumvent long-term use of these drugs, thereby reducing their side effects and augmenting their antiviral effect.
Multiple phase 1 and 2 trials of mAbs targeting HCV are underway, of which some have been published and some are registered on clinicaltrials.gov but had not been published at the time of writing (bavituximab (NCT00128271/NCT00343525), XTL6865 (NCT00300807), CT-011 (NCT00962936), and an anti-IL2R antibody). The mAb MBL-HCV1, targeting the HCV E2 glycoprotein, significantly delayed the median time to viral rebound in patients with chronic HCV genotype 1a undergoing liver transplantation compared to placebo (18.7 days vs. 2.4 days, p < 0.001) in a double-blind, placebo-controlled trial [ 64 ]. Although monotherapy with MBL-HCV1 did not prevent allograft infection, as antibody-treated subjects had resistance-associated variants at the time of viral rebound, further studies in combination with DAAs are underway. The antiviral potential of another mAb, BMS-936558 (MDX-1106), a fully human anti-PD-1 monoclonal immunoglobulin G4 that blocks ligand binding, was tested in a placebo-controlled single ascending dose study in patients with chronic HCV [ 65 ]. Persistent viremia, as seen in chronic hepatitis C patients, has been associated with the upregulation of PD-1 expression on virus-specific CD8 + T cells. In this proof-of-concept study, a single dose of BMS-936558 was generally well tolerated and led to HCV RNA reductions ≥ 0.5 log10 IU/mL in five of 45 (11.1%) patients, and suppression of HCV replication persisted for more than eight weeks in most of these patients. In a phase 2a clinical trial, the benefits of an orally administered anti-CD3 mAb have been studied [ 66 ]. Orally administered anti-CD3 antibody exerts its effect mainly at the level of the gut-associated lymphoid tissue and mesenteric lymph nodes, and exerts a systemic immune-modulatory effect via promotion of specific T cells. In this placebo-controlled trial, a 30-day course of oral anti-CD3 mAb immunotherapy was safe and well tolerated, and was associated with improvement in the hepatic and immunologic parameters seen in patients with chronic HCV, together with a reduction of HCV viral load. Particularly chronic HCV patients who are non-responders to antiviral therapy could potentially benefit from immune enhancement in the gut. Another mAb studied for its antiviral effect via immunomodulation is tremelimumab, a fully human IgG2 mAb that blocks the binding of cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) and was registered by the FDA for adult patients with metastatic non-small cell lung cancer in 2022. In a phase 4 study, tremelimumab was administered at a dose of 15 mg/kg on day one of every 90-day cycle to patients with inoperable HCC and chronic HCV [ 67 ]. The therapy was well tolerated and showed a reduction in both tumor load and HCV viral load. The authors suggest that the combination of this mAb with DAAs is worth exploring in patients with interferon-resistant HCV infection. Malaria Malaria is a preventable and curable but potentially life-threatening vector-borne disease caused by five Plasmodium (P.) species that cause disease in humans. If left untreated, or treated late, P. falciparum in particular leads to life-threatening disease, especially in the non-immune. Travelers going to high- and middle-endemic areas are advised to take malaria chemoprophylaxis [ 68 ]. Currently, only one vaccine, RTS,S/AS01, has been brought to the international market, but more are in development with great expectations [ 69 , 70 ].
In April 2023, the Ghanaian Food and Drugs Authority approved the R21/Matrix-M vaccine, which has been shown to have a higher efficacy than the RTS,S/AS01 vaccine. Currently, the WHO recommends the RTS,S/AS01 malaria vaccine only for children living in regions with moderate to high P. falciparum malaria transmission. This vaccine is not usable for travelers to endemic areas due to its low efficacy. Single-dose mAbs used as malaria prophylaxis or as treatment are currently being extensively studied, and most of the research is done on P. falciparum (Table 2 and Supplementary file ). Recently published articles on phase 1 and 2 trials of mAbs targeting P. falciparum are summarized below. The most abundant antigen on the sporozoite surface is the P. falciparum circumsporozoite protein (PfCSP), which is required for attachment to host hepatocytes. In 2021, a first-in-human, open-label, phase 1 dose-escalation clinical trial was published by Gaudinski et al ., showing promising results for a mAb developed to act directly against P f CSP. This human mAb (CIS43) was isolated from a human subject immunized with one of the Sanaria Inc. whole-sporozoite vaccines [ 2 ]. The binding specificity of CIS43 for the NPDP epitope, an important antigen target, appeared very effective in preclinical trials [ 71 ]. Furthermore, the mAb was engineered to extend its half-life from three weeks to up to 36 weeks, providing longer-lasting immunity. Although the study suffered from the COVID-19 pandemic and consequently underwent a protocol change, the end results show that, among adults who had never had malaria infection or vaccination, single-dose administration of the long-acting mAb CIS43LS at the higher doses (20 mg/kg or 40 mg/kg i.v.) prevented malaria after controlled infection and was well tolerated. Limitations were the small sample size and the absence of breakthrough infections; therefore, the protective threshold of CIS43LS could not be defined. Most recently, the third part of the phase 1 trial was published, reporting the ability of CIS43LS to confer protection at lower doses administered intravenously (1 mg/kg, 5 mg/kg or 10 mg/kg) or by the subcutaneous route (5 mg/kg and 10 mg/kg). This study concluded that a single dose of CIS43LS at 5–10 mg/kg, administered subcutaneously or intravenously, provides high-level protection against controlled human malaria infection approximately 8 weeks (48–56 days) after antibody administration [ 72 ]. Studying the CIS43LS mAb in a malaria-endemic area would shed further light on the use of these monoclonal antibodies in travelers as a substitute for malaria chemoprophylaxis; such a study has been published most recently [ 73 ]. In this randomized, dose-escalation study, healthy adults in a malaria-endemic area were given a single intravenous dose of CIS43LS (10 or 40 mg/kg) or placebo over a six-month malaria season in Mali. Every two weeks, thick smear examination was performed to assess the primary efficacy endpoint. At six months, the efficacy of 40 mg of CIS43LS per kg body weight as compared with placebo was 88.2%, and the efficacy of 10 mg of CIS43LS per kg body weight as compared with placebo was 75.0%. Although participants had a higher risk of moderate headache, CIS43LS was proven to be protective against P. falciparum infection during a 6-month malaria season in Mali and could be considered an interesting alternative for travelers.
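As an aside, efficacy percentages like those quoted above follow from comparing infection risk between trial arms (efficacy = 1 − risk ratio; the trial itself may use time-to-event methods, so this is a simplified sketch). The counts below are hypothetical, chosen only to reproduce numbers of the same magnitude; the trial's raw counts are not reported in this review.

```python
# Illustration of how protective efficacy figures such as the 88.2% and
# 75.0% quoted above are derived: efficacy = 1 - (risk in mAb arm /
# risk in placebo arm). The counts below are HYPOTHETICAL, chosen only
# to reproduce numbers of this magnitude; the actual trial counts are
# not reported in this review.
def efficacy(cases_tx, n_tx, cases_pbo, n_pbo):
    """Protective efficacy from attack rates in two trial arms."""
    risk_tx = cases_tx / n_tx
    risk_pbo = cases_pbo / n_pbo
    return 1.0 - risk_tx / risk_pbo

# hypothetical arms of 110 participants each
print(round(efficacy(7, 110, 60, 110), 3))   # ~0.883 -> ~88% efficacy
print(round(efficacy(15, 110, 60, 110), 3))  # 0.75  -> 75% efficacy
```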
Another potential mAb, L9, targeting a different conserved site in the junctional region of PfCSP, appears to be two to three times more potent than CIS43. A phase 1 trial assessing the safety and pharmacokinetics of L9LS in healthy adults was recently published [74]. Both subcutaneous and intravenous administration were tested at different doses (1, 5, or 20 mg per kg of body weight), followed by a controlled human malaria infection (P. falciparum 3D7 strain). Compared to the CIS43 mAb, the half-life extension was similar, with an estimated 56 days. Both the 5 and 20 mg/kg doses, administered intravenously, yielded 100% protection in the human malaria challenge model. To further study its potential, three phase 2 trials (NCT05304611/NCT05400655/NCT05816330) are currently underway, studying L9LS in children and adults in Mali and in infants in Kenya [74]. Lastly, a phase 1 trial of the mAb TB31F, which binds to the gametocyte surface protein Pfs48/45 and inhibits fertilization, thereby preventing further parasite development in the mosquito midgut and onward transmission, was recently published [75]. Malaria-naïve participants were administered a single intravenous dose (ranging from 0.1 to 10 mg/kg) or a subcutaneous dose of 100 mg TB31F and were monitored for 84 days, primarily for adverse events. Further analyses included TB31F serum concentrations and the transmission-reducing activity (TRA) of participant sera. Administration of TB31F was well tolerated, did not lead to serious adverse events, and appeared to be a highly potent mAb capable of completely blocking transmission of P. falciparum parasites from humans to mosquitoes for a duration of 160 days. The latter means it could potentially block the transmission cycle for a complete malaria season. Currently, no mAbs targeting the other malaria species pathogenic to man (P. ovale subspecies, P. knowlesi, P. vivax, or P. malariae) have been tested in human clinical trials. Rabies Rabies, caused by a neurotropic Lyssavirus, has a case-fatality rate of almost 100%. In vaccinated individuals, immunological memory is reactivated within seven days after a single intramuscular booster immunization, even when administered 10–24 years after PrEP [76]. Once an unvaccinated human is bitten by a proven or suspected rabid animal, PEP containing HRIG must be administered, preferably within 48 h [77, 78]. However, HRIG is expensive and complex to produce, and a synthetically derived alternative would be ideal. Most cross-reactive mAbs developed for neutralizing the rabies virus target the outer viral glycoprotein. The first mAbs used in humans were a cocktail of two, CR57 and CR4098 (together called CL184), which were shown to be broadly neutralizing across many rabies virus isolates during preclinical research and were also tested in phase 1 and 2 trials. Although the safety and the presence of rabies virus neutralizing antibodies in these studies seemed hopeful, the pharmaceutical company, for unknown reasons, decided to withdraw the mAb from further development [77]. The first mAb registered for use in humans, in 2016, was well tolerated and was directed against the rabies virus glycoprotein antigenic site III (SII RMab or Rabishield) [79]. SII RMab is currently licensed in India and was tested in a phase 2/3 trial, where it was demonstrated to be non-inferior to standard HRIG in rabies-exposed individuals in India [80].
A further phase 4, multicenter, randomized, controlled study of its safety and immunogenicity in patients with potential rabies virus exposure is underway. The only concern raised for this mAb was that it did not neutralize all rabies variants, and therefore the WHO has marked a slight risk for its use in the Americas region [77]. The second mAb, docaravimab/miromavimab (Twinrab™ or RabiMabs™), which can be used as PEP, received orphan status from the FDA and was approved in 2019 (Table 1). Recently, a phase 2/3 trial was published that demonstrated the non-inferiority of 40 IU/kg Twinrab™ in safety and efficacy to standard 20 IU/kg HRIG in rabies virus-exposed patients in India [81]. Three other mAbs are currently at the phase 2/3 stage, namely SYN023, ormutivimab, and GR1801. SYN023 consists of two humanized mAbs, CTB011 and CTB012, and was given to subjects in a phase 2 study as a single dose of 0.3 mg/kg in combination with five vaccine doses [82]. In this study, SYN023 provided adequate antibody coverage, and treatment-related adverse events were comparable to those with RIG. A phase 3 study has recently been completed (NCT04644484), but results had not been published at the time of writing. Ormutivimab, a mAb of the IgG1 subtype, is the third recombinant human anti-rabies mAb marketed and has been approved for rabies PEP in China at a dose of 20 IU/kg. In a phase 2b trial conducted in China, healthy volunteers received 20 IU/kg or 40 IU/kg ormutivimab, or 20 IU/kg HRIG, in combination with vaccination [83]. The combination of ormutivimab and rabies vaccine induced higher neutralizing antibody levels in the early stage and interfered less with the vaccine response. The lower dosage seemed equally effective with the fewest adverse events; therefore, the phase 3 confirmatory clinical study will further explore the efficacy and safety of 20 IU/kg ormutivimab injection combined with rabies vaccine in persons with class III exposure to suspected rabid animals. GR1801, a mAb indicated for PEP in patients with WHO category 3 rabies exposure, has entered a phase 3 clinical trial and is currently recruiting (NCT05846568). The marketed mAbs for rabies are given as PEP, but none has been studied as a potential cure once symptoms manifest. However, published preclinical data on mAbs as a cure for rabies in mice do show potential [8]. Trypanosomiasis and schistosomiasis Chagas disease, caused by Trypanosoma (T.) cruzi, and schistosomiasis, caused by different Schistosoma (S.) spp., are both parasitic infections diagnosed in migrants and travelers presenting to travel clinics for screening activities [22]. Chagas disease is especially difficult to treat once in the chronic stage and can cause severe cardiomyopathy and death. Schistosomiasis can also persist for years and can lead to an increased risk of liver fibrosis or bladder cancer. For both diseases, only preclinical studies on mAbs have been published. For Chagas disease, mAbs targeting TNF, such as infliximab, seem to have a positive impact on the severity of cardiac disease in animals infected with T. cruzi [84]. Bevacizumab, a monoclonal antibody that functions as an angiogenesis inhibitor, showed a regression in vascular activity and microvascular density in mice infected with S. mansoni [85]. Currently, none of these mAbs have entered the clinical phase. Tuberculosis Tuberculosis, caused by Mycobacterium tuberculosis, is the most common bacterial infection seen in migrants [22].
For tuberculosis, especially targeting multidrug-resistant strains, there are numerous drugs in the (pre)clinical pipeline (newtbdrugs.org), although monoclonal antibodies are still quite scarce. In 2012, a phase 2 study of pascolizumab, an anti-IL-4 antibody, was registered (NCT01638520) to examine its safety and efficacy in patients receiving standard therapy for pulmonary tuberculosis, but its status is currently unknown and no subsequent publication has appeared in the literature. Influenza and COVID-19 Both influenza and COVID-19 are respiratory viral diseases that can be contracted seasonally without a travel history. Although influenza was among the top-10 diseases seen in returning travelers presenting with symptoms to European travel clinics between 1998 and 2018, it is endemic in almost all countries worldwide. For both diseases, new vaccines based on current strains are available yearly and are administered to people at higher risk of developing severe disease, such as the elderly or immunocompromised. Well-written reviews on mAbs targeting influenza virus and SARS-CoV-2 have recently been published in the literature, and these viruses were therefore left out of this scoping review [86, 87]. Other infections The only licensed mAbs targeting bacteria causing tropical infections are raxibacumab and obiltoxaximab, used as post-exposure prophylaxis or treatment for inhalation anthrax (Table 1). (Pre)clinical studies on mAbs directed against (parts of) bacteria causing tropical infectious diseases are scarce, presumably due to effective antibiotic treatment with high cure rates against diseases such as leptospirosis, typhoid fever, and rickettsial disease (Table 2), although multidrug-resistant bacteria causing these diseases are an increasing threat to global health [88]. Monoclonal antibodies targeting tropical bacterial infections with a high mortality rate despite antibiotic treatment, such as melioidosis, would be desirable and constitute a potential area of further research, but only preclinical studies have been reported (Supplementary file). Most parasitic infections caused by nematodes such as Strongyloides stercoralis, although having a high global burden, are not yet being targeted with mAbs in the literature (Table 2). On the other hand, mAbs targeting both cutaneous and visceral leishmaniasis caused by Leishmania parasites have been studied preclinically over the last years (Supplementary file), and there is even a mAb in the clinical stage targeting IL-10 (anti-IL-10, SCH708980, NCT01437020), which, in combination with standard therapy, may help to prevent the immune system from becoming suppressed and the disease from worsening. Supplementary Information
Abbreviations Monoclonal antibodies; Coronavirus disease 2019; Ebola virus disease; Post-exposure prophylaxis; Hepatitis A virus; Hepatitis B virus; Human rabies immunoglobulin; Convalescent plasma therapy; World Health Organization; Immunoglobulins; Fragment antigen-binding domains; Crystallizable fragment; Fcγ receptors; Respiratory syncytial virus; Antibody-dependent enhancement; European Medicines Agency; U.S. Food and Drug Administration; Polymerase chain reaction; Salmonella spp.; Escherichia spp.; Vibrio; Clostridioides; Pre-exposure prophylaxis; Human immunodeficiency virus type 1; PubMed identifier; Zika virus disease; Intravenous; DNA-encoded monoclonal antibodies; Chikungunya virus; Ribonucleic acid; Japanese encephalitis virus; Intravenous immunoglobulin; West Nile virus; Ebola viruses; Glycoprotein; Sudan strain; Hepatocellular carcinoma; Hepatitis C virus; Sustained virological response; Direct-acting antiviral agent; Cytotoxic T-lymphocyte-associated antigen; Plasmodium; P. falciparum circumsporozoite protein; Transmission-reducing activity; Trypanosoma spp.; Schistosoma spp.; Interleukin; Low- and middle-income countries; Disability-adjusted life-years; Lower respiratory tract infection; Adenosine deaminase severe combined immunodeficiency; Diphtheria antitoxin; Heptavalent botulism antitoxin; Orphan drug status; Crimean-Congo hemorrhagic fever. Acknowledgements This work was presented in part at the ASTMH 2022 Meeting in Seattle, WA, USA; Abstract #1868 (poster #1044). Authors' contributions H.K. de Jong conceived the paper, conducted the literature search, and wrote the largest part of the first draft of the paper. M.P. Grobusch contributed to the writing of the first draft; both authors contributed to, and endorsed, the final version of the paper as submitted. The authors declare they have no financial conflicts of interest in any of the products described here. Funding This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. Availability of data and materials All data generated or analyzed during this study are included in this published article [and its supplementary information files]. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
Trop Dis Travel Med Vaccines. 2024 Jan 15; 10:2
oa_package/19/53/PMC10789029.tar.gz
PMC10789030
0
Background In the literature, human microbiome studies have received increasing attention. This domain is considered a potential source for the diagnosis and development of new medical treatments [1]. Several studies aim to identify variations in the gut microbiome and potential biomarkers to diagnose diseases and disorders such as inflammatory bowel disease (IBD) [2-5], type 2 diabetes (T2D) [6-9], autism spectrum disorder (ASD) [10-13], and some types of cancer [14-17], among others. Microbiome studies have also been used to develop medical treatments and to analyze the responses of patients [18-21]. Microbiome analysis consists of sequencing the gene encoding 16S ribosomal RNA (rRNA) and comparing it with known bacterial sequence databases to identify bacterial members of a microbial population [22]. Several software tools and pipelines are available for this process, such as QIIME2 [23], VSEARCH [24], DADA2 [25], Trimmomatic [26], mothur [27], and FLASH [28]. These software tools allow performing quality analysis of 16S rRNA raw data (filtering, trimming, chimera removal, sequence merging, taxonomy assignment) to generate Operational Taxonomic Units (OTUs) or Amplicon Sequence Variants (ASVs), and performing statistical analysis on the resulting bacterial taxonomy and abundance. With the advancements in omics technologies and AI, research focused on the search for potential biomarkers in the human microbiome using machine learning tools has increased, with taxonomy-based feature selection being one of the most common approaches [29]. Nowadays, it is common to find research works that aim to find relevant taxonomy-based features and use them as potential biomarkers in medical conditions such as ASD [30, 31], cardiovascular disease [32], T2D [33, 34], IBD [35-38], and Parkinson's disease [39], and also to analyze the effect of medical treatments [40, 41]. Despite the promising results, several issues can still be found in these studies. Datasets: high-dimensional data with a small number of samples are common, usually because of the costs (time and money) associated with data collection from human participants. This makes machine learning models prone to overfitting and biased performance [42]. Inconsistent results: most studies use Operational Taxonomic Units (OTUs) in their experiments, and the limitations of OTUs and their inability to be used in independent studies [43, 44] may be the reason for inconsistent results [29, 45, 46]. Reproducibility: several factors, such as the lack of uniform processing methodologies, incomplete or erroneous descriptions of the simulations, incomplete or erroneous dataset documentation, missing software version information, or unavailable code, are responsible for a lack of reproducibility in microbiome research [45, 47]. The main objective of this work is to address the lack of reproducibility by providing a methodology, considering more than one dataset, that combines a DADA2-based pipeline for 16S rRNA sequence processing with the Recursive Ensemble Feature Selection (REFS) algorithm, previously used in [48]. This methodology also provides an approach to deal with high-dimensional data with a small number of samples, inconsistent results, and the lack of uniform processing and analysis methodologies.
The effectiveness of the proposed methodology was tested by comparing its results with those of different feature selection methods. Three experiments were performed analyzing microbiome data related to Inflammatory Bowel Disease (IBD), Autism Spectrum Disorder (ASD), and Type 2 Diabetes (T2D). The results of these experiments provide valuable insights into the performance of the proposed methodology and its potential application in microbiome research. Further research is needed to confirm these findings and to explore their potential clinical applications.
Methods Methodology The proposed methodology consists of four phases: (1) dataset selection criteria, (2) raw data processing, (3) feature selection, and (4) testing. In contrast to other methodologies, such as pooling analysis [29, 51], we do not combine two or more datasets to produce a single one to be analyzed. The proposed methodology is oriented to work with Amplicon Sequence Variants (ASVs) because they can be used in independent studies [43, 44]. Using ASVs provides a possible solution to avoid inconsistent results and, at the same time, can help achieve external validation in separate datasets [29]. For external validation, we recommend working with at least three datasets: one for discovery and the rest for testing. We address the issues of overfitting and biased performance associated with the datasets by implementing a nested cross-validation scheme [42]. To provide a reproducible approach to microbiome research, we document the software versions and a description of each phase, and the necessary code/scripts to perform the experiments are available on GitHub (https://github.com/steppenwolf0/MicrobiomeREFS). An overview of the proposed methodology is illustrated in Fig. 4. The dataset selection criteria phase involves the selection, download, and extraction of relevant information from metadata (e.g., sample labels). The datasets must meet the following conditions: the databases should be 16S ribosomal RNA (rRNA) amplicon sequencing data and belong to the same domain, such as a disease, disorder, or medication; there should be a minimum of two groups, such as a control group and a case group; each group should have a minimum of 10 samples; the documentation, whether metadata or a scientific paper, should clearly specify which group each sample belongs to; and the datasets should have the same source of samples, such as tissue, feces, or mucosa. The raw data processing phase involves performing the amplicon workflow on the raw data of the selected datasets to generate ASVs (features). We selected the DADA2 pipeline [25] due to its clear documentation. The DADA2 open-source R package allows implementing the full amplicon workflow on 16S rRNA sequences: filtering, dereplication, sample inference, chimera identification, and merging of paired-end reads [25]. We developed a DADA2-based script in R version 4.1.2; the code editor was RStudio version 2022.07.2 build 576, the DADA2 library version was 1.22.0, the DECIPHER library version was 2.22.0, the BiocManager library version was 1.30.19, and the taxonomy assignment was performed with the SILVA_SSU_r138_2019 reference database. The feature selection phase aims to identify discriminative features. Since we work with sequences as features instead of taxa (each sequence is unique within its dataset), the selected features must also be present in the testing datasets, so one dataset must be selected for discovery. The eligibility criterion for the discovery dataset is that it contains the shortest sequence length after the raw data processing phase. Once the discovery dataset is selected, two processes have to be performed. The first is the Recursive Ensemble Feature Selection (REFS), an algorithm for identifying biomarkers by determining the features that are most effective in differentiating between groups in a dataset, achieving the highest accuracy with the fewest features [48, 57-62].
The ensemble is composed of 8 classifiers from the scikit-learn toolbox [63]: Stochastic Gradient Descent (SGD) on linear models, Support Vector Machine classifier (SVC), Gradient Boosting, Random Forest, Logistic Regression, Passive Aggressive classifier, Ridge Classifier, and Bagging. To minimize overfitting and biased performance, REFS employs a nested approach within a 10-fold cross-validation scheme, a proven solution to yield more accurate and unbiased results even with a small sample size [42]. REFS was built on Python version 3.10.8 using the scikit-learn toolkit version 1.1.3. The second process is validation: to minimize selection bias, we developed a validation module with 5 different classifiers from the scikit-learn toolkit [63]: AdaBoost, Extra Trees, KNeighbors, Multilayer Perceptron (MLP), and LassoCV. This validation module also employs a nested approach within a 10-fold cross-validation scheme. The module must be executed twice: (1) using the sample labels, the selected features, and the corresponding abundance, and (2) using the sample labels, all features, and the corresponding abundance. The 5 classifiers provide an average value for the area under the curve (AUC), which evaluates the effectiveness of a discriminant test; values approaching 1.0 indicate excellent performance [50]. These processes should be executed at least 10 times concurrently, to compensate for the stochasticity of some of the classifiers used in the study (e.g., Random Forest) and the internal cross-validation process. The testing phase involves testing the features selected by REFS in a minimum of two separate datasets. The selected features must be searched for in each testing dataset. Features can be repeated in the testing datasets, so the following process must be applied: if feature x is present n times in the testing dataset, the final abundance of feature x is the sum of the abundances of those n occurrences. To validate the features found in each testing dataset, the validation module must be executed once on each testing dataset using as input the sample labels, the found features, and the corresponding abundance. The AUC is employed as a measure of diagnostic accuracy. Additionally, we conducted a comparative analysis with two different feature selection methods. K-Best with F-score: this selection method is applied to the discovery dataset instead of REFS. We used the SelectKBest algorithm from the scikit-learn toolbox, which selects the K top-scoring features based on a user-defined metric, here the F-score [63]. The value assigned to K is determined by the number of features obtained using REFS; for instance, if REFS selected 10 features, the value of K would be set to 10. 10-time random selection: this method consists of randomly selecting a given number of features from all features in each testing dataset. This number is determined by the number of features found in each testing dataset; for instance, if 8 out of 10 features selected using REFS were identified in the testing dataset, then 8 features are randomly selected each time. The AUC provided by the validation module is used as a metric for comparing the results of the proposed methodology with those of these two feature selection methods. Additionally, we use the Matthews Correlation Coefficient (MCC) [64] as a metric to evaluate the performance of the methodology as well as for comparison with the other feature selection methods.
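To make the REFS loop above concrete, the following is a minimal sketch in Python with scikit-learn, the toolkit named in this section. It is a reconstruction from the description here, not the authors' released code (which is on the GitHub repository cited above): for brevity the ensemble is reduced to five of the eight listed classifiers, plain 10-fold cross-validation stands in for the full nested scheme, and X (a samples-by-ASVs abundance matrix), y (group labels), and the drop fraction are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.svm import LinearSVC
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier

def refs_round(X, y, feature_names, drop_frac=0.1, seed=0):
    """One REFS-style elimination round: score the current feature set with
    an ensemble under 10-fold CV, pool per-feature weights across models,
    and keep only the strongest features for the next round."""
    models = [
        SGDClassifier(random_state=seed),
        LinearSVC(random_state=seed),             # stand-in for the listed SVC
        GradientBoostingClassifier(random_state=seed),
        RandomForestClassifier(random_state=seed),
        LogisticRegression(max_iter=1000, random_state=seed),
    ]
    accs, votes = [], np.zeros(X.shape[1])
    for m in models:
        accs.append(cross_val_score(m, X, y, cv=10, scoring="accuracy").mean())
        m.fit(X, y)
        w = getattr(m, "feature_importances_", None)  # tree ensembles
        if w is None:
            w = np.abs(m.coef_).ravel()               # linear models
        votes += w / w.sum()                          # normalized per-model vote
    keep = int(round(X.shape[1] * (1 - drop_frac)))
    order = np.argsort(votes)[::-1][:keep]            # strongest features first
    return float(np.mean(accs)), X[:, order], [feature_names[i] for i in order]
```

Iterating refs_round and retaining the feature count at which the mean ensemble accuracy peaks would mirror the accuracy-versus-number-of-features behavior reported in the Results below.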
Datasets We used a total of nine datasets, three for each experiment: Autism Spectrum Disorder (ASD), Inflammatory Bowel Disease (IBD), and Type 2 Diabetes (T2D); see Fig. 5. Each dataset adhered to the dataset selection criteria phase. We considered only two groups within each dataset: controls and cases. The control group is made up of healthy people or people in remission; the case group is made up of people diagnosed with the medical condition. The datasets related to ASD are: (1) David et al. [49], with 117 samples, of which 57 belong to the control group and 60 to the case group; (2) PRJNA589343 [65], downloaded from the NCBI public repository, with 127 samples, of which 50 belong to the control group and 77 to the case group; and (3) PRJNA578223 [66], downloaded from the NCBI public repository, with 96 samples, of which 48 belong to the control group and 48 to the case group. The datasets related to IBD are: (1) PRJEB21504 [67], downloaded from the NCBI public repository, with 95 samples, of which 66 belong to the control group and 29 to the case group; (2) DRA006094 [68], downloaded from the NCBI public repository, with 70 samples, of which 15 belong to the control group and 55 to the case group; and (3) PRJNA684584 [69], downloaded from the NCBI public repository, with 103 samples, of which 45 belong to the control group and 58 to the case group. The datasets related to T2D are: (1) PRJNA3259311 [70], downloaded from the NCBI public repository, with 112 samples, of which 84 belong to the control group and 28 to the case group; (2) PRJNA5545355 [71], downloaded from the NCBI public repository, with 60 samples, of which 20 belong to the control group and 40 to the case group; and (3) PRJEB53017 [72], downloaded from the NCBI public repository, with 94 samples, of which 46 belong to the control group and 48 to the case group.
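Before the per-disease results, a small sketch of the testing phase described in the Methods may help: the discovery-selected ASV sequences are looked up in a testing dataset, repeated occurrences of a sequence have their abundances summed, and the validation module reports the mean AUC (with MCC as a secondary metric). The pandas DataFrame layout (samples as rows, ASV sequences as column labels), the 0/1 encoding of y, and the reduced three-classifier module are illustrative assumptions rather than the authors' implementation.

```python
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier, ExtraTreesClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score, matthews_corrcoef

def match_features(test_abund: pd.DataFrame, selected_seqs: list) -> pd.DataFrame:
    """Keep the discovery-selected sequences present in the testing dataset;
    per the Methods, a sequence found n times gets the sum of its n abundances."""
    found = [s for s in selected_seqs if s in test_abund.columns]
    return test_abund[found].T.groupby(level=0).sum().T  # sum duplicate columns

def validate(X, y, seed=0):
    """Mean AUC/MCC over a subset of the validation module's classifiers
    (MLP and LassoCV omitted for brevity); y is assumed encoded as 0/1."""
    clfs = [AdaBoostClassifier(random_state=seed),
            ExtraTreesClassifier(random_state=seed),
            KNeighborsClassifier()]
    aucs, mccs = [], []
    for clf in clfs:
        prob = cross_val_predict(clf, X, y, cv=10, method="predict_proba")[:, 1]
        aucs.append(roc_auc_score(y, prob))
        mccs.append(matthews_corrcoef(y, (prob >= 0.5).astype(int)))
    return sum(aucs) / len(aucs), sum(mccs) / len(mccs)
```

For the K-Best baseline described in the Methods, SelectKBest(f_classif, k=...) from sklearn.feature_selection would take the place of REFS on the discovery dataset, and the random baseline simply samples column indices.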
Results Autism spectrum disorder (ASD) Raw data processing The trimming parameters used for the DADA2-based script filtering process and the number of Amplicon Sequence Variants (ASVs) generated for each dataset were: David et al. – parameter trimLeft = 10, 2040 ASVs generated; PRJNA589343 – parameter truncLen = c(250), 2040 ASVs generated; PRJNA578223 – parameter truncLen = c(290,220), 18,758 ASVs generated. Feature selection phase We selected David et al. [49] for discovery following the eligibility criteria. Applying the Recursive Ensemble Feature Selection (REFS) algorithm yielded 26 out of the 2040 features; that is, REFS achieved its highest accuracy (> 0.8) with 26 features, Fig. 1a. The result of the validation module for the selected 26 features was an average AUC of 0.816, which is considered "very good" diagnostic accuracy [50]. The Multilayer Perceptron (MLP) algorithm had the best performance, Fig. 1b. In comparison, we applied the same validation module to the complete set of 2040 features, and the resulting average AUC was 0.41. For feature selection using K-Best, with k = 26, the average AUC was 0.706. The detailed validation results are presented in Table 1. Using the Matthews correlation coefficient (MCC) as an additional metric to evaluate the performance of the methodology, REFS achieved a better average MCC (0.649) compared with the other feature selection methods; see Table 1. Testing phase We searched for the 26 features selected by REFS in the testing datasets; the result was 22 out of 26 for PRJNA589343 and 20 out of 26 for PRJNA578223. We applied the validation module to the features found in both testing datasets. For PRJNA589343 we obtained an average AUC of 0.748, and for PRJNA578223 we obtained an average AUC of 0.74. Both average AUCs correspond to "good" diagnostic accuracy [50]. In both cases, the classifier with the best performance was Extra Trees, Fig. 1c, d. For the comparative analysis, we searched for the 26 features selected by K-Best on each testing dataset; the result was 20 out of 26 for PRJNA589343 and 17 out of 26 for PRJNA578223. We applied the validation module to the features found in both testing datasets. The resulting average AUCs were 0.704 for PRJNA589343 and 0.678 for PRJNA578223. For the 10-time random selection, the resulting average AUCs were 0.6278 for PRJNA589343 and 0.6352 for PRJNA578223. The detailed validation results are presented in Table 1. Using the MCC as an additional metric for this phase, REFS achieved better performance in both testing datasets, with average MCC values of 0.4794 for PRJNA589343 and 0.5071 for PRJNA578223; see Table 1. Inflammatory bowel disease (IBD) Raw data processing The trimming parameters used for the DADA2-based script filtering process and the number of ASVs generated for each dataset were: PRJEB21504 – parameters trim = 20 and truncLen = c(160), 1793 ASVs generated; DRA006094 – parameters trim = 20 and truncLen = c(200), 375 ASVs generated; PRJNA684584 – parameter trim = 20, 1621 ASVs generated. Feature selection phase We selected PRJEB21504 for discovery following the eligibility criteria. Applying the REFS algorithm yielded 53 out of the 1793 features; that is, REFS achieved its highest accuracy (> 0.95) with 53 features, Fig. 2a. The result of the validation module for the selected 53 features was an average AUC of 0.936, considered "excellent" diagnostic accuracy [50].
The Multilayer Perceptron (MLP) algorithm had the best performance, Fig. 2b. In contrast, we applied the same validation module to the complete set of 1793 features, and the resulting average AUC was 0.718. For feature selection using K-Best, with k = 53, the average AUC was 0.902. The detailed validation results are presented in Table 2. Considering the Matthews correlation coefficient (MCC) as an additional metric to evaluate the performance of the methodology, REFS achieved an average MCC value of 0.8715, which is higher than the MCC values achieved by the other feature selection methods; see Table 2. Testing phase We searched for the 53 features selected by REFS in each testing dataset; the result was 22 out of 53 for DRA006094 and 48 out of 53 for PRJNA684584. After applying the validation module, we obtained an average AUC of 0.778 for DRA006094 and an average AUC of 0.71 for PRJNA684584. Both average AUCs correspond to "good" diagnostic accuracy [50]. In this case, the classifier with the best performance was KNeighbors for DRA006094 and Extra Trees for PRJNA684584, Fig. 2c, d. For the comparative analysis, we searched for the 53 features selected by K-Best on the testing datasets. The result was 21 out of 53 for DRA006094 and 52 out of 53 for PRJNA684584. We applied the validation module to the features found in both testing datasets. The resulting average AUCs were 0.732 for DRA006094 and 0.652 for PRJNA684584. For the 10-time random selection, the resulting average AUCs were 0.528 for DRA006094 and 0.5582 for PRJNA684584. The detailed validation results are presented in Table 2. Using the MCC as an additional metric for this phase, REFS achieved better performance in both testing datasets, with average MCC values of 0.4057 for DRA006094 and 0.3567 for PRJNA684584; see Table 2. Type 2 diabetes (T2D) Raw data processing The trimming parameters used for the DADA2-based script filtering process and the number of ASVs generated for each dataset were: PRJNA3259311 – parameter trimLeft = 15, 3316 ASVs generated; PRJNA5545355 – parameter truncLen = c(400), 3201 ASVs generated; PRJEB53017 – no parameter used, 3672 ASVs generated. Feature selection phase We selected PRJNA3259311 for discovery according to the eligibility criteria. Applying the REFS algorithm yielded 9 out of the 3316 features; thus, REFS achieved its highest accuracy (> 0.90) with 9 features, Fig. 3a. The result of the validation module for the selected 9 features was an average AUC of 0.79, which is considered "good" diagnostic accuracy [50]. In this case, the Multilayer Perceptron (MLP) algorithm had the best performance, Fig. 3b. In comparison, we applied the same validation module to the total of 3316 features, and the resulting average AUC was 0.494. For feature selection using K-Best, with k = 9, the average AUC was 0.75. The detailed validation results are presented in Table 3. Using the Matthews correlation coefficient (MCC) as an additional metric to evaluate the performance of the methodology, REFS achieved better performance than the other feature selection methods, with an average MCC of 0.79; see Table 3. Testing phase We searched for the 9 features selected by REFS in each testing dataset; the result was 5 out of 9 for both testing datasets. We applied the validation module to the features found in both testing datasets. For PRJNA5545355 we obtained an average AUC of 0.714, and for PRJEB53017 we obtained an average AUC of 0.662.
The average AUC for PRJNA5545355 corresponds to "good" diagnostic accuracy, and for PRJEB53017 the average AUC corresponds to "sufficient" diagnostic accuracy [50]. For both testing datasets, the classifier with the best performance was Extra Trees, Fig. 3c, d. For the comparative analysis, we searched for the 9 features selected by K-Best on each testing dataset; the result was 4 out of 9 for both testing datasets. We applied the validation module to the features found in both testing datasets. The resulting average AUCs were 0.668 for PRJNA5545355 and 0.582 for PRJEB53017. For the 10-time random selection, the resulting average AUCs were 0.5238 for PRJNA5545355 and 0.5154 for PRJEB53017. The detailed validation results are presented in Table 3. Using the MCC as an additional metric for this phase, REFS achieved better performance in both testing datasets, with average MCC values of 0.4210 for PRJNA5545355 and 0.3429 for PRJEB53017; see Table 3. Discussion In traditional analyses, groups of taxa called Operational Taxonomic Units (OTUs) are generated from sequences that are similar within a percentage of error, usually 3% [43, 44]. Given this error, it is possible to miss variations (possible mutations), leaving specific taxa that could be important in medical applications unable to be analyzed. Using Amplicon Sequence Variants (ASVs), this potential loss can be avoided thanks to their properties: ASVs inferred independently from different studies or different samples are comparable across studies, they reduce the need for computational power, and they are not limited by incomplete reference databases, to mention some of them [43, 44]. ASVs allow individual experiments whose results can be tested and validated in separate datasets, in contrast to merging datasets as in pooling analysis [29]. Using our methodology, we are able to achieve a signature of taxa across different datasets, in contrast with [51], where a signature of taxa linking the microbiome and the diagnosis of ASD was not found through the analysis of various datasets. To the best of our knowledge, these types of experiments have not been reported in the literature. The complete resulting taxa for each experiment are listed in Tables 1-3 of Additional file 1. Visualizations of the differential abundance of the results are in Supplementary figures 1-12 of Additional file 2. Finally, the individual AUC and MCC values obtained in the random selection are in Additional file 3. Despite the promising results and findings, more research and experimentation should be done with microbiome sequencing, because counterexamples can be found that make this methodology ineffective. Such is the case with datasets related to asthma: PRJEB44044 [52], PRJNA601757 [53], and PRJNA913468 [54], where the feature selection and testing phases were inefficient. This was due to the lack of datasets with samples from the same source, the quality of the sequences, the lack of documentation, and variations in the technical sequencing equipment used, also known as the batch effect [55, 56]. Thus, this methodology is dependent on the batch effect. Additionally, the experiments must be extended to study the taxa-disease or taxa-disorder relationships for possible medical applications. Furthermore, across all experiments, it is easy to notice that the classification performance on the discovery dataset is considerably higher than on the validation datasets. There are two possible explanations for this result.
First, not all ASV features selected by the proposed methodology on the discovery dataset are found in the validation datasets; thus, the classifiers do not have access to all the information that led to the better performance on the discovery data, resulting in a decreased AUC and MCC. Second, the datasets could present differences due to the batch effect. We intentionally did not apply any batch correction methodology in this work, to better isolate and study the results of the proposed methodology.
Conclusion We developed a methodology for reproducible biomarker discovery in 16S rRNA microbiome sequence analysis, addressing the issues related to high-dimensional data with a small number of samples, inconsistent results, and the lack of uniform processing and analysis methodologies, and achieving validation in separate databases. The results from the three experiments show that the proposed methodology achieved better performance (AUC and MCC) compared to the K-Best and 10-time random selection methods. This methodology is a first approach to increasing reproducibility and providing robust and reliable results, and further testing needs to be done, as shown by the asthma experiment (PRJEB44044, PRJNA601757, and PRJNA913468) described in the Discussion section. Nevertheless, the approach of studying individual ASVs makes it possible to identify small variations that can have a positive impact on medical applications. This methodology provides results that will hopefully allow pharmacologists, biologists, and health researchers to direct their efforts to the analysis of a shorter list of individual taxa, instead of thousands of taxa grouped in clusters.
Background In recent years, human microbiome studies have received increasing attention, as this field is considered a potential source for clinical applications. With the advancements in omics technologies and AI, research focused on the discovery of potential biomarkers in the human microbiome using machine learning tools has produced positive outcomes. Despite the promising results, several issues can still be found in these studies, such as datasets with a small number of samples, inconsistent results, and a lack of uniform processing and methodologies, which, together with other factors, lead to a lack of reproducibility in biomedical research. In this work, we propose a methodology that combines the DADA2 pipeline for 16S rRNA sequence processing and Recursive Ensemble Feature Selection (REFS) across multiple datasets to increase reproducibility and obtain robust and reliable results in biomedical research. Results Three experiments were performed analyzing microbiome data from patients/cases with Inflammatory Bowel Disease (IBD), Autism Spectrum Disorder (ASD), and Type 2 Diabetes (T2D). In each experiment, we found a biomarker signature in one dataset and applied it to two others for further validation. The effectiveness of the proposed methodology was compared with that of other feature selection methods, such as K-Best with F-score and random selection as a baseline. The Area Under the Curve (AUC) was employed as a measure of diagnostic accuracy and used as a metric for comparing the results of the proposed methodology with those of other feature selection methods. Additionally, we used the Matthews Correlation Coefficient (MCC) as a metric to evaluate the performance of the methodology and for comparison with other feature selection methods. Conclusions We developed a methodology for reproducible biomarker discovery in 16S rRNA microbiome sequence analysis, addressing the issues related to data dimensionality, inconsistent results, and validation across independent datasets. The findings from the three experiments, across nine different datasets, show that the proposed methodology achieved higher accuracy compared to other feature selection methods. This methodology is a first approach to increasing reproducibility and providing robust and reliable results. Supplementary information The online version contains supplementary material available at 10.1186/s12859-024-05639-3. Keywords
Supplementary information
Acknowledgements Not applicable. Author contributions A.L.R., D.R.V., A.T. developed the methodology. A.L.R., D.R.V. acquired the data, performed the analyses and processed the results. D.R.V. wrote the manuscript. S.K., A.D.K., A.T., D.O., J.G., A.L.R. reviewed the manuscript. All authors approved the final manuscript. Funding Not applicable. Availability of data and materials The code/scripts to perform the experiments are available on GitHub (https://github.com/steppenwolf0/MicrobiomeREFS). Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Bioinformatics. 2024 Jan 15; 25:26
oa_package/ae/20/PMC10789030.tar.gz
PMC10789031
0
Introduction Rhabdomyosarcoma (RMS) is a type of soft tissue sarcoma occurring in children; it is a high-grade neoplasm consisting of skeletal myoblast-like cells [1]. RMS can be divided into two histopathological subtypes: embryonal rhabdomyosarcoma (ERMS) and alveolar rhabdomyosarcoma (ARMS) [2]. The 5-year overall survival rates of children with RMS have improved significantly due to the adoption of multimodal therapeutic protocols [3]. However, children with high-risk RMS usually have low survival rates due to the development of chemoresistance and the metastasis and recurrence of this disease [4]. Uncovering the molecular mechanisms underlying RMS may assist in identifying novel therapeutic targets and improve the prognosis of patients with this malignancy. Long noncoding RNAs (lncRNAs) are noncoding RNA transcripts > 200 nucleotides in length that act as powerful intermediaries in numerous cellular physiological processes during the development and progression of almost all diseases [5]. Studies have shown that lncRNAs play roles in regulating cancer stem cells (CSCs) by targeting specific signaling pathways and transcription factors. The importance of lncRNAs as potential therapeutic targets for the elimination of CSCs has been emphasized in many research studies, as lncRNAs have the ability to maintain the characteristics of stem cells and facilitate the development of tumors by regulating gene expression. Some lncRNAs are well established to have tumor-specific expression; these lncRNAs possess unique regulatory functions in tumor cells, ranging from mediating increases in invasion/migration to mediating recurrence, and have been considered prognostic/diagnostic biomarkers or therapeutic targets [6-9]. lncRNAs regulate tumor progression through a variety of mechanisms, and the function of lncRNAs as competitive endogenous RNAs (ceRNAs) allows them to abolish miRNA-mediated inhibition of target genes by sponging microRNAs (miRNAs/miRs) [10]. miRNAs are endogenous small noncoding RNAs, 18-25 nucleotides in length, that bind to the 3'-untranslated regions (3'-UTRs) of their target genes to regulate their expression; they reduce the stability and thus also the translational efficiency of mRNAs [11]. Therefore, the expression levels of tumor suppressors may be decreased and the expression levels of oncogenes increased by miRNAs during the initiation and/or development of RMS [12]. Guanine nucleotide exchange factor T (GEFT, ARHGEF25, or p63RhoGEF), which is encoded by a gene located on chromosome 12q13.3, is a member of the Rho guanine nucleotide exchange factor family and is typically expressed in excitable tissues, including brain, muscle, and heart tissues. GEFT accelerates GDP/GTP exchange to activate Rho GTPases. It also plays essential roles in skeletal muscle regeneration and myogenic differentiation [13-16]. Our previous studies indicated that GEFT is highly expressed in RMS and that high GEFT expression is significantly related to poor prognosis, lymph node metastasis, and distant metastasis [17, 18]. GEFT exerts its tumor-promoting effect via positive regulation of the proliferation, migration, invasion, and antiapoptotic capabilities of RMS cells through the Rac1/Cdc42-PAK signaling pathway, inducing EMT [19]. mTOR is encoded by a gene located on chromosome 1p36.2 and is a member of the PI3K-related kinase family.
It is often involved in regulating cell survival, growth, metabolism, protein synthesis, and autophagy, and the mTOR signaling pathway is dysregulated in numerous types of cancer and is frequently associated with carcinogenesis and tumor progression; thus, mTOR represents an ideal and promising therapeutic target. In addition, several studies have shown that lncRNAs are regulators of mTOR signaling in cancers [20]. In the present study, GEFT was found to positively regulate mTOR expression in RMS cells and to promote tumor progression to some extent through its ability to induce mTOR expression. However, the potential molecular mechanism by which GEFT modulates mTOR expression in RMS remained undetermined. Here, a novel lncRNA, termed lnc-PSMA8-1 (ENST00000580975), was identified and shown to be activated by GEFT and highly overexpressed in RMS cell lines and tissues, which was indicative of poor prognosis. Next, it was shown that lnc-PSMA8-1 promoted the proliferation and migration of RMS cells and upregulated the expression of mTOR by sponging miR-144-3p. Thus, whether lnc-PSMA8-1, miR-144-3p, and/or mTOR could be considered novel therapeutic targets for RMS, and how the lnc-PSMA8-1/miR-144-3p/mTOR axis regulates RMS progression in vivo, will be assessed in future studies.
Materials and methods Clinical samples In the present study, 20 paraffin-embedded RMS tissues and 10 normal skeletal muscle tissues were obtained from the First Affiliated Hospital, Shihezi University (Xinjiang, China) and the First Affiliated Hospital, Xinjiang Medical University (Xinjiang, China). The inclusion criteria were a diagnosis confirmed by two pathologists and the lack of systemic or local therapy prior to surgery. The exclusion criteria were a history of a second primary malignant tumor and local recurrence or metastasis. The pathological images of RMS samples are shown in Fig. S1. All the patients and their families were informed regarding specimen collection, and the patients' parents/guardians provided written informed consent. All experiments were approved by the Ethics Committee of Shihezi University School of Medicine (No. 2019-021-01). Cell culture The ERMS cell lines RD and A204 were obtained from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China) and Fu Xiang Biotechnology Co., Ltd. (Shanghai, China). The ARMS cell lines RH30 and PLA802 were purchased from Shanghai Fu Xiang Biotechnology Co., Ltd. and Shanghai Hong Shun Biotechnology Co., Ltd. The human skeletal muscle cell line HSKMC was purchased from Beijing Be Na Biotechnology Co., Ltd. The above cell lines were cultured in DMEM (Gibco; Thermo Fisher Scientific, Inc.) supplemented with 10% fetal bovine serum (Biological Industries, Israel) and 1% streptomycin-penicillin (Solarbio, China) at 37 °C in a humidified incubator under a 5% CO2 atmosphere. Cell transfection Shanghai GeneChem Co., Ltd. designed and synthesized the GEFT and lnc-FAM59A-1 overexpression plasmids, the GEFT interference plasmid, and the empty vector. The siRNAs against human lncRNAs (lnc-CEACAM19-1, lnc-VWCE-2, lnc-GPX7-1, lnc-PSMA8-1) and mTOR, the miR-144-3p mimic, the antisense miR-144-3p inhibitor, and the negative scramble control RNA oligo were purchased from Shanghai GenePharma Co., Ltd. Lipofectamine 2000 (Thermo Fisher Scientific, Inc.) was used for all transient transfections. RNA preparation and quantitative reverse transcription-PCR (qRT-PCR) Using a miRNeasy FFPE Kit or a miRNeasy Mini Kit (QIAGEN GmbH), total RNA was obtained from tissue samples or cultured cell lines, respectively. A Cytoplasmic and Nuclear RNA Purification Kit (Norgen Biotek Corp.) was used to isolate and purify cytoplasmic and nuclear RNA according to the manufacturer's protocol. Reverse transcription was performed with a miScript II RT Kit (QIAGEN GmbH). cDNA was subsequently subjected to qRT-PCR analysis on an Applied Biosystems 7500 Real-Time PCR System (Thermo Fisher Scientific, Inc.) using a miScript SYBR Green PCR Kit (QIAGEN GmbH). Subsequently, the samples were amplified by PCR, and the 2^-ΔΔCt method was used to calculate relative gene expression levels. The sequence-specific qRT-PCR primers targeting miR-144-3p and U6 were designed and purchased from Shanghai GenePharma Co., Ltd. Additional RNA sequence-specific qRT-PCR primers were acquired from Sangon (Shanghai, China). All of the sequences of the real-time PCR primers are listed in Table S1. Microarray analysis Total RNA was obtained from GEFT-overexpressing and GEFT-knockdown RMS cells (RD, A204, RH30, and PLA802), amplified, and then used to synthesize fluorescent cRNA. The labeled cRNA was hybridized onto an Affymetrix GeneChip® Human Transcriptome Array 2.0 (Affymetrix Inc.).
The microarray experiments and data analyses were performed by Beijing Compass Biotechnology Co., Ltd. Cell proliferation and apoptosis assays A CCK-8 assay (Dojindo Molecular Technologies, Inc.) was performed according to the manufacturer's instructions to evaluate cell proliferation. Approximately 4 × 10^3 cells per well were seeded into four 96-well plates and cultured with DMEM. After 0, 24, 48, or 72 h, 10 μl of CCK-8 reagent per well was added, and the cells were further incubated for 1.5 h at 37 °C. Subsequently, the optical density at 450 nm (OD450) was measured. An Annexin V-APC/PI Apoptosis Detection Kit (KeyGEN, China) was used to analyze apoptosis 48 h post-transfection according to the manufacturer's instructions. The apoptosis rate of cells was determined with a PAS flow cytometry system (PARTEC, Germany). Cell invasion and migration assays A total of 2.5 × 10^5 cells in 0.2 ml of serum-free DMEM were seeded into the upper chamber of a Transwell insert (8 μm pore size, Costar; Corning, Inc.) containing a membrane coated with Matrigel for the invasion assay or an uncoated membrane for the migration assay (BD Biosciences). After incubation at 37 °C for 24 h, the transfected cells that had migrated through or invaded the insert membrane were fixed and stained using 0.5% crystal violet solution. Subsequently, the number of invaded or migrated cells was determined using an optical microscope (Olympus BX51). RNA-binding protein immunoprecipitation (RIP) A total of 5 × 10^5 RD or RH30 cells were plated in 100 mm cell culture dishes and incubated for 24 h. Subsequently, the cells were transfected with the miR-144-3p mimic or miR-NC. Then, 48 h after transfection, in accordance with the manufacturer's protocol, a RIP kit was used to assess the binding of endogenous Ago2 to RNA by RIP with an anti-Ago2 monoclonal antibody (Millipore Sigma); IgG was used as the control. Finally, the relative enrichment of lnc-PSMA8-1 and mTOR in the immunoprecipitates was determined by qRT-PCR. Luciferase reporter assay We used the DIANA, RNA22, miRanda, and miRWalk2.0 bioinformatics tools to predict the binding sites between lnc-PSMA8-1 and miR-144-3p and between the mTOR 3'UTR and miR-144-3p. Luciferase plasmids containing the wild-type lnc-PSMA8-1 binding site (lnc-PSMA8-1-WT), the mutated lnc-PSMA8-1 binding site (lnc-PSMA8-1-MUT), the wild-type mTOR 3'UTR (mTOR 3'UTR-WT), or the mutated mTOR 3'UTR (mTOR 3'UTR-MUT), the corresponding empty vector controls, and the Renilla luciferase plasmid were constructed by Shanghai GeneChem Co., Ltd. RD and RH30 cells were seeded into 24-well plates (3 × 10^4 cells/well); 24 h later, the cells were transfected with 0.1 μl of one of the luciferase plasmids, 0.02 μl of the Renilla luciferase expression plasmid, and 100 nM miR-NC or the miR-144-3p mimic. After 48 h, Renilla luciferase expression was measured according to the manufacturer's protocol. Western blot analysis Western blotting was used to measure the protein expression levels of mTOR and p-mTOR. Equal quantities of protein were loaded into each lane of an SDS gel, separated using SDS-PAGE, and transferred to PVDF membranes, which were then blocked with 5% BSA for 2 h. Because the positions where the protein blots appeared were quite stable, and to obtain clearer western blot bands, we set the upper and lower boundaries of the membranes according to protein molecular weight and the left and right boundaries according to the different cell lines or experiments.
Therefore, all the blots were cropped prior to hybridization with primary antibodies. Subsequently, the membranes were incubated with primary antibodies overnight at 4 °C. The following antibodies were used: anti-β-actin (OriGene Technologies, Inc.), anti-mTOR, and anti-p-mTOR (both from Cell Signaling Technology, Inc.). Following six washes, the membranes were incubated with a secondary antibody (OriGene Technologies, Inc.) for 2 h and were then washed. Signals were visualized using a chemiluminescence solution (Thermo Fisher Scientific, Inc.). Statistical analysis SPSS 26.0 software was used for statistical analysis. All data, obtained from at least three separate experiments, are presented as the means ± SDs. GraphPad Prism software was used to draw graphs. Differences with P < 0.05 were regarded as statistically significant (*P < 0.05, **P < 0.01, and ***P < 0.001).
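As a worked illustration of the 2^-ΔΔCt calculation referenced in the qRT-PCR subsection above: relative expression is the target gene's fold change, normalized to a reference gene (U6 or β-actin in this study), between a treated sample and a control. The Ct values below are hypothetical and serve only to show the arithmetic.

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """2^-ΔΔCt: ΔCt = Ct(target) - Ct(reference) within each condition,
    ΔΔCt = ΔCt(sample) - ΔCt(control)."""
    ddct = (ct_target_sample - ct_ref_sample) - (ct_target_control - ct_ref_control)
    return 2 ** (-ddct)

# Hypothetical Ct values: after normalization the target amplifies two
# cycles earlier in the sample, i.e. roughly a four-fold upregulation.
print(relative_expression(22.0, 18.0, 26.0, 20.0))  # ΔΔCt = -2 -> 4.0
```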
Results Screening of GEFT-regulated lncRNAs in RMS cell lines To identify the lncRNAs that are regulated by GEFT, microarray analysis was used, and lncRNA expression levels were compared between GEFT-overexpressing and GEFT-knockdown cells. The microarray-based analysis identified 31 differentially expressed lncRNAs (P < 0.05, fold change > 2.0), namely, 14 upregulated lncRNAs and 17 downregulated lncRNAs (Fig. 1A, Table S2). To determine the reliability of the microarray chip analysis results, 10 lncRNAs were randomly selected (5 upregulated and 5 downregulated), and their expression was validated by qRT-PCR. The results showed that the expression levels of these 10 lncRNAs were essentially consistent with the results of the microarray analysis (Fig. 1B). Effects of lnc-CEACAM19-1, lnc-VWCE-2, lnc-GPX7-1, lnc-PSMA8-1, and lnc-FAM59A-1 on the proliferation and apoptosis of RMS cells According to the microarray analysis and bioinformatic analysis results, four upregulated lncRNAs, namely, lnc-CEACAM19-1 (NONHSAT066708), lnc-VWCE-2 (NONHSAT021625), lnc-GPX7-1 (ENST00000607321), and lnc-PSMA8-1 (ENST00000580975), and one downregulated lncRNA, lnc-FAM59A-1 (ENST00000581134), were predicted to regulate the biological functions of RMS cells. In RMS cell lines, we knocked down the upregulated lncRNAs (lnc-CEACAM19-1, lnc-VWCE-2, lnc-GPX7-1, and lnc-PSMA8-1) to assess their biological functions. Moreover, we overexpressed the downregulated lncRNA (lnc-FAM59A-1) to assess its biological functions. The qRT-PCR results showed that the knockdown and overexpression were successful (Fig. 2A). Growth curves constructed using data from CCK-8 cell proliferation assays showed that knockdown of the four upregulated lncRNAs (lnc-CEACAM19-1, lnc-VWCE-2, lnc-GPX7-1, and lnc-PSMA8-1) and overexpression of the downregulated lnc-FAM59A-1 decreased the cell proliferation rate (Fig. 2B), and flow cytometric analysis showed that the late apoptosis rate in RD cells and the total apoptosis rate in RH30 cells were also increased (Fig. 2C). Effects of lnc-CEACAM19-1, lnc-VWCE-2, lnc-GPX7-1, lnc-PSMA8-1, and lnc-FAM59A-1 on the invasive and migratory capacities of RMS cells The effects of lnc-CEACAM19-1, lnc-VWCE-2, lnc-GPX7-1, lnc-PSMA8-1, and lnc-FAM59A-1 on the invasion and migration of RMS cells were further studied using Transwell invasion and migration assays. The data revealed that knockdown of the four upregulated lncRNAs and overexpression of the downregulated lnc-FAM59A-1 reduced the invasion (Fig. 3A) and migration (Fig. 3B) of RMS cells. miR-144-3p may play a bridging role between lnc-PSMA8-1 and mTOR Accumulating evidence indicates that lncRNAs may serve as competitive endogenous RNAs (ceRNAs) to antagonize the functions of miRNAs; that is, lncRNAs sponge miRNAs to decrease their abundance and reduce their regulatory effects on their target 3'-UTRs. To examine whether these lncRNAs mediate the function of GEFT by acting as ceRNAs to regulate mTOR expression in RMS cells, the upregulated lncRNAs that positively regulate mTOR expression in RD and RH30 cells were identified by transfection of siRNAs against lnc-CEACAM19-1, lnc-VWCE-2, lnc-GPX7-1, or lnc-PSMA8-1. As shown in Fig. 4A, interference with lnc-GPX7-1 or lnc-PSMA8-1 expression reduced the expression of mTOR in RD cells, with lnc-PSMA8-1 knockdown decreasing mTOR expression most significantly.
Knockdown of lnc-PSMA8-1 notably reduced mTOR expression in RH30 cells, but this effect was not observed in RH30 cells transfected with siRNA targeting lnc-GPX7-1. Given that miRNAs are present mainly in the cytoplasm, the subcellular localization of lnc-PSMA8-1 was further determined. According to the results of analysis with the online bioinformatics tool lncLocator and qRT‒PCR, lnc-PSMA8-1 was localized primarily in the cytoplasm (Fig. 4 B). Thus, lnc-PSMA8-1 possibly functions as a ceRNA that indirectly regulates mTOR expression. The DIANA and RNA22 online tools were used to identify miRNAs targeted by lnc-PSMA8-1, and miRanda and miRWalk 2.0 were used to identify the miRNAs that target the 3’UTR of mTOR. The bioinformatics results combined with the results of whole-genome miRNA expression profiling in RMS cells indicated that miR-144-3p may play a bridging role between lnc-PSMA8-1 and mTOR (Fig. 4 C). Total RNA of RMS tissues and normal skeletal muscle tissues was extracted, and the expression of lnc-PSMA8-1, miR-144-3p, and mTOR was measured by qRT‒PCR to analyze their interrelationships. The results showed that the expression of lnc-PSMA8-1 and mTOR was higher in RMS tissues than in normal skeletal muscle and that the expression of miR-144-3p was lower in RMS tissues than in normal skeletal muscle (Fig. 4 D-F). Spearman correlation analysis showed that miR-144-3p expression was negatively correlated with lnc-PSMA8-1 and mTOR expression and that lnc-PSMA8-1 expression was positively correlated with mTOR expression (Fig. 4 G-I). Next, the expression levels of lnc-PSMA8-1, miR-144-3p, and mTOR were measured in human RMS and skeletal muscle cells. The results showed that the expression patterns of lnc-PSMA8-1, miR-144-3p, and mTOR in these cells were consistent with those measured in the corresponding tissues (Fig. 4 J-K). These data suggested that the expression of lnc-PSMA8-1, miR-144-3p, and mTOR in RMS was consistent with the expression pattern predicted by the ceRNA-based regulatory lncRNA–miRNA–mRNA network. lnc-PSMA8-1 modulates mTOR expression by competitively binding to miR-144-3p To determine the targeting effect of miR-144-3p on lnc-PSMA8-1, luciferase reporters containing the wild-type (lnc-PSMA8-1-WT) or mutated miR-144-3p binding site (lnc-PSMA8-1-MUT) were constructed. The results showed that overexpression of miR-144-3p reduced luciferase activity in cells transfected with the wild-type reporter vector but not in cells transfected with the mutant reporter vector or the empty vector (Fig. 5 A). miRNAs bind to their target mRNAs, resulting in mRNA degradation and/or translational repression in a manner dependent on AGO2. To ascertain the regulatory effect of miR-144-3p on lnc-PSMA8-1, anti-AGO2 RIP was performed in RD and RH30 cells transiently overexpressing miR-144-3p. Endogenous lnc-PSMA8-1 was significantly enriched in the AGO2 precipitate from cells transfected with miR-144-3p (Fig. 5 B). Moreover, knockdown of lnc-PSMA8-1 resulted in upregulated expression of miR-144-3p in RD and RH30 cells (Fig. 5 C), whereas overexpression of miR-144-3p decreased lnc-PSMA8-1 expression in RD and RH30 cells (Fig. 5 D). The data presented above support the hypothesis that lnc-PSMA8-1 acts as a ceRNA for miR-144-3p. The bioinformatic analysis results revealed that mTOR may be a direct target of miR-144-3p, whose binding to a target mRNA causes its posttranscriptional inhibition in an AGO2-dependent manner.
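Reporter readouts of the kind used above are usually expressed as the ratio of the experimental firefly signal to a co-transfected internal control (commonly Renilla), scaled to the control transfection. The paper does not detail its normalization, so the scheme below is a generic assumption rather than the authors' protocol, and the counts are invented.

# Generic dual-luciferase normalization (assumed scheme, not from the paper).
def relative_activity(firefly, renilla, firefly_ctrl, renilla_ctrl):
    # Normalize each well to its internal control, then to the control group.
    return (firefly / renilla) / (firefly_ctrl / renilla_ctrl)

# WT 3'UTR reporter + miR-144-3p mimic vs. mimic control (invented counts):
print(relative_activity(1200, 900, 2500, 950))  # about 0.5: activity repressed
# MUT 3'UTR reporter + miR-144-3p mimic: binding site abolished.
print(relative_activity(2400, 920, 2500, 950))  # about 1.0: no repression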
Anti-AGO2 RIP was therefore used to confirm the association between mTOR and miR-144-3p. The results revealed that mTOR was notably enriched in the miR-144-3p precipitate, indicating that the two reside in the same RNA-induced silencing complex (Fig. 5 E). To further validate the direct association between mTOR and miR-144-3p, luciferase reporters containing mTOR 3’UTR-WT or mTOR 3’UTR-MUT were constructed. Transient overexpression of miR-144-3p reduced luciferase activity in cells transfected with the wild-type reporter vector but not in those transfected with the mutant reporter or empty vector (Fig. 5 F). In addition, qRT‒PCR and Western blotting showed that transient overexpression of miR-144-3p reduced mTOR expression (Fig. 5 G-H). All these results indicated that miR-144-3p suppressed mTOR expression by directly targeting the 3’UTR of mTOR. Given that miR-144-3p targets mTOR for inhibition, we next asked whether lnc-PSMA8-1 functions through competitive binding to miR-144-3p, thereby attenuating the inhibitory effect of miR-144-3p on mTOR. To further investigate the role of lnc-PSMA8-1, the expression of mTOR in RD and RH30 cells was assessed after transfection of an siRNA to knock down lnc-PSMA8-1 combined with an inhibitor of miR-144-3p. The results revealed that knockdown of lnc-PSMA8-1 decreased the expression of mTOR in RD and RH30 cells and that inhibition of miR-144-3p reversed the decrease in mTOR following the knockdown of lnc-PSMA8-1 (Fig. 6 A-B). The modulation of mTOR expression by miR-144-3p depends on the binding of miR-144-3p to the mTOR 3’UTR; if the regulatory effect of lnc-PSMA8-1 on mTOR depends on competitive binding of lnc-PSMA8-1 to miR-144-3p, then lnc-PSMA8-1 should also have a regulatory effect on the mTOR 3’UTR. Using the mTOR 3’UTR-WT and mTOR 3’UTR-MUT luciferase reporters, we found that depletion of lnc-PSMA8-1 decreased luciferase activity in cells transfected with mTOR 3’UTR-WT but not in cells transfected with the mutant reporter or the empty vector (Fig. 6 C). These results indicate that lnc-PSMA8-1 modulates mTOR expression in RMS cells by competitively binding to miR-144-3p. lnc-PSMA8-1 promotes RMS progression partly through sponging miR-144-3p to regulate mTOR expression To further confirm whether miR-144-3p affects the function of lnc-PSMA8-1 in RMS cells, the proliferative, apoptotic, invasive, and migratory capacities of RD and RH30 cells transfected with lnc-PSMA8-1 siRNA in combination with the miR-144-3p inhibitor were evaluated by a CCK-8 cell proliferation assay, flow cytometric analysis, and Transwell migration and invasion assays. Following lnc-PSMA8-1 knockdown, inhibition of miR-144-3p increased the proliferation of RMS cells (Fig. 7 A) and markedly reduced apoptosis (Fig. 7 B). Moreover, inhibition of miR-144-3p reversed the suppressive effect of lnc-PSMA8-1 knockdown on the invasion (Fig. 7 C) and migration (Fig. 7 D) capabilities of the cells. These observations suggest that lnc-PSMA8-1 promotes RMS progression by repressing miR-144-3p. To investigate whether the modulatory effects of the lnc-PSMA8-1/miR-144-3p axis on the proliferative, apoptotic, invasive, and migratory capacities of RMS cells are mediated through the miR-144-3p target gene mTOR, these behaviors were evaluated in RD cells and RH30 cells transfected with the miR-144-3p inhibitor in combination with mTOR siRNA. Knockdown of mTOR inhibited the proliferation of RMS cells treated with the miR-144-3p inhibitor (Fig. 8 A).
Knockdown of mTOR reversed the reduction in apoptosis induced by the miR-144-3p inhibitor (Fig. 8 B). Moreover, knockdown of mTOR markedly reduced the enhancing effects of miR-144-3p inhibition on the invasion (Fig. 8 C) and migration (Fig. 8 D) capabilities of cells. These results collectively suggest that lnc-PSMA8-1 promotes RMS progression by competitively binding to miR-144-3p to modulate mTOR expression.
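As a brief note on the statistics underlying this section, the tissue-level interrelationships among lnc-PSMA8-1, miR-144-3p, and mTOR (Fig. 4 G-I) rest on Spearman's rank correlation, which takes only a few lines to compute. The values below are invented for illustration and are not the study's measurements.

import numpy as np
from scipy.stats import spearmanr

# Hypothetical relative expression in five RMS tissue samples.
lnc_psma8_1 = np.array([2.1, 3.4, 1.8, 4.0, 2.7])
mir_144_3p  = np.array([0.8, 0.4, 1.1, 0.3, 0.6])
mtor        = np.array([1.9, 3.0, 1.5, 3.8, 2.4])

rho, p = spearmanr(lnc_psma8_1, mir_144_3p)
print("lnc-PSMA8-1 vs miR-144-3p:", round(rho, 2), round(p, 3))  # negative rho
rho, p = spearmanr(lnc_psma8_1, mtor)
print("lnc-PSMA8-1 vs mTOR:      ", round(rho, 2), round(p, 3))  # positive rho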
Discussion Approximately 7% of all pediatric malignancies are soft tissue sarcomas, of which 50% of cases are RMS [ 3 ]. Radiation, combination chemotherapy, and surgery are commonly used approaches to treat RMS [ 21 ]. Over the past 50 years, cooperative group trials have shown excellent outcomes for patients with low-risk RMS [ 22 ]. In contrast, patients with distant metastases, who are at the greatest risk, have a two-year event-free survival (EFS) rate of < 20% [ 23 ]. Despite cooperative group trials dating back to 1972, the outcomes of these patients have not improved for five decades, highlighting the need to improve our understanding of the molecular mechanisms underlying this disease [ 24 ]. Approximately 2% of human genetic material encodes proteins, while the vast majority is transcribed into ncRNAs [ 25 , 26 ]. Despite the annotation of thousands of lncRNAs in recent years, only a small fraction of them have undergone functional characterization [ 27 ]. lncRNAs can regulate chromatin function, influence the stability and translation of mRNAs within the cytoplasm, and interfere with signaling pathways through lncRNA–DNA, lncRNA–RNA, and lncRNA–protein interactions during pretranscriptional, transcriptional, or posttranscriptional processes [ 28 ]. lncRNAs, which play crucial roles in almost all diseases, may therefore eventually serve as therapeutic targets, and this possibility has theoretical advantages. The high degree of tissue specificity of lncRNA profiles and the regulation of cellular networks by lncRNAs suggest that targeting lncRNAs may have an advantage over targeting proteins in avoiding potentially harmful unintended consequences. Additionally, the lack of translation, fast degradation, and low expression levels of lncRNAs may allow more rapid effects at lower doses [ 10 ]. Therefore, searching for potential lncRNA therapeutic targets in rhabdomyosarcoma is a highly promising endeavor. The GEFT gene is located on chromosome 12q13.3-24.1 and has been validated as overexpressed in RMS and associated with survival and prognosis. GEFT promotes metastasis and tumorigenicity in RMS by activating EMT induced by Rac1/Cdc42 signaling [ 17 , 18 , 29 ]. Here, microarray analysis was used to identify GEFT-regulated lncRNAs. In this study, knockdown of the four lncRNAs upregulated by GEFT (lnc-CEACAM19-1, lnc-VWCE-2, lnc-GPX7-1, and lnc-PSMA8-1) and overexpression of the GEFT-downregulated lnc-FAM59A-1 attenuated the malignant phenotypes of RMS cells. It was then found that lnc-PSMA8-1 was activated by GEFT and highly overexpressed in RMS cell lines and tissues, which was indicative of a poor prognosis. An increasing number of studies have shown that lncRNAs with multiple complementary miRNA binding sites function as ceRNAs or miRNA sponges, reducing miRNA function and indirectly targeting mRNAs, thus affecting the occurrence and development of tumors [ 30 – 33 ]. Wang et al. [ 34 ] demonstrated that the lncRNA HULC induced the phosphorylation of CREB by functioning as a ceRNA for miR-372 to reduce the translational repression of its target gene, PRKACB. Chen et al. [ 35 ] showed that LINC01234 in gastric cancer cells modulated CBFB expression by competitively binding to miR-204-5p. Yuan et al. [ 36 ] found that lncRNA-ATB was activated by TGF-β and accelerated hepatocellular carcinoma cell invasion by serving as a ceRNA for miR-200s to modulate the expression of ZEB1/2, ultimately inducing EMT.
Multiple types of cancer exhibit dysregulated mTOR signaling, and this pathway is frequently associated with carcinogenesis and tumor progression. Abnormal mTOR activation has been reported in > 70% of cancers [ 37 ]. Therefore, targeting mTOR expression may serve as a novel strategy for the management of refractory RMS. As important modulators of mTOR signaling, lncRNAs can modulate mTOR activity in several ways [ 20 ]. Thus, here, we examined whether the lncRNAs that mediate the function of GEFT also promote RMS progression by functioning as ceRNAs to regulate mTOR expression. We determined that lnc-PSMA8-1, one of the four GEFT-activated lncRNAs, positively regulated mTOR expression in RMS cell lines and was expressed mainly in the cytoplasm. According to the ceRNA hypothesis, lnc-PSMA8-1 possibly acts as a ceRNA to indirectly regulate mTOR expression. The bioinformatics results showed that miR-144-3p may play a bridging role between lnc-PSMA8-1 and mTOR. Related studies have confirmed that multiple target sequences, including the 3’-UTR of mTOR, are regulated by miR-144-3p in several tumor types. For example, Huo et al. [ 38 ] revealed that mTOR expression was downregulated by miR-144-3p in human salivary adenoid carcinoma, inhibiting cell proliferation and inducing apoptosis. Iwaya et al. [ 39 ] demonstrated that the progression of colorectal cancer was associated with the downregulation of miR-144, which targets mTOR. Ren et al. [ 40 ] revealed that miR-144 had a suppressive effect on the proliferation of osteosarcoma cells and induced apoptosis through the direct regulation of mTOR expression. Hence, we assessed whether lnc-PSMA8-1, activated by GEFT, modulates mTOR expression by competitively binding to miR-144-3p to regulate the biological behaviors of RMS cells. Our studies revealed that the expression of lnc-PSMA8-1, miR-144-3p, and mTOR in RMS tissues was consistent with the expression patterns suggested by a ceRNA-based lncRNA–miRNA–mRNA regulatory network. Mechanistic verification confirmed that lnc-PSMA8-1 modulated mTOR expression by competitively binding to miR-144-3p. The results of cell functional assays suggested that lnc-PSMA8-1 promoted cell proliferation, invasion, and migration and inhibited apoptosis in RMS cell lines by sponging miR-144-3p to regulate mTOR activity. These results collectively suggest that lnc-PSMA8-1 promotes RMS progression through competitively binding to miR-144-3p to regulate the expression of mTOR. Notably, the ceRNA hypothesis posits that all types of RNA transcripts interact via miRNA response elements [ 31 ]. Therefore, studies on lncRNAs acting as ceRNAs have primarily focused on the prediction and identification of lncRNA-targeted miRNAs. However, an often overlooked concept is that a ceRNA’s subcellular localization affects its accessibility to miRNAs. miRNAs are localized primarily in the cytoplasm, whereas lncRNAs can perform biological functions in both the nucleus and the cytoplasm [ 41 – 44 ]. lncRNAs with a nuclear localization typically control pretranscriptional or transcriptional processes. lncRNAs localized in the cytoplasm often act as ceRNAs that sponge miRNAs, thereby indirectly controlling the expression of target mRNAs at the posttranscriptional level [ 45 ]. Therefore, determining the subcellular localization of lncRNAs is necessary.
The results of the bioinformatic analysis and cell fractionation assays in the present study confirmed that lnc-PSMA8-1 is localized primarily in the cytoplasm, where it regulates mTOR at the posttranscriptional level. Our study thus shows that lnc-PSMA8-1 is an important modulator of mTOR.
Conclusion Accordingly, our research demonstrated that lnc-PSMA8-1 is a key regulator of GEFT signaling pathways: lnc-PSMA8-1 is activated by GEFT and promotes RMS progression by functioning as a ceRNA that competitively binds miR-144-3p to indirectly regulate the expression of mTOR (Fig. 9 ). lnc-PSMA8-1 could therefore be an ideal and promising pharmacological target for therapeutic development in RMS.
Background GEFT is a key regulator of tumorigenesis in rhabdomyosarcoma (RMS), and overexpression of GEFT is significantly correlated with distant metastasis, lymph node metastasis, and a poor prognosis, yet the underlying molecular mechanism is still poorly understood. This study aimed to investigate and validate the molecular mechanism of GEFT-activated lncRNAs in regulating mTOR expression to promote the progression of RMS. Methods GEFT-regulated lncRNAs were identified through microarray analysis. The effects of GEFT-regulated lncRNAs on the proliferation, apoptosis, invasion, and migration of RMS cells were confirmed through cell functional experiments. The target miRNAs of GEFT-activated lncRNAs in the regulation of mTOR expression were predicted by bioinformatics analysis combined with quantitative real-time polymerase chain reaction (qRT–PCR) analysis. The expression of lnc-PSMA8-1, miR-144-3p, and mTOR was measured by qRT–PCR in RMS tissue samples and cell lines. The regulatory mechanisms of the lnc-PSMA8-1-miR-144-3p-mTOR signaling axis were verified by RNA-binding protein immunoprecipitation (RIP), a luciferase reporter assay, qRT–PCR analysis, Western blot analysis, and cell functional experiments. Results The microarray-based analysis identified 31 differentially expressed lncRNAs (fold change > 2.0, P < 0.05). Silencing the 4 upregulated lncRNAs (lnc-CEACAM19-1, lnc-VWCE-2, lnc-GPX7-1, and lnc-PSMA8-1) and overexpressing the downregulated lnc-FAM59A-1 inhibited the proliferation, invasion, and migration and induced the apoptosis of RMS cells. Among the factors analyzed, the expression of lnc-PSMA8-1, miR-144-3p, and mTOR in RMS tissue samples and cells was consistent with the correlations among their expression indicated by the lncRNA–miRNA–mRNA regulatory network based on the ceRNA hypothesis. lnc-PSMA8-1 promoted RMS progression by competitively binding to miR-144-3p to regulate mTOR expression. Conclusion Our research demonstrated that lnc-PSMA8-1 was activated by GEFT and that the former positively regulated mTOR expression by sponging miR-144-3p to promote the progression of RMS. Therefore, targeting this network may constitute a potential therapeutic approach for the management of RMS. Supplementary Information The online version contains supplementary material available at 10.1186/s12885-023-11798-y. Keywords
Electronic supplementary material Below is the link to the electronic supplementary material.
Abbreviations ceRNA: Competitive endogenous RNA; EMT: Epithelial-mesenchymal transition; GEFT: Guanine nucleotide exchange factor T; lncRNA: Long noncoding RNA; mTOR: Mechanistic target of rapamycin kinase; qRT–PCR: Quantitative real-time polymerase chain reaction; RIP: RNA-binding protein immunoprecipitation; RMS: Rhabdomyosarcoma Author contributions CL and FL conceived and supervised the project. LM and HS designed the methods. LM, HS, ZL, XW, and QL carried out the experimental work. LM, HS and QL analyzed the data. LM and HS wrote the manuscript. ZZ and CL reviewed and edited the manuscript. All authors discussed the results and commented on the paper. Funding This work was supported by the National Natural Science Foundation of China (No. 81960485, 81660441 and 82060487) and a city-school joint funding project (No. 202201020104). Data availability The original contributions presented in the study are included in the article/supplementary materials; further inquiries can be directed to the corresponding author. Declarations Ethics approval and consent to participate All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. The study was approved by the Ethics Committee of Shihezi University School of Medicine (No. 2019-021-01). Informed consent was obtained from all individual participants included in the study. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Cancer. 2024 Jan 15; 24:79
oa_package/e7/a3/PMC10789031.tar.gz
PMC10789032
0
Introduction Treatment with biologic drugs has for the last two decades transformed the prognosis of rheumatoid arthritis (RA), an autoimmune destructive joint disease. The B cell depleting therapy, rituximab, has been particularly successful in autoantibody positive RA patients, providing convincing affirmation of the involvement of B cells in RA pathogenesis [ 1 ]. The identification of the RA-associated autoantibodies, rheumatoid factor (RF) and anti-citrullinated protein antibodies (ACPA), has led to the discovery that these autoantibodies can be detected many years prior to clinical onset [ 2 , 3 ]. Thus, the break of B cell tolerance occurs long before clinically detectable joint inflammation, which raises the possibility that changes to the composition of a patient’s B cell pool could give vital clues to the nature of the approaching autoimmune pathogenesis at the point of diagnosis. The division of B cells into phenotypic subsets using various cell surface markers commonly relies on IgD and CD27, where the latter is considered a marker for memory B cells (MBCs) and the presence or absence of IgD defines their status as unswitched or switched, respectively [ 4 ]. Only a handful of studies have analysed the composition of the B cell population in the peripheral blood (PB) of early RA (eRA) patients, i.e. RA patients with disease duration less than 1 year according to EULAR definitions [ 5 ], and most report deficits relative to healthy donors (HDs) in both switched and unswitched MBCs [ 6 – 8 ]. There is also evidence that a decrease in CD27 + MBCs can be detected already a year prior to disease debut [ 9 ]. A CD21 −/low B cell population is expanded in chronic inflammatory states, e.g. in various autoimmune diseases such as Sjögren’s syndrome and systemic lupus erythematosus [ 10 , 11 ], in chronic infectious diseases such as HIV [ 12 ], and in immunodeficiencies such as CVID [ 13 ]. However, the definition of the B cell population differs between these studies, which in part could be due to the studied diseases. We and others have shown that the CD21 −/low fraction of B cells and its subsets are expanded in patients with established (est) RA [ 8 , 14 – 18 ], and in one of these studies, a positive correlation between disease activity and a CD21 −/low subset that was CD27 − IgD − (double negative, DN) CD11c + was reported [ 18 ]. In our study on female patients with ACPA and/or RF positive estRA [ 16 ], we found the frequency of the CD21 −/low DN subset to be elevated relative to the same population in HDs. In addition, the frequency of CD21 −/low DN in PB also correlated with radiographic joint destruction, suggesting a role in the pathogenesis. As for eRA, an increase in the frequency of the CD21 −/low CD38 − subset has been described [ 8 ], whereas another study found no differences in the proportion of CD21 −/low CD11c + in eRA compared to HD [ 19 ]. Thus, more information is needed on the composition of the B cell compartment in eRA and potential associations with clinical parameters such as disease activity and disease severity. Here, we have analysed the associations between B cell subsets and clinical parameters including disease activity and joint damage in a large cohort of patients with eRA. Our aim was to provide insight into RA disease pathogenesis and possible therapeutic targets.
Materials and methods Patients and healthy donors Seventy-six patients with newly diagnosed RA, according to the American College of Rheumatology/European League Against Rheumatism 2010 criteria, were included in the study (Table 1 ). Patients who met the following inclusion criteria were eligible for the study: ≥ 18 years old, at least 2 swollen joints and 2 tender joints, RF-positive or ACPA-positive or C-reactive protein (CRP) ≥ 10 mg/L, at least moderate disease activity (> 3.2) according to the composite Disease Activity Score 28 (DAS28)-CRP, duration of symptoms (retrospective patient-reported pain in the joints) < 24 months, and no treatment with disease modifying anti-rheumatic drugs (DMARDs) or prednisolone. The patients were recruited at the rheumatology clinic at Sahlgrenska University Hospital in Gothenburg as well as the rheumatology clinics at the university hospitals in Malmö and Lund. Blood samples were taken from the patients within 1 week of diagnosis of RA. The patient group was compared to a group of twenty-eight age- and sex-matched healthy donors (HD) (Table 1 ). The study was approved by the regional ethics committees of Gothenburg and Lund, Sweden, and all patients signed an informed consent form. Synovial fluid from patients with estRA ( n = 5) was collected at the rheumatology clinic at Sahlgrenska University Hospital in Gothenburg. We also received information on age, sex and antibody status; ethical permission did not allow for obtaining further clinical information. Clinical disease assessments Disease activity was evaluated by assessing the following parameters: Swollen Joint Count (SJC28), Tender Joint Count (TJC28), Disease Activity Score–Erythrocyte Sedimentation Rate (DAS28-ESR), DAS28-CRP, Clinical Disease Activity Index (CDAI), CRP and ESR. ACPA positivity was determined by a multiplexed anti-CCP test (BioPlex; Bio-Rad, Hercules, CA, USA), and RF positivity was determined by nephelometry (Beckman Coulter, Brea, CA, USA). Patients with ≥ 20 IU/ml ACPAs or RF in the serum were considered ACPA or RF positive, respectively. For thirty-two patients, radiographs of hands and feet were taken at the time of diagnosis and evaluated by a certified assessor, blinded to clinical data, autoantibody and B cell status. The modified Sharp van der Heijde score (mSHS, 0–448), including 16 areas for erosions and 15 areas for joint space narrowing in each hand and 6 areas (for both erosions and joint space narrowing) in each foot, was used [ 20 ]. Flow cytometry Peripheral blood and synovial fluid mononuclear cells (PBMCs and SFMCs) were isolated from whole blood and synovial fluid, respectively, using Lymphoprep (Axis-Shield, Oslo, Norway). SFMCs were frozen in FBS containing 10% DMSO and stored at − 150 °C for between 13 and 30 months. SFMCs were thawed and washed with PBS. For both SFMCs and PBMCs, Fc receptors were blocked with 1% mouse serum (in-house) for 15 min on ice, and the cells were subsequently surface stained at a final concentration of 0.5–1 × 10^6 cells in 100 μl for 30 min on ice. The antibodies and dilutions used in the flow cytometry analysis are shown in Supplementary Table 1 . Cells were acquired on BD FACSCanto II and BD FACSLyric instruments. Data were analysed using FlowJo software (Tree Star, Ashland, OR, USA).
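The composite disease activity indices recorded above (DAS28-ESR, DAS28-CRP and CDAI) combine joint counts, an acute-phase reactant and global assessments according to fixed published formulas. The sketch below implements their widely used forms for illustration only; it is not code from the study, and the example inputs are invented.

import math

def das28_esr(tjc28, sjc28, esr, gh):
    # esr in mm/h; gh = patient general health VAS, 0-100 mm.
    return (0.56 * math.sqrt(tjc28) + 0.28 * math.sqrt(sjc28)
            + 0.70 * math.log(esr) + 0.014 * gh)

def das28_crp(tjc28, sjc28, crp, gh):
    # crp in mg/L.
    return (0.56 * math.sqrt(tjc28) + 0.28 * math.sqrt(sjc28)
            + 0.36 * math.log(crp + 1) + 0.014 * gh + 0.96)

def cdai(tjc28, sjc28, pga, ega):
    # pga/ega: patient and evaluator global assessments on 0-10 scales.
    return tjc28 + sjc28 + pga + ega

# Cohort-like example: TJC28 = 9, SJC28 = 9, CRP = 15 mg/L, GH = 50 mm.
print(round(das28_crp(9, 9, 15, 50), 1))  # 5.2, above the > 3.2 inclusion cutoff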
Gating strategy for flow cytometry B cells were identified in PBMCs or SFMCs as single lymphocytes expressing CD19, which were then divided according to their expression of the CD21 coreceptor into CD21 + and CD21 − populations (Fig. 1 A). By gating for CD27 vs IgD, we defined four B cell subsets: naive and transitional cells (NAV, CD27 − IgD + ), switched MBCs (SWM, CD27 + IgD − ), unswitched MBCs (USW, CD27 + IgD + ) and double-negative MBCs (DN, CD27 − IgD − ); these populations were identifiable in both CD21 + and CD21 − populations. Statistical analysis The relationships between eRA and healthy donors, and between B cell subset proportions and clinical parameters, were assessed by means of multivariate factor analysis. Two-class orthogonal projections to latent structures discriminant analysis (OPLS-DA) was used to examine whether eRA could be discriminated from the healthy donors based on the different B cell subset proportions. Data were normalized using a log transformation and were further scaled to unit variance (by dividing each variable by its standard deviation) so that all the variables were given an equal weight regardless of their absolute value. The loading vectors were normalized to length 1. OPLS model performance was assessed according to R2 (amount of variation explained) and Q2 (how well the outcome can be predicted by the model in a cross-validation sample). The aforementioned statistical analyses were conducted in SIMCA version 17.0.1 (Umetrics, Umeå, Sweden). B cell populations were compared between two groups using either paired t tests or Mann–Whitney tests; for comparisons of more than two groups, the Kruskal–Wallis test or the Friedman test with Dunn’s multiple comparisons was used. Associations between CD21 −/low DN and radiological measures, i.e. mSHS, joint space narrowing score (JSN) and erosion score (ES), were examined in linear regression models that were adjusted for age, sex, autoantibody status and smoking, if significant. Statistical analyses were conducted with SAS 9.4 (SAS Institute Inc., Cary, NC, USA). p -values < 0.05 were considered statistically significant.
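The preprocessing described above, a log transformation followed by unit variance scaling, is straightforward to reproduce; the OPLS-DA model itself was fitted in SIMCA, so only the scaling step is sketched here, on an invented matrix of subset proportions. Mean-centering, which unit variance scaling conventionally includes, is applied as well.

import numpy as np

# Invented matrix: rows = subjects, columns = B cell subset proportions (%).
X = np.array([[4.2, 11.0, 3.1],
              [5.9,  8.5, 2.4],
              [3.3, 14.2, 4.0]])

X_log = np.log(X)  # log transformation
X_uv = (X_log - X_log.mean(axis=0)) / X_log.std(axis=0, ddof=1)
# Each variable now has SD = 1, so all variables carry equal weight in the
# model regardless of their absolute magnitude.
print(X_uv.std(axis=0, ddof=1))  # [1. 1. 1.]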
Results Characteristics of the study population Seventy-six eRA patients and 28 age- and sex-matched HD were included in the study. The participant demographics are summarized in Table 1 . Both eRA patients and HD were mostly women, 71% and 68%, respectively. The mean tender joint count 28 (TJC28) was 9 (± 6), and the mean swollen joint count 28 (SJC28) was 9 (± 5). The mean disease activity score 28-CRP (DAS28-CRP) of the cohort was 5.0. The majority of eRA patients were ACPA and/or RF positive (68%). Only 14% of the eRA patients were smokers. Around 59% of eRA patients had bone erosions and 63% had cartilage loss on radiographs of hands and feet taken at the time of diagnosis. The B cell compartment in eRA patients is disturbed compared to controls First, we asked whether the B cell compartment in eRA patients differed from that in HD. To do so, flow cytometric evaluation of the B cell compartment was conducted whereby B cells (CD19 + ) were divided into CD21 + and CD21 −/low populations, and using the phenotypic markers CD27 and IgD, these were further divided into the following established cell subsets, i.e. switched memory (SWM, CD27 + IgD − ), unswitched memory (USW, CD27 + IgD + ), naïve and transitional (NAV, CD27 − IgD + ) and double-negative cells (DN, CD27 − IgD − ) (Fig. 1 A). The frequency of total B cells (CD19 + cells) was similar in eRA patients and HD (Fig. 1 B), as were the frequencies of the total CD21 + and CD21 −/low populations (Fig. 1 C and D). OPLS-DA was used to determine whether any subsets of these B cell populations were associated with eRA. The B cell subsets that showed the strongest association with eRA (positive or negative) are displayed in the column plot (Fig. 1 E). The CD21 + cell subsets with the strongest positive associations with eRA were the NAV cells, and those with the strongest negative association were the USW and SWM cells (Fig. 1 E). This was confirmed with univariate analyses, which demonstrated that the frequency of CD21 + NAV cells was increased and the frequencies of CD21 + USW and CD21 + SWM cells were decreased in eRA compared to HD (Fig. 1 F). Looking at the CD21 −/low subsets in the OPLS-DA, we found that the DN cells had the strongest positive and the USW cells the strongest negative association with eRA (Fig. 1 E). Further univariate analysis did not reach significance for the CD21 −/low DN cells but confirmed the CD21 −/low USW association, i.e. that the frequency of CD21 −/low USW was significantly decreased in eRA patients compared to HD (Fig. 1 G). Frequency of CD21 −/low DN cells correlates with joint space narrowing in eRA Next, we asked whether any of the aforementioned B cell subsets were associated with eRA clinical features, i.e. joint destruction, disease activity and autoantibody status, as well as age and sex. We have previously shown that the frequency of CD21 −/low DN cells correlated with joint damage in estRA, and our objective was to investigate whether we could detect a similar relationship in eRA. The OPLS model displayed associations between CD21 −/low DN cells and various clinical factors (Fig. 2 A). Total mSHS, consisting of the joint space narrowing score (JSN) and erosion score (ES), was positively associated with CD21 −/low DN cells after adjusting for age (Table 2 ). Notably, the frequency of CD21 −/low DN was significantly associated with JSN ( p = 0.03), linking CD21 −/low DN cells with joint damage in eRA (Table 2 ). Sex was associated neither with CD21 −/low DN cells nor with mSHS and its composites.
We did not find any association between ACPA or RF titres and measures of joint destruction, i.e. ES, JSN and mSHS, indicating that the autoantibodies did not have a confounding effect on the association between CD21 −/low DN cells and mSHS or JSN. This supports the hypothesis that ACPA and RF do not influence the association of CD21 −/low DN cells with joint destruction. The OPLS-DA models for the CD21 + cell subsets, i.e. CD21 + NAV, SWM and USW, and for the CD21 −/low USW subset did not show any association with clinical factors. Taken together, our results suggest that CD21 −/low DN cells are involved in joint destruction in eRA patients. CD21 −/low DN cells in synovial fluid co-express RANKL and CD11c To investigate whether CD21 −/low DN cells had the potential to directly drive the mechanism of joint damage, SF from the inflamed joints of estRA patients was assessed for the presence of CD21 −/low DN cells and their phenotype (Fig. 2 B). This included the surface expression of receptor activator of the nuclear factor κB ligand (RANKL), which is known to drive osteoclastogenesis and contribute to joint destruction. We confirmed previous results that B cells in SF were predominantly CD21 −/low (Fig. 2 C) [ 16 , 21 ]. Around 50% of the CD21 −/low cells were DN and 30% were SWM (Fig. 2 D). Further exploration of the CD21 −/low DN cells revealed that they largely expressed the integrin CD11c and the transcription factor Tbet, with the majority of CD21 −/low CD11c + Tbet + B cells coming from the DN compartment (Fig. 2 E, F). Moreover, RANKL + B cells were exclusively CD21 −/low and could be further characterized by the co-expression of the integrin CD11c (Fig. 2 G). Furthermore, 60% of CD21 −/low CD11c + RANKL + cells were from the DN subset, while SWM cells contributed only 30% of the RANKL- and CD11c-expressing SF B cells (Fig. 2 H). In conclusion, our analyses revealed an altered B cell compartment in eRA, with CD21 −/low DN cells linked to joint space narrowing, and an expansion of CD21 −/low DN cells in SF from the inflamed estRA joint, where they expressed CD11c and Tbet. This SF B cell subset was further characterized by the expression of RANKL, demonstrating a capacity to promote osteoclastogenesis and revealing a possible role in the pathogenesis of RA.
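The adjusted associations reported in this section correspond to linear models of the form described in the Methods, with the radiological score regressed on the CD21 −/low DN frequency plus covariates. A sketch of that model structure with simulated data follows; the variable names, effect sizes, and simulated values are invented, not the study's.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 32  # radiographs were available for thirty-two patients
d = pd.DataFrame({
    "dn_freq": rng.uniform(0.5, 8.0, n),   # % CD21-/low DN of B cells
    "age":     rng.uniform(25, 75, n),
    "sex":     rng.integers(0, 2, n),      # 0 = male, 1 = female
    "acpa":    rng.integers(0, 2, n),      # autoantibody status
})
# Simulated outcome: JSN depends on DN frequency and age plus noise.
d["jsn"] = 0.6 * d["dn_freq"] + 0.05 * d["age"] + rng.normal(0, 1, n)

model = smf.ols("jsn ~ dn_freq + age + sex + acpa", data=d).fit()
print(model.params["dn_freq"], model.pvalues["dn_freq"])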
Discussion In this study, the enrolment of untreated eRA patients aimed to limit the confounding effects of chronic inflammation and immune suppressive treatments on the B cell compartment and to provide an accurate and robust analysis of the B cell population in the early phase of RA. We have analysed the main B cell subsets in PB from eRA patients and demonstrate a significant association between the frequency of CD21 −/low DN MBCs and joint destruction. Amongst the relatively few studies regarding B cell subsets in patients with eRA, our current report is unique as it separates the B cell populations according to expression levels of complement receptor 2, CD21. Lack of or low levels of CD21 on B cells are characteristic of pathologies related to some chronic infections and autoimmunity [ 22 ]. We have both identified changes in the composition of the B cell population and additionally analysed those in relation to clinical outcomes. CD21 −/low B cell subsets have been linked to autoimmune diseases [ 10 , 11 , 14 ], chronic infections [ 12 , 23 – 25 ] and immunodeficiency [ 13 , 14 ] and comprise around 5% of the B cell population in healthy donors [ 26 ] and around 10% in seropositive estRA [ 16 ]. Our finding that the CD21 + B cell population contains a lower frequency of SWM and USW in eRA patients than in HD is in concordance with previous works [ 6 – 9 ]. Moreover, in agreement with others studying estRA [ 27 – 29 ], we did not find any differences between eRA patients and HD in the frequency of the CD21 + DN subset; this outcome notwithstanding, there are reports in both eRA [ 30 ] and estRA [ 30 – 32 ] of increases in the frequency of CD19 + DN. In a previous study, a negative correlation between female sex and MBCs as well as between age and MBCs, transitional B cells and plasmablasts was described, but no association with the CD21 −/low subset was shown [ 8 ]. We found no associations between sex and any of the analysed B cell subsets, but there was a positive association between CD21 −/low DN cells and age in the current study. The frequency of DN cells as a proportion of the total B cells has been linked to ageing: in one study, age correlated positively with the CD19 + DN subset [ 33 ], and in a second, the CD21 −/low CD27 + B cell subset correlated positively with age in older women with RA [ 34 ]. Clearly, there is scope for further investigation of the relationship of different B cell subsets with age, in both health and disease. We did not find any correlations between disease activity and any particular B cell subset. Studies have reported varying findings regarding B cell associations with DAS28, including positive correlations with CD21 − DN CD11c + B cells [ 18 ], CD86 + B cells [ 7 ] and plasmablast frequencies in estRA and treated early RA [ 8 , 32 ]. Additionally, opinion is divided on whether or not the frequencies of CD19 + DN cell subsets from patients with estRA correlate with DAS28 [ 28 , 31 ]. We have previously shown in estRA that the frequency of CD21 −/low DN cells correlated significantly with joint destruction as measured by mSHS [ 16 ]. In the present study, it is clear that even in eRA patients with very little (or no) joint damage, CD21 −/low DN cells still associate with cartilage destruction. The implication of a specific B cell subset in the disease pathology at this point offers a compelling direction for further studies.
In this study, the CD21 −/low DN population dominated the B cell population in the SF of estRA patients. In addition to the DN subset, other studies have also shown SWM cells to be a significant component of the SF B cell compartment [ 35 , 36 ]. CD21 −/low B cell subsets in different chronic autoimmune and inflammatory conditions have been described using diverse markers, but a common characteristic of these subsets is the expression of CD11c and Tbet [ 22 ]. Indeed, our CD21 −/low DN SF subset was mainly CD11c + Tbet + . It has been shown that CD21 − DN CD11c + cells are able to activate fibroblast-like synoviocytes (FLS), which in turn produce matrix metalloproteinases (MMPs) and IL-6, facilitating cartilage destruction [ 18 ]. Of further interest, we found that the SF CD21 −/low DN CD11c + Tbet + subset was largely RANKL + , which suggests an additional role in bone destruction, as RANKL promotes osteoclastogenesis. This is supported by previous results showing that FcRL4 + B cells in the SF of estRA patients were also CD21 −/low , RANKL + and largely of the DN phenotype [ 35 ]. The association of CD21 −/low DN cells with joint damage at both early and later stages of RA, together with their expansion in RA joints, suggests that these cells are pathogenic and provides a possible treatment target.
Conclusions A direct role for CD21 −/low DN B cells in the destructive process in the inflamed joint, as implied by these results, is of particular interest in the context of the pathogenesis and treatment of RA.
Background Involvement of B cells in the pathogenesis of rheumatoid arthritis (RA) is supported by the presence of disease-specific autoantibodies and the efficacy of treatment directed against B cells. B cells that express low levels of or lack the B cell receptor (BCR) co-receptor CD21, CD21 −/low B cells, have been linked to autoimmune diseases, including RA. In this study, we characterized the CD21 + and CD21 −/low B cell subsets in newly diagnosed, early RA (eRA) patients and investigated whether any of the B cell subsets were associated with autoantibody status, disease activity and/or joint destruction. Methods Seventy-six eRA patients and 28 age- and sex-matched healthy donors were recruited. Multiple clinical parameters were assessed, including disease activity and radiographic joint destruction. B cell subsets were analysed in peripheral blood (PB) and synovial fluid (SF) using flow cytometry. Results Compared to healthy donors, the eRA patients displayed an elevated frequency of naïve CD21 + B cells in PB. Amongst memory B cells, eRA patients had lower frequencies of the CD21 + CD27 + subsets and CD21 −/low CD27 + IgD + subset. The only B cell subset found to associate with clinical factors was the CD21 −/low double-negative (DN, CD27 − IgD − ) cell population, linked with the joint space narrowing score, i.e. cartilage destruction. Moreover, in SF from patients with established RA, the CD21 −/low DN B cells were expanded and these cells expressed receptor activator of the nuclear factor κB ligand (RANKL). Conclusions Cartilage destruction in eRA patients was associated with an expanded proportion of CD21 −/low DN B cells in PB. The subset was also expanded in SF from established RA patients and expressed RANKL. Taken together, our results suggest a role for CD21 −/low DN in RA pathogenesis. Supplementary Information The online version contains supplementary material available at 10.1186/s13075-024-03264-2. Keywords Open access funding provided by University of Gothenburg.
Supplementary Information
Abbreviations ACPA: Anti-citrullinated protein antibody; BCR: B cell receptor; CDAI: Clinical Disease Activity Index; CRP: C-reactive protein; DAS28: Disease Activity Score, including a 28-joint count; DMARD: Disease modifying anti-rheumatic drug; DN: Double negative; eRA: Early rheumatoid arthritis; ES: Erosion score; ESR: Erythrocyte sedimentation rate; estRA: Established rheumatoid arthritis; FLS: Fibroblast-like synoviocytes; HD: Healthy donor; Ig: Immunoglobulin; JSN: Joint space narrowing score; NAV: Naive and transitional B cells; MBC: Memory B cells; MMP: Matrix metalloproteinase; mSHS: Modified Sharp van der Heijde score; NA: Not applicable; OPLS-DA: Orthogonal projections to latent structures discriminant analysis; PB: Peripheral blood; PBMC: Peripheral blood mononuclear cells; RA: Rheumatoid arthritis; RANKL: Receptor activator of the nuclear factor κB ligand; RF: Rheumatoid factor; SF: Synovial fluid; SFMC: Synovial fluid mononuclear cells; SJC: Swollen joint count; SWM: Switched memory B cells; TJC: Tender joint count; USW: Unswitched memory B cells; VAS-GH: Visual analogue scale general health Acknowledgements Not applicable. Authors’ contributions IG, ILM and AR: study conception, study design, acquisition, analysis and interpretation of data. KT: study design, acquisition, analysis and interpretation of data. SM: acquisition, analysis and interpretation of data. LJ: study design and interpretation of data. MLA and AKE: interpretation of data. KF: interpretation of radiography. All the authors were involved in drafting the article and revising it critically for important intellectual content. All the authors read and approved the final version of the manuscript. Authors’ information Not applicable. Funding Open access funding provided by University of Gothenburg. This work was supported by the following: Göteborg Medical Society, Swedish Science Research Council, Reumatikerförbundet (The Swedish Rheumatism Association), King Gustav V Stiftelse, IngaBritt och Arne Lundbergs Stiftelse, Lundgrens Stiftelse, Amlövs Stiftelse, Karolina Widerströms fond, AFA, ALF (Avtalet om läkarutbildning och forskning), the Foundation for assistance to disabled people in Skane (Stiftelsen för bistånd åt Rörelsehindrade i Skåne), Cancerfonden, Barncancerfonden. Availability of data and materials The datasets used during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate The study was performed according to the ethical principles of the WMA Helsinki Declaration. The study was approved by the Human Research Ethics Committee of the Medical Faculty, University of Gothenburg (ethical approval number: 691–12, amendment number: T270-13). Synovial fluid samples were collected (ethical approval number: S010-03, 2003–06-25, amendment number: T536-07, 2007–09-17; 459–18, 2018–01-01). All patients signed a written informed consent before entering the study. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
Arthritis Res Ther. 2024 Jan 15; 26:23
oa_package/79/9c/PMC10789032.tar.gz
PMC10789033
0
Introduction CTNNB1 neurodevelopmental disorder with spastic diplegia and visual defects (OMIM 615075, also known as NEDSDV or CTNNB1-NDD) is hallmarked by two main symptoms: cognitive impairment and exudative vitreoretinopathy. Additional manifestations of this disease include truncal hypotonia, peripheral spasticity, dystonia, behavioral problems, microcephaly, refractive errors, and strabismus. Less frequent symptoms are intrauterine growth restriction, feeding difficulties, and scoliosis [ 1 , 2 ]. CTNNB1 neurodevelopmental disorder shows autosomal dominant inheritance. It primarily develops as a consequence of de novo germline loss-of-function pathogenic variants of the catenin beta-1 (CTNNB1) gene [ 3 ]. The majority of disease-causing variants in the CTNNB1 gene are truncations caused by frameshift and nonsense variants [ 4 ]. Disease-causing variants in CTNNB1 may also lead to the development of exudative vitreoretinopathy type 7, whose clinical symptoms do not include cognitive impairment or neurological abnormalities [ 5 ]. Somatic variants of the CTNNB1 gene are linked to various tumors, such as colorectal cancer, hepatocellular carcinoma, medulloblastoma, ovarian cancer, and pilomatricoma [ 6 , 7 ]. CTNNB1 encodes beta catenin, a member of the evolutionarily conserved armadillo repeat proteins [ 8 ]. The CTNNB1 protein is a component of adherens junctions and contributes to the establishment and maintenance of epithelial layers and adhesion between cells. CTNNB1 is also involved in the WNT signaling pathway, which regulates various biological processes, including cell proliferation and cell fate determination [ 9 ]. Here, we present the case of a Hungarian patient affected by CTNNB1 neurodevelopmental disorder with a novel likely pathogenic frameshift variant, p.Ala636SerfsTer12, in the CTNNB1 gene. This likely pathogenic variant was not present in the clinically unaffected parents of the patient, suggesting its de novo nature. This result expands the known spectrum of CTNNB1 variants associated with CTNNB1 neurodevelopmental disorder.
Materials and methods Patient This study involved a 16-year-old male Hungarian patient who was born at 35 weeks gestation following an uncomplicated pregnancy. At birth, the head circumference was 31 cm, the birth weight was 2560 g and the birth length was 45 cm. During early childhood, the patient developed axial hypotonia and increased tone in all four limbs. Global developmental delay was observed. Cognitive impairment was detected, with a borderline IQ. Behavioral problems were observed, which included aggressiveness and fits of anger. The patient was of short stature, as his height was lower than the 3rd percentile. Intermittent headaches and recurrent dizziness were reported. Skull MRI of the 9-year-old patient demonstrated chronic sinusitis sphenoidalis; no other abnormality was present. Upon ophthalmologic examination, strabismus and hypermetropia were evident. During neurological examination, symptoms of a complex movement disorder were detected, with spasticity, slight ataxia, intermittent dystonia, and stereotypies. A video demonstrates the complex movement disorder of the patient (Additional Video 1 ). When the video was recorded, the head circumference was 54 cm, body weight was 49.2 kg and body length was 162.6 cm (February 1, 2023). Because of partial pituitary dysfunction, growth hormone was administered at the age of 10. The patient responded well to the treatment. The patient is the only known affected family member. He has three clinically unaffected siblings, two sisters and one brother (Fig. 1 ). Written informed consent was obtained from the parents of the patient, and genetic studies were conducted according to a protocol approved by the Hungarian National Public Health Center, in adherence with the Helsinki guidelines. The patient and his parents underwent pre- and post-test genetic counseling at the Department of Medical Genetics, University of Szeged (Szeged, Hungary). DNA extraction Genomic DNA was extracted from venous blood collected with the anticoagulant EDTA using the DNeasy® Blood & Tissue Kit (QIAGEN, Germany) as described in the manufacturer’s instructions. For quantification, a Qubit Fluorometric Quantification instrument was used according to the manufacturer’s instructions. Whole exome sequencing The patient’s genotype was determined using next-generation sequencing (NGS). Library preparation was carried out using the SureSelectQXT Reagent Kit (Agilent Technologies, Santa Clara, CA). Pooled libraries were sequenced on an Illumina NextSeq 550 NGS platform using the 300-cycle Mid Output Kit v2.5 (Illumina, Inc., San Diego, CA, USA). Adapter-trimmed and Q30-filtered paired-end reads were aligned to the hg19 human reference genome using the Burrows–Wheeler Aligner (BWA). Duplicates were marked using the Picard software package. The Genome Analysis Toolkit (GATK) was used for variant calling (BaseSpace BWA Enrichment Workflow v2.1.1 with BWA 0.7.7-isis-1.0.0, Picard 1.79 and GATK v1.6-23-gf0210b3). The mean on-target coverage achieved from sequencing was 71× per base, and the average percentages of targets covered at greater than or equal to 30× were 96% and 90%, respectively. Variants passed by the GATK filter were used for downstream analysis and annotated using the ANNOVAR software tool (version 2017 July 17) [ 10 ]. SNP testing was performed as follows: high-quality sequences were aligned with the human reference genome (GRCh37/hg19) to identify sequence variants. The detected variations were analyzed and annotated.
Variants were filtered according to read depth, allele frequency, and prevalence in genomic variant databases such as ExAC (v.0.3) and Kaviar. Variant prioritization tools (PolyPhen2, SIFT, LRT, MutationTaster, MutationAssessor) were used to predict the functional impact. For variant filtering and interpretation, the VarSome and Franklin bioinformatic platforms [ https://franklin.genoox.com ] were used according to the guidelines of the American College of Medical Genetics and Genomics [ 11 , 12 ]. The identified candidate variant was confirmed by bidirectional capillary sequencing of DNA from the patient and parents (Fig. 2 ).
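The filtering described above amounts to simple predicates over annotated variant records: sufficient read depth, an allele fraction consistent with the expected genotype, and rarity in population databases. A minimal sketch with invented records and thresholds follows; the study's exact cutoffs are not stated, so the values here are assumptions.

# Invented annotated variant records after GATK calling and ANNOVAR annotation.
variants = [
    {"id": "CTNNB1 c.1902dupG", "depth": 68, "alt_fraction": 0.48,
     "exac_af": 0.0, "kaviar_af": 0.0},
    {"id": "common SNP",        "depth": 80, "alt_fraction": 0.51,
     "exac_af": 0.12, "kaviar_af": 0.10},
]

MIN_DEPTH = 20      # assumed read-depth cutoff
MAX_POP_AF = 0.001  # assumed rarity cutoff for a dominant de novo disorder

def passes(v):
    return (v["depth"] >= MIN_DEPTH
            and 0.3 <= v["alt_fraction"] <= 0.7  # consistent with heterozygosity
            and v["exac_af"] <= MAX_POP_AF
            and v["kaviar_af"] <= MAX_POP_AF)

print([v["id"] for v in variants if passes(v)])  # only the rare candidate remains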
Results Whole exome sequencing identified a novel (c.1902dupG, p.Ala636SerfsTer12) heterozygous frameshift variant in the 12th exon of the CTNNB1 gene (3p22.1; NM_001904.4) (Fig. 2 ). This variant was not detected in the parents of the patient; thus, it is considered a de novo variant (PM6). Based on the ACMG variant classification guideline, this variant could be classified as likely pathogenic, since null variants (frameshifts) in the CTNNB1 gene are predicted to cause loss of function, which is a known mechanism of the disease. The affected exon contains 11 other pathogenic and likely pathogenic variants, and the truncated region contains 32 other pathogenic and likely pathogenic ones (PVS1); the variant was also not present in the gnomAD population databases (PM2) (Additional Table 1 ). The detected novel frameshift variant affects the last armadillo/beta-catenin-like repeats domain (ARM domain) of the encoded protein (Fig. 3 ) (UniProt Tools, https://www.uniprot.org/uniprotkb/P35222/variant-viewer ). The identified frameshift variant affects the 10th amino acid of the last 40 amino acid–long ARM domain and results in the formation of a premature termination codon 12 amino acids after the variant, which may cause either nonsense-mediated RNA decay or a severely mutated protein with a largely truncated ARM domain and a missing 3’ end. ARM domains in general are composed of tandem repeats that form a superhelix of helices. They may mediate the interaction of CTNNB1 with its ligands; therefore, we hypothesized that the identified novel variant has a severe loss-of-function impact on protein function. The region of the variant on the CTNNB1 protein exhibits high evolutionary conservation (Additional Fig. 1 ) (Aminode, http://www.aminode.org/search ). In silico functional predictions using MT, DANN, BayesDel, SpliceAI, GERP, GenoCanyon, fitCons, and CADD also suggest that the newly identified frameshift variant has severe consequences, further supporting its putative disease-causing role in the development of the observed CTNNB1 neurodevelopmental disorder (Additional Table 1 ).
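To see why a single duplicated base is so disruptive, consider a toy coding sequence: inserting one G shifts the reading frame, changes every downstream codon and soon creates a premature stop. The sequence below is invented for illustration and is not the CTNNB1 transcript; the codon table is restricted to the codons actually used.

# Toy CDS (invented, not CTNNB1): ATG GCT AAA AAG TTT GAA TAA
CODONS = {"ATG": "M", "GCT": "A", "AAA": "K", "AAG": "K", "TTT": "F",
          "GAA": "E", "GTT": "V", "TGA": "*", "TAA": "*"}

def translate(seq):
    protein = []
    for i in range(0, len(seq) - 2, 3):
        aa = CODONS.get(seq[i:i + 3], "X")
        protein.append(aa)
        if aa == "*":  # stop translating at the first stop codon
            break
    return "".join(protein)

wt = "ATGGCTAAAAAGTTTGAATAA"
print(translate(wt))   # MAKKFE*  (stop only at the natural end)

# Duplicate the G at position 12, mimicking a single-base duplication.
mut = wt[:12] + "G" + wt[12:]
print(translate(mut))  # MAKKV*   (frameshift creates a premature stop)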
Discussion We describe a Hungarian patient affected by CTNNB1 neurodevelopmental disorder, which developed as a consequence of a newly identified, de novo frameshift variant in the CTNNB1 gene (c.1902dupG, p.Ala636SerfsTer12). The patient is heterozygous for this variant. Both the clinical symptoms of the affected patient and the identified de novo heterozygous truncating variant of the CTNNB1 gene are consistent with the typical symptoms of this disease and the most common genetic scenario involved in disease development [ 4 ]. Truncating variants, including frameshift and nonsense variants such as the newly identified p.Ala636SerfsTer12, are the most common type of disease-causing variants of the gene. Their pathogenic effect is probably the result of nonsense-mediated RNA decay or the absence of one or more ARM domains in the encoded protein. The ARM domain is an approximately 40 amino acid–long tandemly repeated sequence motif, which mediates the interaction of the CTNNB1 protein with its ligands. Disruption of the interactions of the CTNNB1 protein with its ligands may lead to impaired synaptic plasticity, impaired neuronal network connectivity, brain malformations, and consequently, to the development of CTNNB1 neurodevelopmental disorder [ 13 ]. Concerning the less frequent symptoms of the CTNNB1 neurodevelopmental disorder–related phenotype, the reported Hungarian patient has partial pituitary dysfunction and consequently underwent growth hormone administration. Here, we emphasize that, although it is a less frequent manifestation, pathogenic variants of the CTNNB1 gene can still lead to the development of pituitary dysfunction. Variants and genotype–phenotype correlations of 404 patients with CTNNB1 neurodevelopmental disorder suggest that the most common clinical feature in patients with pathogenic or likely pathogenic variants of the CTNNB1 gene is mild-to-profound cognitive impairment. Exudative vitreoretinopathy, truncal hypotonia, peripheral spasticity, dystonia, behavior problems, microcephaly, refractive errors and strabismus are also frequently present. Less common clinical symptoms include intrauterine growth restriction, feeding difficulties, and scoliosis [ 4 ]. The above list of symptoms reflects well the clinical heterogeneity of the disease; thus, genetic testing might contribute to the establishment of the clinical diagnosis. Although there is still no causative treatment for this severe condition, clinical and preclinical studies have identified several drugs targeting the WNT signaling pathway, such as lithium, SB216763, sulindac, and PPARγ agonists, which may have therapeutic effects in NDDs [ 14 ]. Patients with CTNNB1 neurodevelopmental disorder are usually under the supportive care of a multidisciplinary team. Therefore, genetic screening has a significant impact on the affected families by facilitating family planning and the birth of healthy children. However, most cases are de novo, and the disease is rarely inherited from an affected parent [ 1 , 2 ]. Germline mosaicism was reported in one family with two affected offspring and healthy parents [ 15 ]. If the CTNNB1 pathogenic variant found in the proband cannot be detected in leukocyte DNA of either parent, the recurrence risk to siblings is estimated to be 1% because of the possibility of parental germline mosaicism [ 15 ]. Prenatal and/or preimplantation genetic testing should be available for affected families.
In this study, we identified a novel variant underlying the clinically heterogeneous but genetically homogeneous CTNNB1 neurodevelopmental disorder. We hope that, in the near future, such genetic discoveries will completely define the genetic background of CTNNB1 neurodevelopmental disorder and provide a solid basis for studies developing novel therapeutic modalities for patients.
Purpose We aimed to elucidate the underlying disease in a Hungarian family with only one affected family member, a 16-year-old male Hungarian patient, who developed global developmental delay, cognitive impairment, behavioral problems, short stature, intermittent headaches, recurrent dizziness, strabismus, hypermetropia, a complex movement disorder and partial pituitary dysfunction. After years of detailed clinical investigations and careful pediatric care, the exact diagnosis of the patient and the cause of the disease were still unknown. Methods We performed whole exome sequencing (WES) in order to investigate whether the affected patient is suffering from a rare monogenic disease. Results Using WES, we identified a novel, de novo frameshift variant (c.1902dupG, p.Ala636SerfsTer12) of the catenin beta-1 (CTNNB1) gene. Assessment of the novel CTNNB1 variant suggested that it is likely pathogenic and raised the diagnosis of CTNNB1 neurodevelopmental disorder (OMIM 615075). Conclusions Our manuscript may contribute to a better understanding of the genetic background of the recently discovered CTNNB1 neurodevelopmental disorder and raise awareness among clinicians and geneticists. The affected Hungarian family demonstrates that it is difficult to establish the diagnosis based on the results of the clinical workup alone and that high-throughput genetic screening may help to solve such complex cases. What is known • CTNNB1 neurodevelopmental disorder is featured by two main symptoms: cognitive impairment and exudative vitreoretinopathy. Additional manifestations include truncal hypotonia, peripheral spasticity, dystonia, behavioral problems, microcephaly, refractive errors, strabismus, intrauterine growth restriction, feeding difficulties, and scoliosis. • CTNNB1 neurodevelopmental disorder develops as a consequence of de novo germline loss-of-function pathogenic variants of the catenin beta-1 (CTNNB1) gene. The CTNNB1 protein is a component of adherens junctions and contributes to the establishment and maintenance of epithelial layers and adhesion between cells. CTNNB1 is also involved in the WNT signaling pathway regulating several biological processes including cell proliferation and cell fate determination. What is new • The c.1902dupG, p.Ala636SerfsTer12 variant is a novel heterozygous frameshift variant in the 12th exon of the CTNNB1 gene. Based on the ACMG variant classification guideline, this variant could be classified as likely pathogenic, since null variants (frameshifts) in the CTNNB1 gene are predicted to cause loss of function, which is a known mechanism of the disease. • The c.1902dupG, p.Ala636SerfsTer12 variant affects the 10th amino acid of the last 40 amino acid–long ARM domain of the encoded protein and, after 12 amino acids, results in the formation of a premature termination codon, which may cause either nonsense-mediated RNA decay or a severely mutated protein with a largely truncated ARM domain and a missing 3’ end. ARM domains may mediate the interaction of CTNNB1 with its ligands; therefore, we hypothesized that the identified novel variant has a severe loss-of-function impact on protein function. Supplementary Information The online version contains supplementary material available at 10.1186/s12887-023-04509-w. Open access funding provided by University of Szeged.
Electronic supplementary material Below is the link to the electronic supplementary material.
Acknowledgements We are grateful to all individuals who participated in our study. Author contributions Alíz Zimmermann, Balázs Gellén, András Salamon, László Sztriha, Péter Klivényi and Márta Széll contributed to the study conception and design. Material preparation, data collection and analysis were performed by Barbara Anna Bokor, Dóra Nagy and Margit Pál. The first draft of the manuscript was written by Nikoletta Nagy and Márta Széll. All authors read and approved the final manuscript. Funding This research was supported by the EFOP-3.6.1-16-2016-00008 grant and by the GINOP-2.3.2-15-2016-00039 grant. PK was supported by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund under the TKP2021-EGA-32 funding scheme. University of Szeged Open Access Fund, grant number 6338. Open access funding provided by University of Szeged. Data availability The Department of Medical Genetics, University of Szeged, Hungary is registered in ClinVar and available at the following open access link: https://www.ncbi.nlm.nih.gov/clinvar/submitters/505686/ . The identified likely pathogenic variant reported in this study and the associated phenotype are registered in ClinVar (SUB13804099) and available at the following open access link: VCV002579643.1 - ClinVar - NCBI (nih.gov). Declarations Ethics approval and consent to participate The clinical and genetic investigations in the affected Hungarian family were conducted according to the guidelines of the Declaration of Helsinki and were approved by the Ethics Committee of the University of Szeged (58523-4/2017/EKU). Written informed consent was obtained from the parents of the affected child, and pre- and post-test genetic counselling was carried out. The parents of the affected child gave their written consent to record and upload the video of the patient's complex movement disorder. Consent to publish Informed consent was obtained from all subjects and/or their legal guardian(s) for publication of identifying information/images in an online open-access publication. Competing interests The authors declare no competing interests. Abbreviations ARM – Armadillo/beta-catenin-like repeats domain; CTNNB1 – Catenin beta-1; NDD – Neurodevelopmental disorder; CTNNB1-NDD – CTNNB1 neurodevelopmental disorder; NGS – Next generation sequencing; PPARG – Peroxisome proliferator-activated receptor gamma
CC BY
no
2024-01-16 23:45:34
BMC Pediatr. 2024 Jan 15; 24:47
oa_package/87/fd/PMC10789033.tar.gz
PMC10789034
38221617
Background Even in contexts where abortion is available on broad legal grounds (i.e., available on request or on broad social and economic grounds), barriers exist to seeking wanted abortion care. Financial, procedural, informational, and social barriers limit access to abortion services and compel people to travel for services, causing unique burdens such as lost wages, increased costs related to childcare and transportation, increased need to disclose seeking services, and delayed care [ 1 – 7 ]. Delays in accessing abortion care cause procedures to take place later in pregnancy, increasing both the cost of care and the risk of complications [ 8 ]. The literature exploring financial and logistical barriers to abortion care in Europe, especially in countries where abortion is broadly legal in the first trimester but restricted to specific circumstances thereafter, is limited [ 6 , 9 , 10 ]. Although abortion is legal in nearly all European countries (with the exception of Poland, Malta, and until 2018 and 2019 the Republic of Ireland and Northern Ireland respectively), restrictions on timing, permitted reasons, and waiting periods vary across countries [ 11 , 12 ]. In most European countries, including those in the present analysis (Italy, France, 1 Germany, Belgium, Austria, and Denmark), abortion is permitted on broad grounds in the first trimester, but highly restricted thereafter. After the first trimester, abortion is only permitted on certain grounds, most commonly when a pregnant person’s life or health is endangered, in cases of fetal anomalies, or in cases of rape and incest. Other laws include mandatory waiting periods and mandatory counseling [ 14 , 15 ]. Provider shortages (due to belief-based denial or lack of second-trimester training) and limited service provision outside of urban areas also create barriers to care [ 11 , 16 – 25 ]. Because of such barriers, pregnant people may be forced to travel to other regions of their country or to another country to seek abortion services, especially if they are seeking services later in pregnancy [ 26 ]. However, the evidence surrounding travel for abortion in Europe has primarily focused on contexts in which abortion is highly restricted throughout pregnancy, and little is known about the experiences of those travelling from countries where abortion is available on broad grounds in the first trimester [ 6 ]. Because England, the Netherlands, and Spain are among the only European countries with simplified legal access to care after the first trimester, and because of their proximity to many European countries, many people needing abortion services past the first trimester travel to these countries to seek later abortion care [ 2 , 10 , 27 , 28 ]. Cross-country travel, however, has been associated with delays in access to abortion care [ 29 ]. Travelling for abortion care incurs costs which the literature has documented to be burdensome [ 2 , 27 ]. Additionally, because abortion is broadly legal in the first trimester in many European countries, people seeking abortion may look to local resources prior to seeking care abroad. While a growing mixed-methods literature has explored people’s experiences with travel for abortion in Europe [ 10 , 26 , 30 – 32 ], little is known about how cost and in-country care seeking are associated with delays in care.
For these reasons, we investigate two potential factors that may delay care seeking: (1) care-seeking within the country of residence and (2) financial barriers to care among those travelling to England and the Netherlands for abortion care. Understanding whether these factors delay access to abortion care for this understudied population is important to inform policies and interventions to increase access to timely abortion care.
Materials and methods For this analysis, we draw on data from a multi-country, 6-year, mixed-methods study on barriers to legal abortion and travel for abortion in Europe, funded by the European Research Council. The study aimed to assess the impact of legal, procedural, and social barriers to abortion care, and to document and explore the experiences of women and pregnant people 2 who travel abroad to seek abortion services in England, the Netherlands, and Spain, as well as those who travel domestically within their country of residence in France, Italy and Spain. This analysis focuses on the quantitative survey data collected among those travelling from countries where abortion is legal on broad grounds to England and the Netherlands (n = 204). We exclude data from Spain because we recruited few participants who travelled from abroad. See Table 1 for a list of abortion laws in the countries in which participants resided. We excluded those traveling from restrictive contexts (Poland, Malta, and, at the time of our data collection, the Republic of Ireland) from the analysis for two reasons. First, in-country care seeking in restrictive contexts may be less relevant, and second, the population of travelers from restrictive contexts is distinct. Those traveling from restrictive contexts primarily sought care during their first trimester, while participants recruited from settings where abortion is legal on broad grounds were largely in the second trimester of their pregnancy. We selected for recruitment three clinics run by the British Pregnancy Advisory Service (BPAS) in England and two abortion clinics in the Netherlands that had the largest numbers of non-residents obtaining abortion care at the respective clinics in the years preceding the launch of the study. Abortion patients were eligible to participate if they were 18 years of age or older, had travelled from another European Union country to seek abortion care, and were proficient in French, Italian, English, Dutch, German or Spanish. Eligible individuals were identified by an on-site researcher and/or clinic staff and provided with a study information sheet upon their arrival at the clinic. Those interested could complete an anonymous, self-administered, tablet-based questionnaire at the clinic at any time before starting the abortion procedure, or remotely, via phone or internet, after returning to their countries of residence. Only two participants participated remotely after their procedures; these participants were excluded from this analysis. The survey covered topics such as sociodemographic information, reproductive histories, care-seeking trajectory, barriers faced in accessing abortion services in the country of residence, travel and cost in care seeking, and reasons for and experiences in seeking abortion care outside the country of residence. Recruitment spanned July 2017 to March 2019. We completed data collection prior to the withdrawal of the United Kingdom from the European Union; however, data collection started after the 2016 vote on the referendum to approve the withdrawal. We aimed to recruit 200 respondents across the full study in England and 200 respondents in the Netherlands to have sufficient power for the study’s main proposed analyses, which involved comparing respondents by the legal context of their country of residence. We aimed to recruit as many respondents as possible within each participating clinic until our country-specific sample size was reached; however, we recruited fewer respondents than anticipated.
Measures Our analysis aims to characterize the extent to which (1) financial barriers and (2) abortion care seeking within a person’s country of residence are associated with delays in abortion access. Outcome The main outcome of the analysis was delay in accessing abortion care. We defined delay as the number of weeks between when the respondent considered abortion and the day they completed the survey. We excluded respondents missing this value from the analysis (n = 24). Predictors of interest We evaluate two specific predictors of interest: (1) the difficulty of covering the costs of travel and the abortion procedure and (2) whether the respondent had contacted or visited any abortion providers in their country of residence before coming to the clinic where they were surveyed. For the first predictor, we created a composite, binary variable to summarize the responses to the two questions “How easy or difficult would you say it was for you to cover the cost of travel, not including the abortion itself?” and “How easy or difficult would you say it was for you to cover the cost of the abortion procedure?” We combined those who responded that it was “very easy” or “somewhat easy” to cover both the cost of the abortion and travel in one group representing no difficulty in covering costs. We considered those who said the cost of travel, the cost of the abortion, or both costs were “somewhat difficult” or “very difficult” to cover to have some difficulty in covering costs. This measure relies on the participant’s assessment of the difficulty or ease with which they were able to cover the cost of the abortion and travel. We created a second categorical predictor to test the intersection of difficulty paying for the abortion and travel and difficulty covering other basic living costs. To assess whether respondents had sufficient funds to cover basic living costs in the past month, we assessed responses to the question: “During the past month, would you say you had enough money to meet your basic living needs such as food, housing and transportation?” An answer of “all the time” or “most of the time” was categorized as having sufficient funds to cover basic living costs; an answer of “some of the time”, “rarely”, or “never” was categorized as having insufficient funds to cover basic living costs. Combining this question with the cost-difficulty measure, we created a predictor with the following categories: (1) no difficulty paying for the abortion or travel AND sufficient funds to cover basic living costs; (2) some difficulty paying for the abortion or travel AND sufficient funds to cover basic living costs; (3) no difficulty paying for the abortion or travel AND insufficient funds to cover basic living costs; and (4) some difficulty paying for the abortion or travel AND insufficient funds to cover basic living costs. The one participant who fell into the third category was excluded from the analysis. The remaining groups (categories 1, 2, and 4) are referred to as “highest means,” “mixed means,” and “lowest means,” respectively. Responses were missing for n = 31 participants. To assess whether there were any differences in delays based on in-country care seeking, we used a binary measure of whether a respondent reported contacting and/or visiting a provider in their country of residence. We ran a sensitivity analysis using a secondary measure of in-country care seeking. This measure categorized respondents into three categories: no contact and no visit to providers, contacted providers only, and visited providers. Responses were missing for three participants.
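To make the coding of these two predictors concrete, the sketch below reconstructs the described logic in Python/pandas. This is illustrative only: the column names (cost_travel, cost_abortion, basic_needs) are hypothetical, and the authors' actual coding was done in Stata.

```python
import pandas as pd

def code_predictors(df: pd.DataFrame) -> pd.DataFrame:
    """Reconstruct the paper's two financial predictors from hypothetical survey columns."""
    difficult = {"somewhat difficult", "very difficult"}
    insufficient = {"some of the time", "rarely", "never"}

    # Composite binary cost measure: any difficulty covering travel or procedure costs
    df["cost_difficulty"] = (
        df["cost_travel"].isin(difficult) | df["cost_abortion"].isin(difficult)
    )

    # Acute financial insecurity: insufficient funds for basic needs in the past month
    df["insufficient_funds"] = df["basic_needs"].isin(insufficient)

    # Three analytic groups; the lone "no difficulty but insufficient funds"
    # respondent (category 3) was excluded in the paper
    def means_group(row):
        if not row["cost_difficulty"] and not row["insufficient_funds"]:
            return "highest means"
        if row["cost_difficulty"] and not row["insufficient_funds"]:
            return "mixed means"
        if row["cost_difficulty"] and row["insufficient_funds"]:
            return "lowest means"
        return None  # excluded category

    df["means_group"] = df.apply(means_group, axis=1)
    return df
```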
We excluded observations that were missing any of the main predictors from our analyses. Covariates We considered possible confounders of the relationship between the two predictors of interest and the outcome based on the theorized associations between financial difficulty, in-country care seeking, and delays in care. We controlled for age of the respondent (categorical variable 18–24, 25–34, 35+); a categorical measure of parity (no children, one-two children, and 3+ children); country of residence; gestational age of the pregnancy at the time of the survey; reported difficulty of the abortion decision; whether the participant had tried anything on their own to end the pregnancy; history of prior abortion; social support in their abortion decision-making process; employment status (full time employment (> 32 h/week), part time employment, student, or other (unemployed, unable to work, homemaker)); and the time it took to travel to the abortion clinic (≤ 2 h, > 2–4 h, > 4–6 h, > 6–8 h, 8+ h). We also controlled for the difficulty covering the cost of the abortion in the analysis of in-country care-seeking. We also examined secondary measures of cost and financial experiences of abortion and travel asked in our survey, including logistics organized for the abortion appointment and travel, actions taken to cover the cost of the abortion, length of time needed to raise money for the cost of travel and the abortion, and cost-related reasons for delays in abortion. Analysis We used Stata v15 SE to conduct quantitative analyses. We calculated descriptive frequencies and bivariate associations for the outcome and key predictors. We stratified descriptive results about cost and financial experiences by the difficulty of paying for the procedure and the ability to cover basic living expenses. In order to test the associations between the predictors of interest and delays in access to care while controlling for potential confounders, we constructed multivariate discrete-time hazards models using the weeks of delay as the unit of time and a logit link. In this analysis, shorter “survival”, that is, presenting for an abortion sooner after first considering one, is the preferred outcome. Standard errors were clustered by respondent. The clinic at which the patient received services is controlled for as a fixed effect in the model. We tested the sensitivity of the results to the handling of missing data by re-running each model using pairwise deletion instead of casewise deletion of observations with any missing main predictor. We also coded a “missing” category for the main predictors and re-ran the model with the re-coded predictors. Ethical approval This phase of the study received ethical approval from the European Research Council Ethics Committee, the BPAS Research & Ethics Committee, the Tilburg University Ethics Committee, and the University of Barcelona Bioethics Committee.
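As a rough illustration of the modeling strategy described above, the following sketch fits a discrete-time hazards model as a logistic regression on an expanded person-week data set, with clinic fixed effects and respondent-clustered standard errors. Variable names are hypothetical and the covariate list is abbreviated; the authors used Stata v15, whereas this sketch uses Python's statsmodels.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_discrete_time_hazard(df: pd.DataFrame):
    """Expand respondents into person-weeks and fit a logit discrete-time hazard.
    Assumes excluded categories and missing predictors have already been dropped."""
    rows = []
    for _, r in df.iterrows():
        # One row per week at risk, from week 0 up to the week of presentation;
        # the event indicator is 1 only in the week the respondent presented.
        for week in range(int(r["delay_weeks"]) + 1):
            rows.append({
                "id": r["id"],
                "week": week,
                "event": int(week == int(r["delay_weeks"])),
                "means_group": r["means_group"],
                "clinic": r["clinic"],
            })
    person_weeks = pd.DataFrame(rows)

    # Logit link; clinic fixed effects; standard errors clustered by respondent.
    # exp(coefficient) gives the hazard odds ratios reported in the paper.
    model = smf.logit(
        "event ~ C(means_group, Treatment('highest means')) + week + C(clinic)",
        data=person_weeks,
    ).fit(cov_type="cluster", cov_kwds={"groups": person_weeks["id"]})
    return model
```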
Results The main analysis included a total of 164 participants. The majority of respondents (86%) sought abortion services in the Netherlands while 14% were recruited in England (Table 2 ). Over half of the sample resided in Germany, and a quarter lived in France. The rest of the sample lived in Italy (8.5%), Belgium (6.7%), or another country (7.3%) within Europe where abortion is legal on broad grounds. Similar proportions of participants were between 18 and 24 and between 25 and 34 years old at the time of the survey (42.1% and 44.5%, respectively). The majority of participants did not have children (62.8%), had completed some university or more (61.6%), and had enough money all the time or most of the time to meet their basic living needs in the past month (76.8%). At the time of the survey, the majority of respondents reported gestational ages between 13 and 20 weeks, with a mean gestational age of 17.8 weeks. The majority of respondents had to travel over 2 h, with approximately 30% reporting they traveled for over 6 h. The main reasons for travel were that it was too late to have an abortion in their country of residence (81%) and that abortion was not legal in their country of residence in their situation (7%) (data not shown). About two-thirds of the sample had some difficulty covering the cost of the abortion procedure and/or the travel (Table 2 ). Forty-eight percent of the sample contacted or visited an abortion provider in their country of residence before seeking services abroad (Table 2 , column 3). Of these respondents, 21% contacted a provider but did not visit any provider in person while 79% visited a provider in person in their country of residence. On average, approximately 4.2 weeks elapsed between when participants considered abortion and when they were surveyed at the clinic where they presented for abortion care (Table 2 , column 4). In the sample, the weeks elapsed ranged from a minimum of zero weeks to a maximum of 21 weeks. Financial barriers Comparing time to presentation at the clinic by difficulty paying for the abortion or travel alone, approximately 50% of both those who reported no difficulty covering costs and those who reported some difficulty covering costs had presented at the clinic where they completed the survey by 3 weeks after considering the abortion (Fig. 1 a). Considering the extent to which someone was able to cover their basic living expenses and had difficulty covering the cost of the abortion and/or travel (Fig. 1 b), those with the lowest means reported longer delays between when they considered abortion and when they presented at the clinic. Among those with the highest means and those with mixed means, 50% of participants had presented at the clinic abroad by two to three weeks after considering abortion and 75% had presented by 4 weeks. Among those with the lowest means, 50% of participants had presented at 4 weeks and 75% had presented by 10 weeks. In both the unadjusted and adjusted discrete-time hazards models, any difficulty paying for the abortion/travel was not significantly related to differential time between considering an abortion and presenting at the clinic (Table 3 ). Incorporating the intersection of difficulty paying for the abortion/travel and ability to cover basic expenses in the past month, those with the lowest means had statistically significantly longer times between considering abortion and presenting at the clinic in both the unadjusted and adjusted models.
Specifically, those with the lowest means had 61% lower odds (adjusted hazard odds ratio: 0.39, 95% CI 0.21–0.74) of having an abortion in the next week compared to those with the highest means, given they had not yet presented at the clinic, all else equal. The alternative model specifications dealing with missing data were consistent with the findings reported here (data not shown). Examining the financial implications of travel, those with the lowest means were more likely to report having to delay paying other expenses to fund their abortion (38.7% of respondents) compared to those with mixed means (24.0%) and those with the highest means (2.0%) (Table 4 ). Thirteen percent of those with the lowest means had to sell something of value, compared to less than 3% of those in the other groups. Those with the highest means primarily reported drawing from their savings (49.0%) or relying on a friend, relative, or partner. Almost 30% of participants with the highest means did not report any measures taken to cover the costs. In fact, almost half of the group with the highest means said they did not need to raise funds, and those who did have to raise money primarily raised funds within a week. Among those with mixed means, half reported it took up to a week for them to raise the money, and a quarter reported it took them 1–4 weeks. In the group with the lowest means, over 40% took a week or more to raise the funds, with 19.4% reporting it took them over 4 weeks to raise the money for their travel or their abortion procedure. Finally, among those who would have preferred earlier access, financial reasons featured more prominently in explaining why those with the lowest means could not get an abortion earlier. Seeking care in country of residence Among those who did not contact or visit a provider prior to presenting at the clinic in England or the Netherlands, 50% of respondents had presented for care by 2 weeks after having considered abortion, compared to 3 weeks among those who contacted or visited an abortion provider in their country of residence (Fig. 1 d). In both the unadjusted and adjusted models, having contacted or visited an abortion provider in the country of residence was associated with a longer time to presenting at the clinic abroad for an abortion (Table 3 ). Among those who had not presented at the clinic abroad at any given week, those who had contacted or visited providers in their country of residence had 44% lower odds (adjusted hazard odds ratio: 0.56, 95% CI 0.36–0.89) of presenting at the clinic abroad in the next week compared to those who had not contacted or visited a provider in their country of residence. Among those who only contacted a provider (versus visiting a provider), the unadjusted hazard odds ratio was not significantly different from the group that did not seek care in their country of residence; however, the effect was significant in the adjusted model (adjusted hazard odds ratio: 0.41, 95% CI 0.18–0.95). The results for those who visited a provider were similar to the main analysis (adjusted hazard odds ratio: 0.56, 95% CI 0.34-0.92). The sensitivity analyses to address missing data were consistent with the findings reported here (data not shown).
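The percentage interpretations above follow directly from the reported hazard odds ratios, for example:

$$(1 - 0.39) \times 100\% = 61\% \text{ lower odds}, \qquad (1 - 0.56) \times 100\% = 44\% \text{ lower odds}.$$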
Discussion This analysis considered two specific reasons that people seeking abortions outside of their country of residence may be delayed in accessing abortion: financial barriers and abortion care-seeking within a person’s country of residence. Among people who received an abortion outside their country of residence, those who had difficulty paying for an abortion and/or the travel and had insufficient means to cover basic living costs were more delayed in presenting at a clinic abroad for care than those who consistently had enough money to cover basic living costs, regardless of whether the latter had difficulty paying for the cost of the abortion or travel. Additionally, those who sought in-person care at an abortion provider in their country of residence were significantly more delayed in presenting at a clinic abroad for care compared to those who only contacted a provider or who did not seek abortion care at all in their country of residence. Among pregnant people travelling abroad for abortion services, the cost of travel and the procedure may be associated with delays in care for those who face less financial security. This is in line with previous research showing that offering access to abortion without ensuring associated costs are covered restricts who is able to access services and the timeliness with which they are able to do so [ 33 – 35 ]. Governments in France, Belgium, and Italy cover the costs of abortion procedures performed in their own countries by law [ 36 , 37 ]. In Germany, financial coverage for in-country abortion services is based on income thresholds [ 12 ]. Despite these varying commitments to cover the financial costs of abortion care sought within their own borders, countries generally do not pay for care sought abroad. In the United States, a large body of literature documents the impact that lack of insurance coverage has on abortion access and the wellbeing of individuals and families. Lack of insurance coverage compels low-income persons to raise money for the procedure in ways that risk their health and wellbeing by forgoing essentials such as food and electricity, increasing financial instability, and delaying and restricting access to services [ 38 – 40 ]. Our results add to this literature: participants in our study relied on postponing or forgoing payments for other expenses, selling valuable possessions, and leaning on support networks to help finance their travel and procedure. Given that many people who travel for abortion services from European countries with broad legal grounds for abortion in the first trimester but restricted access thereafter are later in pregnancy, there is an urgent need to remove gestational age limits to center health equity and to enact policies that guarantee access to timely abortion care. These results also speak to the need for organizations (e.g., abortion funds, practical support organizations, or clinic-based funds) to provide financial and logistical support to people seeking abortion care abroad in Europe [ 41 , 42 ]. Those seeking later abortion services are more often of lower socioeconomic status; the compounding costs of travel and the procedure itself may further restrict who is able to access services [ 43 ]. Our findings also point to how interactions with the medical system in a given country of residence may delay people in accessing abortion. In this study, those who had visited a provider in their country of residence were more likely to be delayed in ultimately accessing care abroad.
In-country care seeking may have contributed to delays through a number of avenues, including difficulty accessing a medical professional with information on how to navigate abortion care access, particularly past the first trimester, and requirements that providers obtain extensive and burdensome documentation in order to provide care. These interactions may also expose individuals to an objecting provider, mandated counseling, or burdensome waiting periods that push abortion seekers past the legal limit for abortion in their country of residence [ 7 , 16 , 44 ]. During the study period in Germany, for example, the law prohibited abortion providers from disseminating or publicizing information on abortion services [ 45 ]. While the law has recently been amended to allow providers to list the availability of services online, detailed information may still be limited. Additionally, past the first trimester, many countries require approval from at least one, and in many cases multiple, clinicians [ 28 ]. Somewhat paradoxically, participants in our study also reported medical professionals to be a key source of information about abortion services [ 46 ]. As such, the medical system may act both to delay access to care for some and as a valuable source of information for others [ 47 , 48 ]. It is important to note that the delays observed in this study among those who sought care in person at an abortion provider in their home country may not have been due to the medical system or providers, but may have been the result of having to organize logistics for care seeking more than once, or of differential access to information on abortion care and laws in their country of residence. However, this would still suggest that gestational age limits and other barriers intersect with interactions with the medical system to create further delays to care. Assessing and improving the resource and information landscape for people seeking abortion may help people get care more quickly. It is important to note that additional factors may delay people in accessing abortion care abroad, including support available from friends, family, or partners; difficulty deciding about the abortion; and the gestational age of the pregnancy. There are a number of limitations to note in this analysis. First, the sample size for the analysis is small, a product of the declining number of people travelling for abortion from relatively liberal settings to England and the Netherlands. The decline may be due to changing dynamics in the increasing availability of medication abortion self-management [ 49 , 50 ], or may be related to greater availability of clinic-based abortion in countries of origin, decreased overall demand for abortion, or changing political arrangements in Europe, most notably the vote for Brexit, which had not yet been implemented at the time of the study but factored into ongoing public dialogue and perceptions [ 36 ]. The size of the sample likely limits our power to detect small differences in delays to care. Regardless, small studies using time-to-event modeling are powered to detect larger effects [ 51 , 52 ], and this study is unique in the population that it includes. Little research focuses on people from European countries where abortion is available on broad grounds in the first trimester travelling abroad for abortion care.
Second, only those who were ultimately able to access abortion services abroad were captured in the sample, excluding those who may have wanted an abortion but were unable to travel and those who received an abortion in their country of residence. To measure financial insecurity, we relied on self-assessed, relative, acute measures instead of income or asset-based measures. While this helps account for differences across contexts, personal spending patterns and individual assessments of sufficient funds are subjective [ 53 ]. Future research could extend this work by measuring socioeconomic status using income or wealth. Our work is strengthened, however, by the secondary measures of cost and financial experiences that we stratified by the perceived measures of socioeconomic status. Finally, while we tried to capture potential ways in which missing data may have influenced our results, we cannot rule out the possibility that the exclusion of participants with missing data could have influenced the results.
Conclusions This paper explores financial and medical system barriers faced by residents of countries in Europe where abortion is available on broad grounds in the first trimester who seek abortion care outside of their country of residence. The findings point to inequities in access to timely abortion care based on socioeconomic status and to delays related to seeking care in the country of residence. These findings suggest that policies which govern when (i.e., gestational age limits) and how to have an abortion intersect with health care systems and social stratification, potentially resulting in differential timing of access to abortion services.
Introduction This study characterized the extent to which (1) financial barriers and (2) abortion care-seeking within a person’s country of residence were associated with delays in abortion access among those travelling to England and the Netherlands for abortion care from European countries where abortion is legal on broad grounds in the first trimester but where access past the first trimester is limited to specific circumstances. Methodology We drew on cross-sectional survey data collected at five abortion clinics in England and the Netherlands from 2017 to 2019 (n = 164). We assessed the relationships of difficulty paying for the abortion/travel, acute financial insecurity, and in-country care seeking with delays to abortion using multivariable discrete-time hazards models. Results Participants who reported facing both difficulty paying for the abortion procedure and/or travel and difficulty covering basic living costs in the last month reported longer delays in accessing care than those who had no financial difficulty (adjusted hazard odds ratio: 0.39, 95% CI 0.21–0.74). This group delayed paying other expenses (39%) or sold something of value (13%) to fund their abortion, and ~ 60% of those with financial difficulty reported it took them over a week to raise the funds needed for their abortion. Having contacted or visited an abortion provider in the country of residence was associated with delays in presenting abroad for an abortion. Discussion These findings point to inequities in access to timely abortion care based on socioeconomic status. Legal time limits on abortion may intersect with individuals’ interactions with the health care system to delay care. Plain Language Summary This paper explores delays in accessing abortion care associated with financial and medical system barriers. We focus on residents of countries in Europe where abortion is available on broad grounds in the first trimester who seek abortion care outside of their country of residence. This study demonstrates that, among people travelling abroad for care, difficulty covering abortion costs combined with financial insecurity, as well as in-country care seeking, is associated with delays in accessing abortion abroad. Policy barriers, medical system barriers, and financial barriers may interact to delay access to care for people in European countries with broad grounds for abortion access in the first trimester but restrictions thereafter, especially for people later in pregnancy. Keywords
Acknowledgements We would like to thank our study participants and the organisations and clinics that partnered with us in the design and development of the study, including the British Pregnancy Advisory Service (BPAS) in England, CASA Kliniek, Beahuis & Bloemenhove Kliniek and Rutgers in the Netherlands. The authors thank Lieta Vivaldi for her support with data collection. Finally, this study would not have been possible without the funds from the European Research Council and the support of the host institution, the University of Barcelona. Author contributions AW: Methodology; software; conceptualization; formal analysis; writing original draft. SZ: Conceptualization, investigation, funding acquisition, writing—review and editing, project administration. GZ: Investigation, writing—review and editing. JM: Conceptualization, writing—review and editing. CGa: Data curation, writing—review and editing. CGe: Conceptualization, writing—review and editing. Funding This study is funded by the European Research Council (ERC) via a Starting Grant awarded to Dr. De Zordo [BAR2LEGAB, 680004] and is hosted by the University of Barcelona. It is also supported by the Spanish Ministerio de Economía, Industria y Competitividad [grant RYC‐2015‐19206]. Dr. De Zordo’s work has also been supported by the Spanish Ministerio de Ciencia e Innovación [grant PID2020-112692RB-C22]. The funders had no role in the design or conduct of the study; collection, management, analysis or interpretation of the data; preparation, review or approval of the manuscript; or the decision to submit the manuscript for publication. Availability of data and materials Due to our commitment to protect the confidentiality and anonymity of those who received abortion services at the participating clinics, we cannot make the data used for this study publicly available for download. The data that support the findings of this study are available on request from the corresponding author. Declarations Ethics approval and consent to participate The study involved human subjects and received ethical approval from the European Research Council Ethics Committee, the BPAS Research & Ethics Committee, the Tilburg University Ethics Committee, and the University of Barcelona Bioethics Committee. Informed consent was obtained from all study participants, and consent was signaled through selection of buttons on the survey. Consent for publication N/A. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-16 23:45:34
Reprod Health. 2024 Jan 15; 21:7
oa_package/3a/a9/PMC10789034.tar.gz
PMC10789035
0
Introduction Women go through emotional and psychological changes during the perinatal period, making them more susceptible to psychiatric disorders such as postpartum depression (PPD) [ 1 ]. Postpartum depression may impair parenting skills and judgment [ 2 ], decrease enjoyment of the maternal role, and create poor mother-infant interactions. Mothers with PPD show an early cessation of breastfeeding and less care of their infants, which leads to decreased immunity and puts babies at risk for delayed growth and development [ 3 ]. Moreover, maternal mental illness can also affect a child’s emotional and cognitive development [ 4 ]. Undetected PPD imposes a heavy cost on society since the mother is less able to fulfill her responsibility as a caregiver [ 5 ]. The Diagnostic and Statistical Manual of Mental Disorders defines postpartum depression as the “Occurrence of a major depressive episode (MDE) within four weeks after birth, which may involve irritability, excessive crying, or panic” [ 6 ]. However, these episodes might begin or persist during the first year after delivery [ 7 ]. In low- and middle-income countries, PPD affects up to 48.5% of women [ 4 ], but just 6.5–12.9% of women in high-income nations [ 7 ]. Postpartum depression has been linked to several risk factors, including stressful life events, a history of depression, not breastfeeding, a first delivery, and a poor body image. Other risk factors include a poor relationship with the partner and a lower socioeconomic status [ 8 ]. In some contexts, however, higher education, permanent employment, a kind, trustworthy partner, and belonging to the majority ethnic group were protective factors against PPD [ 1 ]. In the West Bank, a cross-sectional study conducted in Nablus city in 2013 showed a prevalence of 17%, with depression during pregnancy and a history of mental illness being the most strongly associated risk factors [ 9 ]. Another study in Bethlehem in 2016 showed a prevalence of 27.7%, with multiparity and unplanned pregnancy having the strongest association with PPD [ 8 ]. A recent systematic analysis of the frequency of postpartum depression in Arab nations revealed that 1 in 5 women experience the condition, with some of the most common risk factors being low socioeconomic status, unwanted pregnancy, low social and husband support, stressful life events during pregnancy, and a personal or family history of depression [ 10 ]. A variety of instruments have been used to screen for PPD, including the Beck Depression Inventory (BDI) and the Mini International Neuropsychiatric Interview (MINI) [ 11 ]. However, the Edinburgh Postnatal Depression Scale (EPDS) is best suited for postnatal and primary care settings [ 12 ], and several studies have assessed its validity at varying cut-off scores [ 13 – 15 ]. The United States Preventive Services Task Force (USPSTF) recommends screening pregnant and post-partum women with the EPDS but does not specify a cut-off value [ 16 ]. Despite the impact of post-partum depression, few studies have been conducted in Palestine. Objectives This study aims to investigate the prevalence of postpartum depression among Palestinian women attending vaccination clinics at primary health care centers in the northern West Bank in 2022 and to pinpoint risk factors for the condition.
Methods Study design and setting This study is a descriptive cross-sectional study that was conducted in the northern West Bank between 1 May 2022 and 30 June 2022. Women visiting vaccination clinics at the seven largest primary health care centers of the Ministry of Health in four cities (Nablus, Tulkarem, Jenin, and Qalqelia) with their infants 7-12 weeks after delivery were asked to participate. These clinics are the only ones to provide vaccination services in the four cities (2 in Nablus, 2 in Tulkarem, 2 in Jenin, 1 in Qalqelia), and post-partum women are expected to be seen there with their infants. Mothers who agreed to participate gave consent and were given a self-administered questionnaire including the Arabic version of the EPDS. This instrument is commonly employed as a self-administered questionnaire for screening postpartum depression [ 2 ]. Additionally, studies have demonstrated comparable efficacy between self-administered questionnaires and face-to-face interviews for diagnosing this particular condition [ 3 ]. To ensure data quality, one of the researchers was present at the site of data collection to ensure that the questionnaire was comprehensive. To ascertain the integrity of the data, the researcher examined the questionnaires completed by the participants for both completeness and consistency. Incomplete questionnaires were discarded. Inclusion criteria Women who were between 18 and 44 years old, 7-12 weeks after delivery, and able to read the Arabic version of the questionnaire were recruited to participate. Exclusion criteria Women who had previously experienced depression or used antidepressant medication were excluded. Sample size and sampling method The sample size was calculated with the Raosoft sample size calculator. Based on the annual report of the Palestinian Central Bureau of Statistics, there were 28,273 live births in the northern West Bank in 2020 [ 17 ]. The minimum sample size was 380 at a 95% confidence level, 5% margin of error and 50% response distribution. In order to have a representative sample of the target population to achieve the aim of the study, a proportionate sample was calculated as shown in Table 1 . Then, a convenience sample was collected from women presenting to the vaccination clinics at the largest primary care centers in the West Bank, Palestine in May and June 2022. This study was done in primary care settings whose facilities have psychological departments intended for persons with mental health concerns who require extra help. Before starting our research, we communicated with the psychiatric department and agreed on the management of patients with various depression levels; all positive screening results were forwarded for further testing. Instrument Data were collected using a three-part self-administered questionnaire: (1) a pre-defined checklist used to collect data about socio-demographic factors, pregnancy and birth-related factors, baby-related factors, and psychological history [ 9 ]; (2) the Maternal Social Support Scale (MSSS), a six-question, 5-point Likert scale used to assess the social support mothers were given after giving birth. The highest possible score is 30, with a score of 6-18 on the MSSS considered low social support, 19-24 medium support, and > 24 adequate support. The reliability of the scale (Cronbach’s alpha) was 0.71-0.90 [ 18 ]; (3) the Arabic version of the Edinburgh Postnatal Depression Scale (EPDS), a self-reporting screening tool composed of 10 items reflecting the mother’s emotional experience over the past 7 days.
Responses were scored 0–3 indicating the severity of manifestations, with a maximum score of 30. There is debate on the single cut-off with the highest sensitivity and specificity. No studies have been conducted in Palestine to examine the validity and reliability of the EPDS. However, a recent systematic review in the Arab world showed that most studies used 13 or greater as a cut-off to maximize consistency with other studies. The validity and reliability of the Arabic version of the Edinburgh Postnatal Depression Scale have been reviewed, with an internal reliability (Cronbach’s alpha) of 0.84 [ 12 ]. Data analysis Data were entered into Excel and analyzed using SPSS version 20. We used univariate descriptive analysis for all variables. Bivariate analysis was used to study the relationship between dependent and independent variables and test the null hypothesis. A p value < 0.05 was considered significant. All significant variables were exported to multivariate analysis using logistic regression to attenuate the effect of confounders. Ethical consideration All methods involving human participants in this study were conducted per ethical research standards. The study was conducted in conformity with the ethical norms of An-Najah National University (ANNU). The Ministry of Health granted authorization for the study to be conducted in PHC settings, and participants were approached and invited voluntarily to participate. Participants were assured of their confidentiality and anonymity. The ethical approval code is Med. March 8/2022.
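Two quantitative details of the methods above can be made concrete. First, the reported minimum sample size of 380 can be reproduced with the standard proportion formula and a finite-population correction (one conventional derivation; Raosoft's internal formula is equivalent):

$$n_0 = \frac{z^2\,p(1-p)}{e^2} = \frac{1.96^2 \times 0.5 \times 0.5}{0.05^2} \approx 384.2, \qquad n = \frac{n_0}{1 + \frac{n_0 - 1}{N}} = \frac{384.2}{1 + \frac{383.2}{28273}} \approx 380.$$

Second, below is a minimal sketch of the EPDS scoring and cut-off logic used for screening. The actual analysis was run in SPSS; any item-level handling such as reverse-keyed response options is assumed to happen before this point.

```python
from typing import Sequence

EPDS_CUTOFF = 13  # cut-off used in this study, consistent with most Arab-region studies

def epds_score(item_scores: Sequence[int]) -> int:
    """Sum the 10 EPDS items, each recorded 0-3 by symptom severity (max 30)."""
    if len(item_scores) != 10 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("EPDS requires exactly 10 items scored 0-3")
    return sum(item_scores)

def at_risk_of_ppd(item_scores: Sequence[int]) -> bool:
    """Flag a respondent as at risk of postpartum depression (score >= 13)."""
    return epds_score(item_scores) >= EPDS_CUTOFF

# Example: five items scored 2 and five scored 1 give a total of 15 -> at risk
assert at_risk_of_ppd([2, 2, 2, 2, 2, 1, 1, 1, 1, 1])
```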
Results Descriptive results Characteristics of participants Of the 420 questionnaires that were distributed, 390 patients responded. Ten were excluded due to insufficient data. A total of 380 women participated, with a median age of 27 years (range 26), the majority of them falling between the ages of 26 and 35 ( n = 208, 54.7%). The majority (84%, n = 319) were city dwellers, highly educated (studied beyond secondary school) (70.3%, n = 267), and married to highly educated men (47.6%, n = 181). Additionally, most of them were still married at the time of the research (99.5%, n = 378), had government insurance (46.3%, n = 176), and were unemployed (79.5%, n = 302). Most (55.5%, n = 211) reported a middle income, ranging from $400 to less than $800. See Table 1 . Birth and child related characteristics of the participants As Tables 2 and 3 show, most of the participants were multiparas (68.7%, n = 261), without any previously diagnosed chronic illness (87.1%, n = 331). Most pregnancies occurred without complications (63%, n = 239), and most deliveries took place in private hospitals (61.3%, n = 233) by Caesarean section (51.1%, n = 194). The majority of vaginal deliveries were not assisted by vacuum (86%, n = 160). However, of those who had a vaginal delivery, more than half underwent episiotomy repair (59.7%, n = 111). Of the 380 infants, 200 were boys (52.6%), 344 were full-term (90.5%), 375 were healthy (98.7%), and most had weights within the normal range (85.5%, n = 325). Most mothers reported only breast feeding (40.8%, n = 155) or breast feeding mixed with formula feeding (39.7%, n = 151). Psycho-social factors Most participants reported no personal or family history of mental illness (98.4% and 77.4%, respectively). Nearly half (47.9%) reported stressful life events during pregnancy; factors such as financial stress and having additional children without family support are considered risk factors for postpartum depression. The MSSS median was 20 (range 22). Most women rated their families’ support in the medium range (53.7%, n = 204). See Tables 4 and 5 . Postpartum depression prevalence At the time of our study, 33.9% ( n = 129) showed risk of post-partum depression on the EPDS. The highest score was 28 and the lowest score was zero, with all answers negative for thoughts of self-harm (Question 10 of the EPDS). The median score was 11.5 with a range of 28. Factors associated with PPD Bivariate analysis (Chi-squared test and Fisher’s exact test as appropriate) was applied to all variables in the descriptive part of the results. Factors significantly associated with postpartum depression were a lower level of education of the husband, primiparity, vacuum use, stressful events during pregnancy, and low social support for the mother. A multivariable logistic regression showed that all factors remained significantly associated with PPD except primiparity (see Table 6 ). Women experiencing stressful events during pregnancy were shown to be 2.1 times more likely to develop PPD ( p value 0.003, OR: 2.1, 95% CI [1.27-3.4]), as were those with vacuum extraction of the baby ( p value 0.002, OR: 4.0, 95% CI [1.64-9.91]). Additionally, marriage to a man with a low level of education (6 years and less) increased the odds of PPD fivefold compared with marriage to a man with a high level of education (more than 12 years) ( p value less than 0.001, OR: 5.2, 95% CI [2.7-10]).
Moreover, mothers with low social support showed a higher likelihood of experiencing PPD in comparison to mothers with medium or adequate support ( p value less than 0.001, OR: 2.5, 95% CI [1.7-4.2]). The medium and adequate social support categories were combined into one group due to the small sample size in the adequate group.
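For reference, the odds ratios and confidence intervals reported above follow from the fitted logistic regression coefficients in the usual way (assuming conventional Wald intervals):

$$\mathrm{OR} = e^{\beta}, \qquad 95\%\ \mathrm{CI} = \left[\, e^{\beta - 1.96\,\mathrm{SE}(\beta)},\ e^{\beta + 1.96\,\mathrm{SE}(\beta)} \,\right].$$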
Discussion Our study showed that the prevalence of postpartum depression (women who scored 13 or more on the EPDS) was much higher than in studies that used the EPDS in Nablus and Bethlehem, which reported an overall PPD prevalence of 17% (8.9% scored 13 or more, 8.1% scored 10-12 on the EPDS) and 27.7% (score of 11 or more on the EPDS), respectively [ 8 , 9 ]. The rise is not surprising and may be explained by the continued pressures Palestinians experience due to the restrictions and uncertainties imposed by being an occupied nation. Since the elected Palestinian government was politically and economically boycotted, living circumstances have become worse, leading to increased levels of unemployment, poverty, and internal conflict in Palestine as well as more limitations on access to healthcare [ 19 ]. These stressors result in mental instability and increase susceptibility to mental illnesses including PPD [ 1 ]. The increased prevalence of PPD in Palestine is similar to nearby Arab countries [ 20 ]. Palestine’s prevalence of 33.9% is within the range published for the nearby Arab region; a recent systematic review conducted in the Arab region in 2020 reported a PPD prevalence range of 8-40% in Arab countries [ 10 ]. Another meta-analysis across Middle East countries reported a 27% prevalence [ 21 ]. In Saudi Arabia, the prevalence of PPD increased from 25.7% in 2017 to 38.5% in 2020 [ 22 ]. Another study in Damascus in 2017 reported a prevalence of 28.2% (EPDS score 13 or more) [ 23 ]. A higher prevalence was seen in Jordan; an article published in 2021 reported a 52.9% prevalence among Jordanian women (EPDS score 12 or more) [ 24 ]. The global rise in PPD prevalence could be explained by the continuing impact of the COVID-19 pandemic on health sectors, both physically and mentally. Recent studies on the effect of the COVID-19 pandemic on PPD revealed increased levels of post-partum depression caused by increased fear [ 25 , 26 ]. While reviewing the literature, we found variations in risk factors for PPD. Our study adds lack of social support, the husband’s low level of education, the occurrence of stressful events while pregnant, and the use of vacuum as significant risk factors related to PPD. Our first two risk factors were also found in the Middle East systematic review and meta-analysis [ 21 ]; however, stressful events in pregnancy and vacuum use are new findings. Vacuum extraction is reported by mothers as a negative experience, which indirectly increases the possibility of experiencing PPD due to trauma to the birth canal, post-delivery complications, increased pain, and delayed return to normal activities [ 27 ]. All of this explains the significant association between vacuum use and PPD reported in our study. Another negative experience that showed a significant association with PPD was stressful events during pregnancy; these include arguing with a partner more than usual, separation, divorce, being in a physical fight, moving to a new address, having many unpaid bills, job loss, and the illness or death of a close family member [ 28 ]. Several studies have shown that stress increases amygdala activity, which leads to mood changes and an increased probability of depression [ 29 ]. This study showed a significant association between low maternal social support and PPD; a new baby comes with many more duties and requirements, but assistance and support help mothers to cope faster [ 30 ].
A significant association between social support and PPD was also found in studies in nearby Arab countries with the same cultural and religious characteristics as Palestine [ 31 ]. Our significant negative association between higher levels of the husband’s education and PPD was also found in other studies across the world, for example in India and eastern Turkey [ 32 , 33 ]. This can be attributed to the increased knowledge of women’s needs and the ability to provide good strategies of support. Here in Palestine, in order to provide acceptable living conditions, husbands with a low educational level spend long working hours away from their wives, limiting their ability to provide the needed support. Limitations As a cross-sectional study, our findings do not show causal relationships between variables. Although it is crucial and significant to have knowledge about the prevalence of postpartum depression in refugee and rural areas, there has been a lack of data collection in those areas. This can be attributed to budgetary constraints and the COVID-19 restrictions imposed on UNRWA clinics during the period of data collection. This resulted in a selection bias in participant selection, as the prevalence of PPD may be higher in refugee areas in particular and is still unknown in rural areas. The study was conducted during the pandemic, a very difficult time across the globe, and PPD may have been higher due to the stresses of COVID. However, this study examines PPD prevalence and associated risk factors across a larger region in Palestine than prior studies.
Conclusion In conclusion, given the high prevalence of PPD in the northern West Bank, Palestine, we recommend that antenatal clinics expand their services from maternal physical care only to include mental health as well. Mothers should be screened for a range of stressors during pregnancy, and efforts should be made to address them. This might include referrals to treat PPD with medication or to offer therapeutic counseling support. Moreover, easy access to post-partum clinics should be offered, providing scheduled meetings that include close family members to assure adequate social support, especially for women for whom vacuum was used during delivery or who experienced stressful events during pregnancy. Recommendations Create campaigns to educate the public about postpartum depression based on the findings of this study. This should be emphasized at the national and regional levels, where mental health care services for women are limited. Policymakers and external funding agencies require this information to design future agendas.
Background Postpartum depression (PPD) has a huge negative impact on the health of the mother and the family, both physically and mentally. Few postpartum depression studies have been done in Palestine. This study aimed to examine the prevalence and the most probable risk factors of PPD among Palestinian women in the northern West Bank. Methods This is a cross-sectional study of 380 mothers, aged 18 to 44 years, visiting vaccination clinics with their infants 7-12 weeks after delivery between 1 May 2022 and 30 June 2022. Postpartum women seeking care at the seven largest primary health care centers of the Ministry of Health in four cities in the northern West Bank in Palestine were asked to complete a self-administered questionnaire that included the Edinburgh Postnatal Depression Scale (EPDS) and demographic and birth details. A score of 13 or higher was used to indicate PPD risk. Descriptive and analytical analyses were performed using SPSS version 20. The level of significance was set at 5%. Results The median age of the participants was 27 years with a range of 26 years. A total of 129 women had an EPDS score of 13 or more, giving a prevalence rate of post-partum depression of 33.9%. The predictors of postpartum depression were stressful life events during pregnancy ( p value 0.003, OR: 2.1, 95% CI [1.27-3.4]), vacuum use during delivery ( p value 0.002, OR: 4, 95% CI [1.64-9.91]), low social support ( p value less than 0.001, OR: 2.5, 95% CI [1.7-4.2]) and the husband’s low level of education ( p value less than 0.001, OR: 5.2, 95% CI [2.7-10]). Conclusion The study showed a high prevalence of PPD among Palestinian mothers in the northern West Bank. Our study found that PPD risk factors include lack of social support, the husband’s low education, vacuum use during delivery, and stressful events during pregnancy. This emphasizes the importance of PPD screening and early intervention, especially among vulnerable women. Keywords
Abbreviations PPD – Post-Partum Depression; EPDS – Edinburgh Postnatal Depression Scale; WHO – World Health Organization; MDE – Major Depressive Episode; SD – Standard Deviation; BDI – Beck Depression Inventory; MINI – Mini International Neuropsychiatric Interview; USPSTF – United States Preventive Services Task Force; OR – Odds Ratio; CI – Confidence Interval. We thank the heads of the MOH centers of primary health care in Palestine for providing access to patients. We appreciate the support from the family medicine department at An-Najah National University. Special thanks to the mothers who agreed to participate with the hope of improving the lives of mothers in Palestine. Consent for participation All subjects involved in the research were invited to participate voluntarily after the study’s purpose as well as the risks and benefits of participation were explained. Informed consent was obtained from all individual participants included in the study. Authors’ contributions Dina Wildali: developed the questionnaire, wrote the manuscript. Saja Nazzal: developed the questionnaire, wrote the manuscript. Suha Hamshari: reviewed and edited the questionnaire, wrote the manuscript. Souad Belkebir: reviewed the methodology. All authors reviewed the manuscript. Funding No funding was received. Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate All methods involving human participants in this study were conducted per ethical research standards. The study was conducted in conformity with the ethical norms of An-Najah National University (ANNU). The Ministry of Health granted authorization for the study to be conducted in PHC settings, and participants were approached and invited voluntarily to participate. Participants were assured of their confidentiality and anonymity. This study was performed in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. It was approved by the Institutional Review Board (IRB) of An-Najah National University (No Med March 2022/8). Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Womens Health. 2024 Jan 15; 24:43
oa_package/8f/bc/PMC10789035.tar.gz
PMC10789036
0
Background Potassium (K+), as one of the most essential macronutrients, plays important roles in plants and contributes greatly to enzyme activation, protein synthesis, photosynthesis, osmotic pressure, and cell extension. K+ deficiency in the field seriously affects agriculture through a series of negative impacts, such as growth inhibition, impaired nitrogen uptake, increased pathogen susceptibility, osmotic imbalance, and finally crop failure [ 1 ]. Sweetpotato ( Ipomoea batatas [L.] Lam), a typical ‘‘K+ favoring’’ food crop, plays a critical role in both food security and bio-industries. However, the K+ deficiency of soil in southern China seriously limits sweetpotato productivity and quality in this region [ 2 ]. The dry matter yield and biomass productivity are closely related to the soil potassium supply in sweetpotato. Its root differentiation process requires a large supply of potassium fertilizer and determines the final root yield, which has a decisive effect on market supply and economic benefits [ 3 ]. Hence, understanding how sweetpotato responds to low-K+ stress is valuable because the complex molecular mechanism and regulatory network have not been fully elucidated. Plants resist low-K+ stress at the physiological level by regulating the cellular and tissue homeostasis of K+. K+ transporters and K+ channel proteins function in potassium uptake. There are five K+ transporter gene families in plants, including the HAK/KUP/KT family, Trk/HKT family, KEA family, CHX family, and Shaker K+ channel family [ 4 , 5 ]. The Shaker family in Arabidopsis contains 9 members, which have been well studied. The inward-rectifying channel AKT1, the first reported gene, is strongly induced by salt and low-K+ stress [ 6 ]. It has been reported that AKT1 and the high-affinity K+ transporter HAK5 mediate more than 95% of K+ absorption [ 7 ]. Our previous study identified 22 HAK/KUP/KT genes and nine Shaker K+ channel genes in sweetpotato and found that IbAKT1 -overexpressing transgenic roots could absorb more K+ under K+-deficiency stress [ 8 , 9 ]. AtKC1 does not possess K+ channel activity, but it can balance K+ uptake/leakage to modulate AKT1-mediated low-K+ responses [ 10 ]. The K+ outward-rectifying channel SKOR is involved in mediating long-distance K+ transport from roots to shoots [ 11 ]. AKT2 mediates dual-directional K+ transport with weak voltage dependency and enables long-distance transport of K+ in the phloem [ 12 ]. The inward K+ channel SPIK is expressed in pollen tubes and functions in pollen tube viability [ 13 ]. The inward potassium channels KAT1 and KAT2 and the outward potassium channel GORK are mainly expressed in guard cells and regulate osmotic potential and stomatal movement [ 14 ]. In addition, the activity of potassium channels in plants is regulated by various proteins, such as protein kinases and G proteins [ 15 , 16 ]. However, additional in-depth studies on the low-K+ tolerance mechanism have not been reported. Transcriptome technology, as a convenient tool, can rapidly distinguish differentially expressed genes (DEGs) under a variety of environmental conditions. Important genes encoding kinases, transcription factors, and carbohydrate metabolism enzymes, as well as genes involved in signal transduction pathways (second messengers, reactive oxygen species (ROS), and plant hormones), have been identified via transcriptomic analyses of plant responses to K+ deficiency in many plants, including rice, wheat, apple and banana [ 17 – 20 ].
To understand the molecular and physiological mechanisms of sweetpotato resistance to low-K+ stress, Xu32 (a low-K+-tolerant genotype) and NZ1 (a low-K+-sensitive genotype) were screened from 31 sweetpotato materials according to their K+ utilization [ 2 ]. In this study, physiological and biochemical indexes were compared between the two sweetpotato genotypes, and RNA sequencing (RNA-seq) was performed to explore differences in their transcriptome profiles by characterizing the temporal patterns of gene expression and regulation under low-K+ conditions.
Materials and methods Plant material and stress treatment Two sweetpotato genotypes, Xu32 and NZ1, were propagated in a greenhouse and cultured hydroponically in a growth chamber under the following conditions: relative humidity, 50–70%; 12-h light/12-h dark photoperiod; temperature, 30 °C day/25 °C night. Sweetpotato seedlings with consistent growth and similar features, including four leaves, a base stem diameter of 12–13 mm, a stem length of 20 ± 0.5 cm, and three internodes, were selected and transferred into light-proof boxes for hydroponic cultivation. The seedlings were cultured in water for 3 days for recovery and then transferred into modified Hoagland nutrient solution. The complete nutrient solution contained 20 mM K2SO4, 2 mM Ca(NO3)2·4H2O, 0.65 mM MgSO4·7H2O, 0.25 mM NaH2PO4·2H2O, 0.1 mM Fe-EDTA, 1 mM MnSO4, 1 mM ZnSO4·7H2O, 0.01 mM CuSO4·5H2O, 0.005 mM (NH4)6Mo7O24·4H2O, and 1 mM H3BO3, pH 6.0. For the low-K+ condition, the concentration of K2SO4 was changed to 1 mM, while the remaining components were unaltered. The sweetpotato seedlings were propagated and preserved by our laboratory. Whole plants were harvested on the 15th day of treatment for the determination of physiological performance and RNA-seq analysis. Determination of biomass, potassium content, and soluble sugar content in plants For the determination of biomass, the seedlings of the two sweetpotato genotypes were harvested 15 days after beginning the K+-deficiency stress treatment. All plant samples were dried at 70 °C for 72 h until their weight remained constant, and the dry weight was measured. The dried seedlings were ground and used for the measurement of K+ contents and soluble sugar contents. The soluble sugar content was determined by anthrone colorimetric analysis according to the method reported by Ebell [ 45 ]. Each sample was taken from six different plants and mixed as one biological replicate, and a total of three biological replicates were taken. To analyze the K+ concentration, approximately 0.1 g of dried sample was collected into a digestion tube, and 5 ml of concentrated sulfuric acid was then added overnight. The following day, the samples were transferred and incinerated in a muffle furnace at 350 °C for 3 h. After the solution was cooled, several drops of 30% H2O2 were added until the solution became colorless. Flame photometry (PFP7; Jenway, UK) was used to determine the K+ concentration with an exponential calibration curve drawn from 100, 50, 25, and 12.5 ppm potassium standards. A total of three biological replicates were taken. Analysis of photosynthetic activity The net photosynthetic rate (Pn) was measured using a portable photosynthesis system (LI-6400 XT, LI-COR, Inc, Lincoln, NE, USA) at 12:00–2:00 pm on the 15th day of treatment. Two grams of fresh leaves (the third to fifth leaves) were cut into pieces and placed into 50 ml centrifuge tubes with 20 ml of 80% ethanol solution. The centrifuge tubes were then kept in the dark overnight, and the chlorophyll content (chlorophyll a and chlorophyll b) was determined with a UV-visible spectrophotometer at 665 and 649 nm, respectively. Measurement of K+ fluxes in the roots Sweetpotato root segments with 2-cm apices were used for K+ flux measurement with a noninvasive microtest system (NMT, NMT-100-SIM-YG, Younger USA LLC, Amherst, MA, USA). The measuring solution contained 0.1 mM NaCl, 0.1 mM MgCl2, 0.1 mM CaCl2, and 0.5 mM KCl.
The steady K+ flux was recorded for 10 min in the meristem, elongation, and mature root zones. Hormone content measurement Fresh sweetpotato samples were harvested, immediately frozen in liquid nitrogen, and extracted with methanol/water/formic acid (15:4:1, V/V/V). The combined extracts were reconstituted in 80% methanol (V/V) for liquid chromatography-mass spectrometry analysis. A total of 13 compounds classified into abscisic acid (ABA), jasmonic acid (JA), and salicylic acid (SA) classes were quantified via ultra-performance liquid chromatography (ExionLCTM AD, MASS, USA) and tandem mass spectrometry (QTRAP® 6500+, MASS, USA), and analysis was performed using Analyst® 1.6.3 software (AB SCIEXTM, MASS, USA). Total RNA extraction and cDNA library construction Total RNA from the two potassium concentration treatments of the two sweetpotato cultivars was extracted and monitored on 1% agarose gels for primary detection. The RNA purity (OD260/280 and OD260/230) and integrity were measured using a NanoPhotometer® spectrophotometer (IMPLEN, CA, USA). Sequencing libraries were generated using the NEBNext® UltraTM RNA Library Prep Kit for Illumina® (NEB, California, USA), and the library quality was assessed on the Agilent Bioanalyzer 2100 system (Agilent, California, USA). Transcriptome sequencing and assembly The library preparations were sequenced on the Illumina HiSeq 2000 platform according to the manufacturer’s instructions (Illumina, San Diego, CA, USA), and paired-end reads were generated. All raw sequence reads were deposited in the NCBI Sequence Read Archive (SRA) under accession number PRJNA1013090. After removing adapters and filtering low-quality reads from the raw data, high-quality clean reads were mapped to the sweetpotato reference genome sequence using TopHat2 software. Identification and functional annotation of differentially expressed genes (DEGs) The DEG libraries prepared from samples of the potassium-deficiency treatment and normal potassium treatment in the two sweetpotato cultivars, namely the Xu32 Control, Xu32 -K, NZ1 Control, and NZ1 -K treatments, were constructed and sequenced. To estimate the gene expression levels, clean data were mapped back onto the assembled transcriptome using RSEM, and the read count for each gene was then obtained from the mapping results and normalized using the reads per kilobase per million mapped reads (RPKM) method. Differential expression analysis of the two groups was performed using the DESeq R package (1.10.1) [ 46 ]. Gene Ontology (GO) enrichment analysis of the DEGs was implemented using the Goseq R package based on Wallenius’ noncentral hypergeometric distribution [ 47 ], and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis was conducted to determine the statistical enrichment of DEGs in KEGG pathways using KOBAS ( www.kegg.jp/kegg/kegg1.html ) [ 48 ]. Quantitative real-time polymerase chain reaction (qRT-PCR) analysis Ten randomly selected common DEGs in Xu32 and NZ1 were subjected to qRT-PCR. During this procedure, 1 μg of total RNA was transcribed into cDNA using the TIANScriptIIRT Kit (TIANGEN, Beijing, China), and qRT-PCR was performed using the OneStep Real-Time System (Applied Biosystems, Foster City, California, USA) according to the manufacturer’s instructions. The reference gene Actin was used for normalization, and three independent biological replicates were performed for each sample. The comparative CT method (2^−ΔΔCT method) was used to analyze the gene expression levels.
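As a minimal sketch (not the authors' script) of the comparative CT calculation named above, with illustrative CT values rather than study data:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ΔΔCT fold change of a target gene in a treated sample relative
    to a control sample, each normalized to the Actin reference gene."""
    delta_ct_treated = ct_target - ct_ref              # normalize treated sample
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl    # normalize control sample
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Illustrative values only: a target CT of 24.1 (treated) vs 26.3 (control),
# with reference CTs of 18.0 and 18.1, gives roughly a 4.3-fold up-regulation.
print(relative_expression(24.1, 18.0, 26.3, 18.1))
```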
The specific primers used are listed in Table S 5 . Statistical analysis The data were analyzed using the SPSS software program (SPSS Statistics v. 20.0, Chicago, IL, USA), and the results are presented as the sample means ± SD ( n = 3). Statistical analysis was conducted using one-way analysis of variance (ANOVA), followed by Duncan’s test at a significance level of P < 0.05.
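For the flame photometry step described above, the sketch below fits an exponential calibration curve to the listed potassium standards and inverts it to convert sample readings to ppm. The saturating-exponential form and the emission readings are our assumptions for illustration; the methods state only that an exponential calibration curve was used.

```python
import numpy as np
from scipy.optimize import curve_fit

standards_ppm = np.array([12.5, 25.0, 50.0, 100.0])  # standards from the methods
emission = np.array([18.0, 33.0, 57.0, 90.0])        # hypothetical readings

def calib(x, a, b):
    # assumed saturating exponential: emission approaches a as K+ increases
    return a * (1.0 - np.exp(-b * x))

params, _ = curve_fit(calib, standards_ppm, emission, p0=(120.0, 0.01))

def reading_to_ppm(y, a, b):
    # invert the fitted curve for an unknown sample reading
    return -np.log(1.0 - y / a) / b

print(reading_to_ppm(45.0, *params))  # estimated K+ concentration in ppm
```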
Results Physiological performance analysis of two sweetpotato genotypes in response to K+ deficiency Clear symptoms developed in the two sweetpotato genotypes between the control and low-K+ treatments. Although neither genotype grew well under the low-K+ treatment, NZ1 exhibited poorer performance than Xu32. Some leaves of NZ1 became chlorotic and necrotic, while the leaves of Xu32 remained green (Fig. 1 A). Accordingly, the average dry weight per plant was markedly reduced in NZ1 under low-K+ conditions, whereas there was no significant reduction in Xu32 (Fig. 1 B). K+-deficiency stress caused a significant reduction of the Pn and chlorophyll content in both genotypes. NZ1 was much more affected than Xu32, with NZ1 and Xu32 showing 41.70% and 36.63% reductions in Pn, respectively, and 77.21% and 44.71% reductions in total chlorophyll content, respectively (Fig. 2 A and B). The total soluble sugar content exhibited a slight increase in Xu32 (0.87%) and a slight decrease in NZ1 (-4.40%) (Fig. S 1 ). Although there was little difference in K+ concentrations between the two genotypes under low-K+ treatment (Fig. 1 C), the K+ influx rate in the roots differed between the two genotypes. The K+ influx rate of the meristem zone in Xu32 was markedly higher than that in NZ1 roots under normal conditions (Fig. 1 D). The K+ efflux rates of the elongation zone and mature zone in Xu32 were 5.07 pmol cm−2 s−1 and 4.28 pmol cm−2 s−1, respectively, while the rates in NZ1 were 6.78 pmol cm−2 s−1 and 8.81 pmol cm−2 s−1, respectively. K+-deficiency stress enhanced K+ efflux in both sweetpotato genotypes. After 15 days of low-K+ treatment, the K+ efflux leakage rate was 9.74 pmol cm−2 s−1 in the elongation zone and 5.99 pmol cm−2 s−1 in the mature zone of Xu32, compared with 7.46 pmol cm−2 s−1 in the elongation zone and 9.35 pmol cm−2 s−1 in the mature zone of NZ1, although the differences between genotypes were not significant. K+ influx was still observed in the meristem zones of both genotypes, with a 22-fold higher K+ influx in the meristem zone of Xu32 (19.34 pmol cm−2 s−1) than in that of NZ1 (0.86 pmol cm−2 s−1). Furthermore, the contents of stress-related hormones were measured. A total of 13 compounds classified into ABA, JA, and SA classes were identified. Opposite change trends of the hormones were found between Xu32 and NZ1. After low-K+ stress treatment, the content of ABA increased by 40% in Xu32, whereas it decreased by 34.4% in NZ1; the content of its related metabolite ABA-glucosyl ester remained almost unchanged in Xu32, whereas it decreased by 9.0% in NZ1; the contents of JA and its related metabolites N-[(-)-jasmonoyl]-(L)-valine, dihydrojasmonic acid, jasmonoyl-L-isoleucine, methyl jasmonate, and cis-(+)-12-oxophytodienoic acid increased by 37.5%, 40.0%, 25.0%, 34.2% and 66.7% in Xu32, respectively, whereas they decreased by 39.2%, 60.0%, 25.0%, 45.8% and 2.5% in NZ1, respectively; the contents of SA and its related metabolite salicylic acid-O-β-glucoside increased by 15.0% and 15.1% in Xu32, respectively, whereas they decreased by 16.9% and 2.6% in NZ1, respectively (Fig. 3 ). RNA-seq data analysis and qRT-PCR validation Two cDNA libraries (Xu32 and NZ1) were established under normal and low-K+ conditions. A total of 113.13 Mb and 99.31 Mb raw reads were generated using Illumina HiSeqTM technology for the Xu32 and NZ1 libraries, respectively.
All of the raw reads were deposited in the NCBI SRA database (accession number PRJNA1013090). After removing the low-quality reads and trimming the adapter sequences, 109.03 Mb and 97.63 Mb clean reads were generated for the Xu32 and NZ1 libraries, respectively. In total, 73.89–83.11 Mb clean reads were successfully aligned to the sweetpotato reference genome (Table 1 ). Twelve randomly selected common DEGs in Xu32 and NZ1 were subjected to qRT-PCR to verify the accuracy of the RNA-seq results (Fig. S 2 ). Correlation analysis of the relative expression between qRT-PCR and RNA-seq was performed, as shown in Fig. 4 . The qRT-PCR results revealed that the gene expression trends were significantly correlated with those obtained from the RNA-seq data (r2 = 0.6649, Fig. 4 ), indicating that the RNA-seq results were reliable. DEG analysis in two sweetpotato genotypes in response to K+ deficiency Gene expression abundance was affected under K+-deficiency stress in the two genotypes. The DEGs were identified with an adjusted P value < 0.005 and |log2 fold change| > 1 based on pairwise comparisons between the control and low-K+ treatment for each genotype (Fig. 5 ). A total of 889 and 634 DEGs were identified in the comparisons of Xu32 Control versus Xu32 -K and NZ1 Control versus NZ1 -K, respectively, which indicated significant differences in the gene expression profiles between the control and potassium-deficiency stress groups. Notably, more DEGs were detected in Xu32 than in NZ1, suggesting that tolerance to K+ deficiency involves changes in the expression of a larger number of genes. Additionally, the number of downregulated genes was higher in Xu32 than in NZ1, but the number of upregulated genes was similar in both genotypes. There were 236 upregulated DEGs and 653 downregulated DEGs in Xu32, while there were 269 upregulated DEGs and 365 downregulated DEGs in NZ1 (Fig. 5 B & C). Among these DEGs, 256 were common to Xu32 and NZ1 under K+-deficiency stress. To identify the functions of the DEGs of the two sweetpotato genotypes under K+ deficiency, functional annotation of GO terms was performed (Fig. S 1 ). In Xu32, the GO term single-organism process in the biological process category was the most highly represented, followed by metabolic process, cellular process, and response to stimulus. The GO term catalytic activity in the molecular function category was the most significantly represented, followed by binding, transporter activity, and antioxidant activity. Membrane and cell, with nearly identical DEG numbers, were the most highly represented GO terms in the cellular component category. In NZ1, although most of the main functions of DEGs were similar to those of Xu32, the gene numbers of the main functions were considerably lower than those in Xu32. To further investigate which cellular networks could be regulated by K+-deficiency stress, KEGG pathway enrichment analysis was performed. The results of the KEGG enrichment analysis of DEGs are shown in Fig. S 2 . Twelve selected metabolic pathways were analyzed. The three pathways of phenylpropanoid biosynthesis, the metabolic pathway, and glycolysis/gluconeogenesis were the common crucial enriched pathways in both Xu32 and NZ1. Secondary metabolites related to abiotic stress were also enriched in the two genotypes.
However, the pathway of carbon fixation in photosynthetic organisms was particularly enriched in Xu32, and not in NZ1, possibly resulting in higher photosynthetic efficiency in Xu32 than in NZ1 under K+-deficiency stress (Fig. 2 ). Functional annotation of common metabolic pathways in the two sweetpotato genotypes To determine the mechanism of low-K+ tolerance in sweetpotato, the common and distinct metabolic pathways between Xu32 and NZ1 plants were analyzed. According to GO functional annotation, 13 DEGs in Xu32 and 11 DEGs in NZ1 encoded transporters (Table 2 ; Fig. 6 ). Aquaporins play important roles in maintaining cell homeostasis under abiotic stress [ 21 ]. In the current study, six common genes (TU43270, TU50919, TU50920, TU50921, TU16268, and TU8825) encoding aquaporins were upregulated under the K+-deficiency treatment in the two genotypes. The high-affinity K+ transporter (HAK) family, the largest K+ transporter family, plays a major role in K+ acquisition under low external K+ concentrations in plants [ 22 ]. IbHAK5 (TU56207) was the only K+ transporter DEG identified in the two sweetpotato genotypes. However, the relative transcript level of this gene was markedly higher in Xu32 than in NZ1 under both normal and low-K+ stress conditions. Three DEGs encoding zinc transporters were identified only in Xu32 and were downregulated upon K+ starvation. The antioxidative defense system, which is composed of enzymatic antioxidants, contributes to removing the excess reactive oxygen species (ROS) produced under environmental stress [ 23 ]. In the present study, five common DEGs (TU61028, TU6474, TU30936, TU14404, and TU30933) encoding antioxidant enzymes were identified in both Xu32 and NZ1 (Table 3 ; Fig. 6 ). Additionally, a total of 16 unique DEGs (15 upregulated and one downregulated) encoding antioxidant enzymes were identified in Xu32, whereas only four unique DEGs (one upregulated and three downregulated) were identified in NZ1. Most stress conditions are accompanied by changes in sugar distribution, metabolism, and transport. Sugar is an important energy source that acts as a metabolic substrate as well as a signaling molecule between cells [ 24 ]. In the present study, 14, 35, 10, 8, and 2 DEGs in Xu32 were found to be involved in starch and sucrose metabolism, glycolysis/gluconeogenesis, the pentose phosphate pathway, pyruvate metabolism, and the TCA cycle, respectively, while there were 14, 19, 8, 3, and 1 corresponding DEGs in NZ1, respectively. In brief, these key DEGs encoding transporters, or involved in the antioxidative defense system and sugar metabolism pathways, may contribute to K+-deficiency stress tolerance, but the different DEG numbers and expression intensities may give rise to different levels of tolerance to K+-deficiency stress between the two sweetpotato genotypes. Functional annotation of distinct metabolic pathways in the two sweetpotato genotypes PsbQ proteins are found in the thylakoid lumen of chloroplasts and regulate photosystem II (PSII) activity by influencing several parameters of PSII function. In this study, a photosynthesis-related gene encoding PsbQ (TU9485) was detected and found to be downregulated under K+-deficiency stress, with subsequent severe damage to the photosynthetic system in NZ1, while this gene was not differentially regulated in Xu32 (Table 4 ; Fig. 6 ). ABA, SA, and JA are prominent stress hormones involved in the response to various stressful conditions.
Importantly, after K+-deficiency stress treatment, an opposite change trend of the hormone contents was observed between Xu32 and NZ1 (Fig. 3 ). To further determine how the three hormones acted in response to K+-deficiency stress, the present study evaluated the DEGs involved in the targeted hormone signal transduction. The ABA receptor family (PYR/PYL) and protein phosphatase 2C (PP2C) are involved in the ABA-dependent pathway response to various stresses [ 25 ]. In the present study, four downregulated genes encoding the ABA receptor PYL (TU35784, TU35780, TU26256, and TU26257) and two genes encoding PP2C (one upregulated gene, TU35906, and one downregulated gene, TU26873) were found in Xu32 under K+-deficiency stress, whereas only one upregulated gene encoding PP2C (TU35906) was identified in NZ1 (Table 4 ; Fig. 6 ). Additionally, the TIFY/JAZ gene family represses the activity of transcription factors that promote the expression of JA-response genes [ 26 ]. A total of five genes encoding TIFY proteins (TU59083, TU34427, TU59081, TU20724, and TU1422) were downregulated in Xu32, whereas only one TIFY gene (TU51222) was upregulated in NZ1. Moreover, one gene encoding PR1 (TU9842), an SA-inducible marker gene, was upregulated only in Xu32 (Table 4 ; Fig. 6 ). Low potassium can regulate the expression of several stress-related genes. The expression levels of two genes encoding galactinol synthase (GOLS2, genes TU17507 and TU34960), one gene encoding a heavy metal-associated isoprenylated plant protein (HIPP26, the TU4051 gene), one gene encoding D-aminoacyl-tRNA deacylase (GEK1, the TU22143 gene), and one gene encoding basic endochitinase (CHIT1B, the TU14176 gene) were increased by K+ deficiency in both sweetpotato genotypes. However, the expression of cytochrome P450s (CYP450, genes TU23888, TU59206, TU59207, and TU23870) and pheophorbide a oxygenase (PAO, genes TU56536 and TU56537) was uniquely regulated by low potassium in Xu32, whereas ROSINA (RSI, gene TU47825), metallothionein (pKIWI, gene TU49279), and dual-specificity protein kinase splA (tag, gene TU58557) were uniquely regulated in NZ1 (Table S 3 ).
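As a small sketch of the DEG filter stated in the results above (adjusted P < 0.005 and |log2 fold change| > 1), assuming a typical DESeq results table with hypothetical file and column names:

```python
import pandas as pd

# hypothetical export of the DESeq results for one comparison
res = pd.read_csv("deseq_results_xu32_control_vs_lowK.csv")

degs = res[(res["padj"] < 0.005) & (res["log2FoldChange"].abs() > 1)]
up = degs[degs["log2FoldChange"] > 1]
down = degs[degs["log2FoldChange"] < -1]

# for the Xu32 comparison this should reproduce the reported counts:
# 889 DEGs in total, 236 upregulated and 653 downregulated
print(len(degs), len(up), len(down))
```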
Discussion Potassium is an essential soil nutrient that ultimately affects yields, especially for tuberous root crops. A series of adaptive biochemical and physiological responses have evolved in plants to cope with K+ deficiency. In the present study, apparent differences in phenotype, physiological and biochemical characteristics, and resistance mechanisms were observed between the two sweetpotato genotypes after K+-deficiency stress treatment. First, the phenotypic responses of Xu32 and NZ1 were compared. The results showed that Xu32 had a higher capability to absorb K+ than NZ1, with better growth performance, a higher Pn, and higher chlorophyll contents under low-potassium stress. In addition, RNA-seq analysis was used to reveal the molecular mechanism of the sweetpotato response to K+-deficiency stress. The reduction of the potassium concentration and accumulation led to the decline of photosynthetic function in the two sweetpotato genotypes, thereby causing the source–sink relationship to become unbalanced, which resulted in dry weight loss. Low-K+ stress caused changes in sugar metabolism, including anabolism and catabolism. Environmental stresses, such as low or high temperature, osmotic stress, nutrient stress, and pests and diseases, can also trigger changes in sugar metabolism [ 24 ]. Photosynthesis is the main source of sugar. The distinct expression level of PsbQ between Xu32 and NZ1 may have contributed to the difference in the extent of Pn damage, causing the end product of photosynthetically produced sugar to differ greatly between the two genotypes. Low K+ also regulated the expression of genes related to starch and sucrose metabolism, such as sucrose synthase (SS and SUS), which may have decreased the activity of sucrose synthetase in sweetpotato. More downregulated genes were involved in sugar catabolism in Xu32 than in NZ1, including starch and sucrose metabolism, the glycolysis/gluconeogenesis pathway, the pentose phosphate pathway, pyruvate metabolism, and the citrate cycle, which may be correlated with the different changes in soluble sugars between the two sweetpotato genotypes. Moreover, trehalose 6-phosphate (T6P) is a sugar molecule with a signaling function that reports the current sucrose state and plays a critical role in plant responses to environmental stresses [ 27 ]. Trehalose synthesis is catalyzed by trehalose-6-phosphate synthase (TPS) and trehalose-6-phosphate phosphatase (TPP). In the present study, TPP expression was highly upregulated in sweetpotato plants exposed to low-potassium stress, indicating that T6P may regulate plant responses to K+ deficiency. In brief, the changes in sugar allocation, metabolism, and transport under low-K+ stress contribute to low-K+ resistance in sweetpotato. K+ transporters and K+ channels are important factors that contribute to K+ uptake and translocation. Our previous studies identified and analyzed the K+ transporter and channel gene families in sweetpotato and verified that IbAKT1 and IbHKT1 function in K+ absorption [ 8 , 28 ]. HAK5, belonging to the high-affinity uptake system, is the only transporter capable of supplying K+ to the plant under very low external K+ conditions [ 29 ]. In the present study, IbHAK5 was highly induced in the two sweetpotato genotypes under K+-deficient conditions. It was also found that some aquaporin genes were significantly upregulated under low-potassium stress.
Aquaporins have an important regulatory function in water transport and the transport of other molecules across the plasma membrane and intracellular compartments, thereby playing important roles in maintaining cell homeostasis under abiotic stress [ 30 ]. On the basis of phylogenetic distribution and subcellular occurrence, the following five subfamilies have been classified among the aquaporin isoforms: plasma membrane intrinsic proteins (PIPs), tonoplast intrinsic proteins (TIPs), nodulin-like proteins (NIPs), small basic intrinsic proteins (SIPs), and uncharacterized intrinsic proteins (XIPs) [ 31 ]. In the present study, most of the upregulated aquaporins were TIPs, implying that the tonoplast alleviated the damage caused by low-potassium stress by storing water and regulating water movement. In addition, the Ca2+ transporter Ca2+-ATPase, ABC transporters, and sucrose and sugar transporters were also involved in the K+-starvation resistance process, with changed transcription levels. Plant endogenous hormones not only control plant growth and development under normal conditions but also mediate plant adaptation to various environmental stresses [ 32 ]. ABA, a key regulator of abiotic stress responses in plants, is strongly induced by salt and drought stresses [ 33 ]. The ABA core signaling pathway largely relies on the expression of the ABA receptors PYR/PYL to mediate several rapid responses to complex environmental conditions [ 34 ]. In the present study, the DEGs encoding PYLs were downregulated only in Xu32, while they remained unchanged in NZ1, implying that low-potassium stress could activate the ABA signaling pathway in the low-potassium-tolerant genotype. SA and JA mediate the regulation of abiotic stress tolerance in plants, and the external application of JA and SA can enhance plant resistance [ 35 , 36 ]. It has been reported that SA and JA may also play a role in the process of plant resistance against nutrient starvation stress, but the mechanism remains unclear [ 37 ]. The JA receptor COI1 can sense JA and bind to JAZ, after which JAZ is ubiquitinated and degraded, and downstream transcription factors or signal transduction proteins of JAZ are released, thus promoting the resistance response regulated by JA [ 38 ]. In the present study, the expression levels of JAZs decreased in Xu32, whereas they increased in NZ1, indicating that the JA signaling pathway was activated in Xu32 and that the accumulated JA may have allowed Xu32 to adapt to K+-starvation stress. Similarly, we inferred that SA signaling was activated in Xu32 but not in NZ1 because the key genes involved in SA signaling, such as the transcription factor TGA and its downstream gene PR, were regulated only in Xu32. In addition, the contents of ABA, JA, and SA were determined, and the contents of these hormones exhibited opposite change trends in the two sweetpotato genotypes. Hormone receptors may have sensed the accumulated ABA, JA, and SA during low-K+ stress treatment and then activated the corresponding hormone pathways and downstream genes, thus decreasing the damage induced by low-K+ stress in Xu32. In contrast, long-term K+-deficiency treatment may have disrupted the normal growth of plants and resulted in cell death and decreased endogenous hormones in NZ1.
The genes involved in early signal transduction pathways, such as Ca2+ signaling molecules and protein kinases, were not enriched in the present study, possibly because of the experimental design of long-term K+-deficiency stress treatment. Environmental stresses are usually accompanied by enhanced ROS production in plants, which induces oxidative stress and results in cellular damage and metabolic imbalance [ 39 ]. It has been reported that ROS generation and scavenging pathways, as well as the expression of scavenging enzymes, change under various abiotic stresses [ 40 ]. In the present study, the expression levels of peroxidases increased more in Xu32 than in NZ1, implying that more ROS were scavenged in Xu32, thereby decreasing the oxidative stress under K+ deficiency to a greater degree. Both common and distinct stress-related DEGs were found in Xu32 and NZ1, and the complex mechanism of tolerance to low potassium was further investigated. Plant adaptation to abiotic stress is significantly influenced by GolS, a regulatory enzyme that catalyzes the synthesis of raffinose family oligosaccharides (RFOs) [ 41 ]. The increased expression levels of two GolSs in both sweetpotato genotypes may contribute to strong low-potassium stress tolerance. In addition, HIPPs are involved in heavy metal stress tolerance and are induced in roots under excess Cd, Zn, Mn, and Cu stress [ 42 ]. In the present study, one HIPP was transcriptionally regulated in Xu32 and NZ1 under K+-starvation conditions, which may have also played a role in low-potassium stress tolerance. CYPs, the largest enzyme family, are involved in NADPH- and/or O2-dependent hydroxylation reactions and are found in all domains of living organisms, including bacteria, plants, and mammals [ 43 ]. The transcription level of CYP450 shows different change trends in response to diverse stresses. For example, the expression of CtCYP71A1 in safflower was increased under drought stress, while it increased initially and subsequently decreased with ABA, GA3, and SA treatment [ 44 ]. In the present study, the decreased expression level of CYP450s in Xu32 but not in NZ1 under low-potassium stress may indicate that CYP450s are involved in the K+-deficiency stress response and may enhance the tolerance of Xu32.
Conclusions The two sweetpotato genotypes exhibited different physiological features and transcription levels under low-potassium stress. The common and distinct expression patterns between the two sweetpotato genotypes illustrate a complex mechanism of the response to low potassium in sweetpotato. The greater number of DEGs identified in Xu32 than in NZ1 in response to K+ deficiency, belonging to photosynthesis, carbohydrate metabolism, ion transport, hormone signaling, stress-related genes, and antioxidant systems, possibly results in different levels of tolerance to low potassium (Figs. 6 and 7 ). Additionally, the findings of this study provide some candidate genes that can be used in sweetpotato breeding programs aimed at improving low-potassium stress tolerance.
Background Sweetpotato is a typical ‘‘potassium (K+) favoring’’ food crop, whose root differentiation process requires a large supply of potassium fertilizer and determines the final root yield. To further understand the regulatory network of the response to low-potassium stress, here we analyzed the physiological and biochemical characteristics and investigated the root transcriptional changes of two sweetpotato genotypes, namely, the low-K+-tolerant “Xu32” and the low-K+-susceptible “NZ1”. Results We found that Xu32 had a higher capability of K+ absorption than NZ1, with better growth performance, a higher net photosynthetic rate, and higher chlorophyll contents under low-potassium stress, and identified 889 differentially expressed genes (DEGs) in Xu32, 634 DEGs in NZ1, and 256 common DEGs in both Xu32 and NZ1. Gene Ontology (GO) enrichment analysis in the molecular function category revealed that the DEGs under low-K+ stress are predominantly involved in catalytic activity, binding, transporter activity, and antioxidant activity. Moreover, the greater number of DEGs identified in Xu32 than in NZ1 in response to K+ deficiency, belonging to photosynthesis, carbohydrate metabolism, ion transport, hormone signaling, stress-related processes, and the antioxidant system, may result in different levels of K+-deficiency tolerance. The unique genes in Xu32 may make a great contribution to enhanced low-K+ tolerance and provide useful information on the molecular regulatory mechanism of K+-deficiency tolerance in sweetpotato. Conclusions The common and distinct expression patterns between the two sweetpotato genotypes illustrate a complex mechanism of the response to low potassium in sweetpotato. The study provides some candidate genes that can be used in sweetpotato breeding programs for improving low-potassium stress tolerance. Supplementary Information The online version contains supplementary material available at 10.1186/s12864-023-09939-5. Keywords
Supplementary Information
Acknowledgements We would like to thank Dr. Li Zongyun and Dr. Ma Daifu for providing valuable feedback during the implementation of the experiments and the writing of the manuscript. Authors' contributions R.J, M.X.Y, G.H.L, M.L, P.Z, Q.Q.Z, X.Y.Z, J.W and Y.C.Y were responsible for the conception, planning, and organization of the experiments. R.J performed the experiments and wrote the manuscript. M.X.Y, G.H.L, M.L, P.Z, Q.Q.Z, X.Y.Z, J.W and Y.C.Y analyzed the data. Z.Z, A.J.Z, J.Y, Z.Y.L and Z.H.T reviewed the manuscript. All authors read and approved the final manuscript. Funding This work was supported by the Special Fund for Scientific Research of Shanghai Landscaping & City Appearance Administrative Bureau (G222413), China Agriculture Research System of MOF and MARA, and Xuzhou Science and Technology Plan Project (KC22035). Availability of data and materials The RNA-seq datasets analysed during the current study are available in the NCBI repository, accession number PRJNA1013090. Declarations Ethics approval and consent to participate The sweetpotato seedlings were permitted to be used in this study. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Genomics. 2024 Jan 15; 25:61
oa_package/39/31/PMC10789036.tar.gz
PMC10789037
38221618
Introduction COVID-19 was officially declared a pandemic in March 2020 and lasted for about 2 years. Control measures included restrictions on movement (lockdowns), resulting in the disruption of health care services, including routine immunization. In May 2020, WHO reported nearly 90% disruptions to essential health services, with immunization services being most frequently affected [ 1 , 2 ]. Coverage of DPT-3 and the first dose of measles-containing vaccine (MCV-1) dropped from 86% in 2019 to 81% in 2021, leaving around 5 million more children unvaccinated in 2021 as compared to 2019 [ 3 ]. According to UNICEF, 67 million children missed out entirely or partially on routine immunization between 2019 and 2021 [ 4 ]. Zero-dose (unvaccinated) children increased from 13 million in 2019 to 18 million in 2021, and partially vaccinated children increased from 6 million in 2019 to 25 million in 2021 [ 4 ]. Due to this sharp decline in vaccinated children, one in five children worldwide was not fully protected against vaccine-preventable diseases (VPDs). This can lead to secondary outbreaks of VPDs and higher childhood morbidity and mortality. The number of measles cases doubled in 2022 compared with the previous year [ 5 ]. Prior to the COVID-19 pandemic, inequity in routine immunization had been reported, particularly in low- and middle-income countries (LMICs), with children in low socioeconomic strata and remote rural areas less likely to be fully vaccinated due to inadequate health infrastructure and poor supply chains [ 6 ]. Although immunization coverage in India increased from 62% (NFHS 2015-16) to 76.4% (NFHS 2019-21) in children aged 12–23 months through special immunization drives like Mission Indradhanush, there is evidence of existing inequalities in routine immunization coverage in India prior to the COVID-19 pandemic [ 7 – 10 ]. Several barriers during the pandemic, including parental and health care workers' concerns regarding exposure to COVID-19 infection, transport restrictions due to lockdown, economic hardships, reallocation of resources, and disruptions in supply chains, compounded the existing inequities in routine immunization [ 1 , 11 ]. Studies from India have reported a major decline in vaccination coverage during the pandemic period [ 12 , 13 ]. However, no study has been conducted in the post-COVID period to assess the extent of recovery from the impact of the pandemic. We conducted this study to determine the impact of the COVID-19 pandemic on routine immunization and assess the immunization status of children in the post-COVID era.
Materials and methods We conducted this cross-sectional survey in a tertiary care hospital in Delhi from February 2023 to May 2023 after approval from the institutional ethics committee (IECHR-2023-58-1-R1). According to Kumar et al., 17.8% of children admitted to hospital were completely immunized [ 14 ]. The sample size, calculated for a power of 80% and an alpha error of 0.05, was 225. Written informed consent was obtained from the parents. Parents of 225 consecutive children aged 1–6 years in the pediatric ward were interviewed using a semi-structured open-ended questionnaire at the time of discharge from the hospital. Demographic and socioeconomic data were recorded when the children were well enough to be discharged. Children who had received BCG, three doses each of oral polio vaccine (OPV)/pentavalent vaccine/rotavirus vaccine, and one dose of measles vaccine within the first year of life were classified as completely immunized [ 14 ]. Those who had missed any dose of the mentioned vaccines were labeled as partially immunized, and those who had not received any vaccine within the first year of life were classified as non-immunized [ 14 ]. Immunization status was confirmed with the immunization card or hospital prescriptions. Reasons for partial and non-immunization were recorded. If there was a delay in routine immunization due to the COVID-19 pandemic, the reasons were recorded. Statistical analysis was done using SPSS v29.0 (IBM, SPSS Statistics for Windows, Armonk, NY: IBM Corp, USA). A p value of < 0.05 was considered significant. The clinicodemographic profile and reasons for delay were expressed as percentages. The numbers of children with complete, partial, and no immunization were expressed as percentages. Associations between immunization status and the sociodemographic profile were determined using chi-square and logistic regression analysis.
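As a worked check of the reported sample size (a sketch, assuming a 5% absolute margin of error, which is not stated in the text), the standard single-proportion formula n = Z² · p(1 − p) / d² reproduces the figure of 225:

```python
import math

p = 0.178   # prior prevalence of complete immunization (Kumar et al.)
z = 1.96    # two-sided alpha = 0.05
d = 0.05    # assumed absolute margin of error

n = (z ** 2) * p * (1 - p) / (d ** 2)
print(math.ceil(n))  # -> 225
```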
Results We enrolled 225 children; 133 (59.1%) were male and 92 (40.9%) were female. The age distribution was 92 (40.8%; 95% CI 34–47%) aged 12–24 months, 43 (19.1%; 95% CI 14–24%) aged 25–36 months, and 90 (40%; 95% CI 34–46%) aged 37–72 months. The majority of cases, i.e., 146 (64.9%; 95% CI 59–71%), resided in Delhi, and 79 (35.1%; 95% CI 29–41%) were from neighboring states. Of the 225 children, 162 (72%; 95% CI 66–78%) were completely immunized, 55 (24.4%; 95% CI 19–30%) were partially immunized, and 8 (3.6%; 95% CI 1–6%) were unimmunized. For the purpose of analysis, partially immunized and unimmunized cases were grouped together, and the characteristics of the completely immunized were compared to the combined group of partially immunized and unimmunized cases. Table 1 shows the comparison of the completely immunized and partially/unimmunized groups in relation to their demographic profiles. Children who had hospital deliveries, birth order ≤ 2, and completely immunized siblings were more likely to be completely immunized ( p < 0.05). Parents with a better education level were found to have children with better immunization status ( p < 0.001). However, sex, address, birth weight, socioeconomic status, and distance to the vaccination center were not significantly associated with complete immunization ( p > 0.05). As shown in Fig. 1 , the 1st dose of measles vaccine and the 3rd doses of pentavalent vaccine (DPT + Hib + Hep B), OPV, and rotavirus vaccine were the most commonly missed vaccines among the partially/non-immunized children. Reasons for non-immunization and partial immunization are shown in Table 2 . Lack of knowledge and awareness ( n = 36, 57.1%; 95% CI 45–70%) and the presence of illness in the child ( n = 21, 33.3%; 95% CI 21–45%) were the most common reasons for partial and non-immunization. Lack of knowledge and awareness (50.9%; 95% CI 37–65%) was the most common reason for partial immunization. Other reasons were: busy schedule of parents ( n = 5), went to village/hometown ( n = 5), religious beliefs ( n = 4), disharmony in parents/family ( n = 1), fear of side effects/reactions/pain ( n = 1), and shifting of home ( n = 1). Lack of knowledge and awareness was more common in parents with a lower level of education and in those who resided outside Delhi ( p < 0.001). However, no association was found between lack of knowledge/awareness and the socioeconomic status of the family ( p > 0.05). Delay in immunization due to COVID-19 was found in 50 cases (22.2%; 95% CI 17–28%). Of these, 39 (17.3%; 95% CI 12–22%) had missed their scheduled routine immunization but received vaccination in catch-up visits and were completely immunized at the time of interview. The remaining 11 cases were still partially immunized. No case remained completely unimmunized due to the COVID-19 pandemic. Table 3 shows the reasons for delay during the COVID-19 pandemic. Restrictions on movement (64%; 95% CI 50–78%) and fear of being exposed to COVID-19 (52%; 95% CI 38–66%) were the most common reasons for delay in immunization during the COVID-19 pandemic. Other reasons for delay during COVID-19 were: non-availability of vaccine at the center ( n = 7), went to village/hometown due to the pandemic ( n = 5), nearby vaccination center was closed ( n = 3), health workers not coming for vaccination ( n = 2), and someone from the family becoming ill/dying during COVID-19 ( n = 1). Measles vaccine (MR-1 and MR-2) and the 3rd doses of pentavalent vaccine, OPV, and rotavirus vaccine were the most commonly missed/delayed vaccines due to the COVID-19 pandemic (Fig. 2 ).
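The interval estimates reported above can be reproduced with the normal approximation for a proportion; the sketch below (not the authors' SPSS output) checks the 72% (95% CI 66–78%) figure for complete immunization:

```python
import math

def prop_ci(k: int, n: int, z: float = 1.96):
    """Point estimate and normal-approximation 95% CI for a proportion."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

p, lo, hi = prop_ci(162, 225)  # completely immunized children
print(f"{p:.1%} (95% CI {lo:.0%}-{hi:.0%})")  # 72.0% (95% CI 66%-78%)
```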
Discussion The present study assessed the impact of the COVID-19 pandemic on routine immunization. Our results suggest that the COVID-19 pandemic led to delayed vaccinations and disruptions in routine immunization. In our study, 22.2% reported a delay in routine immunization due to the pandemic. Our results are consistent with previous reports. Alsuhaibani et al. reported a delay in routine immunization in 24% of cases [ 15 ]. A recent study from Rajasthan also reported 31% COVID-19-related disruptions in immunization [ 16 ]. Immunization began to decline in March–April 2020 when the lockdown was implemented in India. A study from India by Shet et al. reported a decline of 83.1% in vaccination in April–June 2020 and a decline of 32.6% in September 2020 [ 13 ]. Although the Government of India continued all health services, including immunization services, during the lockdown, the utilization of immunization services substantially declined. Shet et al. reported 33.4% complete or partial suspension of immunization services at various centers in India [ 13 ]. Barriers on the supply chain side and barriers on the caregivers' side were responsible for the decreased utilization of health care services [ 11 ]. Consistent with previous research reports, our study showed that fear of contracting COVID-19 was the most common reason among parents [ 15 , 17 , 18 ]. Other reasons included the non-availability of vaccines and closed dispensaries/clinics. During the pandemic, many hospitals were providing COVID-19-related services exclusively. During the pandemic, parents preferred home vaccination or facilities dedicated exclusively to immunization services. Anganwadi centers (community-level mother–child nutrition and welfare centers), which are alternative vaccination facilities, were also closed [ 19 ]. In our study, a few caregivers reported health workers not coming for vaccination visits as the reason for delay. Health care workers faced challenges in delivering services due to fear of infection, stress, and inadequate protective equipment [ 11 ]. We found that the third dose of pentavalent vaccine (DPT + Hep B + Hib), OPV, and the first dose of measles were the most common among missed vaccinations. This finding is consistent with previous studies [ 20 , 21 ]. In our study, vaccination administered around birth (BCG, hepatitis B) was less affected when compared to vaccination at later stages of life. A likely explanation for this finding is that these children were in health facilities and within the reach of routine health services, while the subsequent visits were more affected [ 22 ]. Similar trends, in which the administration of BCG and hepatitis B vaccines declined less than later vaccines, have been reported [ 12 ]. On the contrary, Pakistan's Sindh province reported BCG vaccination as more disrupted than follow-up vaccines due to enrollment services being more affected [ 11 ]. The prevalence of completely immunized children in our study was 72%, and the prevalences of partially immunized and non-immunized children were 24.4% and 3.6%, respectively. Studies from the pre-COVID era have reported even lower prevalences of fully immunized children. Kumar et al. reported a prevalence of fully immunized children of 17.85% [ 14 ]. Vohra et al. reported that 56.4% of children aged 12–23 months were fully immunized [ 23 ]. According to the National Family Health Survey (NFHS) 2015–16, 62% of children aged 12–23 months were fully vaccinated, whereas NFHS 2019–21 reported 76.4% of children as fully vaccinated [ 7 , 8 ].
These findings show that India has been successful in increasing immunization coverage with the help of special immunization campaigns such as Mission Indradhanush and supplementary immunization activities such as the measles–rubella campaign. Most of the children who had missed vaccination due to the COVID-19 pandemic received their missed vaccinations in subsequent catch-up visits. In the pre-COVID era, pre-existing inequalities and sociodemographic factors such as parental education, socioeconomic class, birth order, religion, and family structure were the challenging factors in achieving the goal of universal immunization. We found that lack of knowledge, the child's illness, busy parents, fear of side effects, and religious beliefs were the common reasons for partial and non-immunization. Similarly, Kumar et al. and Vohra et al. documented lack of knowledge, fear of side effects, the child's illness, and busy parents as the most common reasons for partial or non-immunization in the pre-COVID era [ 14 , 23 ]. In the present study, higher parental education, lower birth order, and institutional deliveries were found to be associated with better immunization status of children. Similar findings have been documented in previous studies conducted before the COVID-19 pandemic [ 10 , 14 , 23 ]. These preexisting challenges were compounded by the disruptions during the COVID-19 pandemic. The sample size calculation was based on a 17.8% prevalence of complete immunization among admitted children; being based on hospital data, it does not reflect the true picture and is not representative of the community, which is a limitation of the study and affects its generalizability. It is noteworthy that a major proportion of the delay/non-immunization was attributed to lack of awareness, and only a small proportion was attributable to reverse migration to native villages and the non-accessibility of services in native places.
Conclusion We conclude from our study that although the disruptions of the COVID-19 pandemic resulted in delayed or missed vaccinations during the lockdown, immunization coverage recovered to some extent with catch-up campaigns. Further, immunization coverage depends more largely on people's behavior, lack of awareness, and sociodemographic factors than on the COVID-19 pandemic as such. In the future, with the possibility of similar pandemics, disruptions can be avoided with better resource management and immunization strategies. Recovery in immunization coverage can be achieved with more supplementary immunization activities and awareness campaigns.
Background Low immunization coverage in India is attributable to many factors, including sociodemographic factors and people's behavior. The COVID-19 pandemic resulted in disruptions in achieving optimum availability and utilization of immunization services. This study was carried out to determine the immunization status of children in the post-COVID era and the various factors responsible for non-immunization during the pandemic. Methods In this cross-sectional study, parents of 225 admitted children aged 1–6 years were interviewed using a semi-structured open-ended questionnaire. Children were classified as completely immunized, partially immunized, or unimmunized on the basis of vaccines missed during the first year of life. Reasons for non-immunization and delayed/missed vaccination during the COVID-19 pandemic were recorded. Results Of the 225 children, 162 (72%; 95% CI 66–78%) were completely immunized, 55 (24.4%; 95% CI 19–30%) were partially immunized, and 8 (3.6%; 95% CI 1–6%) were unimmunized. Parents with hospital deliveries, a higher education level, and lower birth order were more likely to have children with better immunization status ( p < 0.05). The first dose of measles vaccine scheduled at 9 months and the 3rd dose of pentavalent vaccine/OPV/rotavirus vaccine scheduled at 14 weeks were the most commonly missed vaccines among the partially immunized. Lack of awareness ( n = 36, 57.1%; 95% CI 45–70%) was the most common reason for partial and non-immunization, followed by illness of the child ( n = 21, 33.3%; 95% CI 21–45%) and the COVID-19 pandemic ( n = 11, 17.4%; 95% CI 8–27%). The pandemic was the reason for delay in 50 (22.2%; 95% CI 17–28%) children. Restrictions on movement (64%; 95% CI 50–78%) and fear of being exposed to COVID-19 (52%; 95% CI 38–66%) were the most common reasons for delay during the pandemic. Of the 50 children who had a delay due to the pandemic, 39 children (17.3%; 95% CI 12–22%) received their catch-up immunization after the pandemic. No child remained completely unimmunized due to the COVID-19 pandemic. Conclusions Although the COVID-19 pandemic resulted in disruptions in routine immunization services, sociodemographic factors such as awareness of immunization, parental education, and various beliefs about immunization were responsible for children remaining unimmunized or partially immunized after the pandemic. Keywords
Author contributions AA conceptualized the study, was involved in study design, protocol submission, obtaining ethical approval, data analysis, literature search, manuscript drafting and critical analysis of the manuscript. Did not receive funding. Approves of the final version. NA was involved in study design, protocol submission, obtaining ethical approval, data collection and analysis, literature search, manuscript drafting and critical analysis of the manuscript. Did not receive funding. Approves of the final version. PA was involved in study design, protocol submission, obtaining ethical approval, data collection and analysis, literature search, manuscript drafting and critical analysis of the manuscript. Did not receive funding. Approves of the final version. Funding None. Availability of data and materials Data and material will be available with Dr Anju Aggarwal. Declarations Ethics approval and consent to participate Ethical approval was obtained from the institute's ethics committee (IECHR-2023-58-1-R1). Consent for publication All authors give consent for publication. Competing interests None.
CC BY
no
2024-01-16 23:45:34
J Health Popul Nutr. 2024 Jan 15; 43:8
oa_package/bb/c6/PMC10789037.tar.gz
PMC10789038
0
Introduction The COVID-19 pandemic disproportionately impacted residents of long-term care facilities (LTCFs), who have suffered higher mortality rates than the general population; in Washington State (WA), LTCF-associated cases represent 3% of cases, but 30% of deaths due to SARS-CoV-2 [ 1 ]. This impact materialized in WA and across the US despite early recognition of LTCFs as high-risk settings due to residents’ advanced age, chronic underlying health conditions, congregate living, asymptomatic transmission, and movement of healthcare personnel [ 2 – 4 ]. Based on these concerns, the Centers for Disease Control and Prevention (CDC) developed recommendations over the course of the pandemic for infection prevention and control (IPC) in LTCFs, including training, use of personal protective equipment (PPE) and hygiene measures, visitor restrictions, resident distancing and cohorting, environmental cleaning and disinfection, testing and reporting to public health jurisdictions, and provision of staff sick leave [ 5 ]. Similarly, WA’s governor, secretary of health, and Department of Health (DOH) developed and instituted regulations and guidance governing prevention efforts [ 6 , 7 ]. The Centers for Medicare and Medicaid Services (CMS) outlined rules for testing staff and residents of LTCFs [ 8 ]. Changes in these rules, regulations, and guidance over time are expected to have impacted transmission dynamics in LTCF settings. One key tool for understanding transmission dynamics in these settings is pathogen genomic sequencing and analysis, particularly phylogeographic analysis. Understanding sampling methodology is important for describing potential bias in this type of analysis [ 9 – 11 ]. Systems for sequencing SARS-CoV-2 specimens have changed over time. Prior to March 2021, sampling for sequencing from WA residents was convenience- or research-based. In March 2021, a sentinel surveillance system was implemented in WA to support representative sampling [ 9 ]. The population of WA LTCF-associated cases with genomic data available is as yet undescribed. Additionally, the utility of the existing surveillance system for adding insight and actionable data for public health practice has not been completely explored. Multiple examples of genomic epidemiology studies of single outbreaks or facilities exist in the literature, including from WA. A previous study documented the utility of targeted genomic surveillance during two SARS-CoV-2 outbreaks in LTCFs in WA [ 12 ]. Likewise, a study of a single LTCF-associated outbreak in WA early in the pandemic utilized genomic epidemiology to understand phylogenetic clustering of cases within the facility [ 13 ]. Fewer studies have leveraged pathogen genomic data to describe how transmission dynamics changed over the pandemic or describe the impact of sequence data availability on public health action. A review article assessing published genomic epidemiologic investigations during 2020 documented the value of this type of analysis for identifying independent clusters of infections but found that large-scale sequencing of outbreaks added limited value after sequencing initial cases, focusing on individual outbreak- or facility-level studies [ 14 ]. An analysis of all care-home linked cases in the east of England used genomic epidemiology to explore large-scale transmission dynamics in nearly 300 facilities; however, this analysis was limited to a 3-month study period [ 15 ].
Here, we aim to assess the utility of genomic data produced for LTCF-associated cases in informing public health action over the course of the SARS-CoV-2 pandemic, from 2020 to 2022. We pair patient-level epidemiological and pathogen genomic data to understand variations in transmission patterns over time. Specifically, we address the following questions of public health concern: Are the available genomic data obtained from LTCF-associated cases representative of all LTCF-associated cases? Do temporal changes in guidance or policy appear to impact intra-facility transmission patterns? Given available data, which genomic-epidemiologic methods are most applicable for ongoing or routine data analysis? And finally, what changes are needed to ensure the ongoing use of genomic data to explore transmission in LTCF settings?
Methods Data collection and cleaning All confirmed COVID-19 cases, including reinfections, reported among WA residents in the Washington Disease Reporting System (WDRS) as of December 19, 2022 were included [ 16 ]. Sequences uploaded to the GISAID EpiCoV database indicating WA in their geographic tag were linked to these cases using laboratory accession numbers or patient demographics [ 17 ]. For cases with multiple specimens sequenced, only the first specimen was used for analysis. Long-term care facilities were defined as: nursing homes, assisted living facilities, adult family homes, enhanced services facilities, and intermediate care facilities for individuals with intellectual disabilities. Cases in WDRS are categorized as LTCF-associated if association with a facility is noted in a case interview, medical record, facility line list, address or telephone match to the facility, or another measure indicated by the Local Health Jurisdiction. LTCF-associated cases therefore include residents, employees, and visitors if an association is noted. Enhanced data obtained on October 24, 2022 from Yakima Health District, tracking additional details related to LTCF cases and outbreaks, were linked to WDRS and GISAID data by probabilistic matching on name and date of birth, with manual review. Representativeness analysis All epidemiological data analysis was performed in R version 4.2.2 [ 18 ]. Representativeness of LTCF-associated cases with sequencing performed was assessed by comparison to all LTCF-associated cases on: sex, age, race, ethnicity, language, outbreak association, symptom status, hospitalization, death, and facility type. Sampling for sequencing over time in the full population and in LTCFs was graphed. Definition of study time-periods Information available from the WA Governor’s News Release Archive and WA DOH records was used to construct a timeline of key modifications to rules, regulations, or guidance for LTCFs. This timeline was used to divide the study period into six segments of approximately equal length, marked by key policy changes (Table 1 ). Events that impacted movement or visitation and sample selection for sequencing were prioritized in defining study time-periods. Genomic subsampling Full global data, restricted to those samples with complete date information available, were downloaded from GISAID. Due to the challenges associated with the size of this dataset, we subsampled to include all sequences from Washington State, 3,000 random sequences from North America, and 3,000 random sequences from regions outside North America, allowing for both spatiotemporal diversity and contextualization of LTCF-associated samples in WA. Contextual data included in the phylogenetic analyses were selected from this down-sampled dataset according to genetic proximity to the focal samples (LTCF-associated samples). We specified contextual data sampling to include up to 1,500 genomes per time-period from WA, sampled from all counties and months, ten genomes per month from other US states, and ten genomes per month from each of the global regions. Known duplicate samples were excluded from the contextual sampling. 
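The tiered contextual sampling described above can be made concrete with a short script. The sketch below is illustrative only: the study configured its subsampling through the Nextstrain workflow rather than standalone code, and the column names used here (strain, region, country, division, month, period) are assumptions, not the study's actual schema.

```python
import pandas as pd

def sample_up_to(df: pd.DataFrame, n: int, by: list) -> pd.DataFrame:
    """Draw up to n sequences within each group defined by `by`."""
    return (df.groupby(by, group_keys=False)
              .apply(lambda g: g.sample(n=min(len(g), n), random_state=42)))

def contextual_subsample(meta: pd.DataFrame) -> pd.DataFrame:
    """Approximate the tiered scheme: up to 1,500 WA genomes per study period,
    10 per month from other US states, 10 per month per non-US global region."""
    wa = meta[meta["division"] == "Washington"]
    us_other = meta[(meta["country"] == "USA") & (meta["division"] != "Washington")]
    non_us = meta[meta["country"] != "USA"]
    return pd.concat([
        # The study additionally spread WA picks across all counties and
        # months; that stratification is simplified to one random draw here.
        sample_up_to(wa, 1500, by=["period"]),
        sample_up_to(us_other, 10, by=["month"]),
        sample_up_to(non_us, 10, by=["region", "month"]),
    ]).drop_duplicates(subset="strain")   # exclude known duplicate samples

# Usage (hypothetical): contextual = contextual_subsample(meta)
```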
Phylogenetic tree generation Phylogenetic trees corresponding to the six study periods were constructed using the Nextstrain SARS-CoV-2 workflow, which aligns sequences against the Wuhan Hu-1 reference using nextalign ( https://github.com/nextstrain/nextclade ), infers a maximum-likelihood phylogeny using IQ-TREE, and estimates molecular clock branch lengths using TreeTime. We specified the use of discrete trait analysis (DTA) within TreeTime [ 19 , 20 ]. Data from Yakima LTCFs were separated into two time periods: January-August 2020 and August 2021-December 2022; phylogenetic trees corresponding to each of these time periods were constructed in Nextstrain as described above. These trees were used to select three facilities for further analysis. Discrete trait analysis Migration history was inferred for each of the time-periods using an LTCF-associated binary variable. We defined a migration event into an LTCF as occurring if a parent node had > 50% probability of being assigned the “non-LTCF” discrete trait and the child node had > 50% probability of being assigned “LTCF.” The Python library Baltic was used for parsing phylogenetic trees and estimating post-introduction clade sizes (version downloaded from: https://github.com/alliblk/ncov-humboldt/blob/main/baltic.py) [ 21 ]. The introduction rate was calculated as the number of unique introduction events over time. Genomic epidemiologic analysis Agreement between clade designation and “outbreak-association” status in the metadata was analyzed for clade sizes > 1. Statewide data were not available for type of association (staff/resident/visitor); age group was evaluated as a proxy to understand possible staff versus resident introductions. Microreact was used to visualize multiple data elements overlaid on the state-wide phylogenetic trees [ 22 ]. Sub-trees for each of the Yakima-specific facilities selected for further analysis were imported into MicrobeTrace for visualization and network analysis [ 23 ]. Transmission tree inference Time trees from the January-August 2020 analysis for the three Yakima facilities were input into TransPhylo version 1.3.2 to infer transmission trees and describe the role of staff versus resident introduction and transmission events [ 24 , 25 ]. Previous analyses of SARS-CoV-2 genomic data using TransPhylo were used as reference [ 26 – 28 ]. For this analysis, the minimum branch distance was set to one day, and viral generation times of 1–14 days with a median of 5.5 days, a gamma distribution, and equal sampling times were assumed [ 26 ]. Markov chain Monte Carlo (MCMC) analysis was performed with 500,000 iterations. Convergence was visually inspected.
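To make the migration-event rule above concrete, here is a minimal, self-contained sketch of counting introductions on a trait-annotated tree. The `Node` class and its `p_ltcf` attribute are illustrative stand-ins: in the actual analysis, these probabilities come from TreeTime's DTA reconstruction, and trees were parsed with Baltic rather than a hand-rolled class.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    p_ltcf: float                       # P(trait == "LTCF") at this node
    children: list = field(default_factory=list)

def count_introductions(node: Node, parent_is_ltcf: bool = False) -> int:
    """Count migration events into LTCFs: a parent assigned non-LTCF
    (P(LTCF) <= 0.5) whose child is assigned LTCF (P(LTCF) > 0.5)."""
    is_ltcf = node.p_ltcf > 0.5
    event = int(is_ltcf and not parent_is_ltcf)
    return event + sum(count_introductions(c, is_ltcf) for c in node.children)

# Toy tree: two independent introductions into LTCFs.
tree = Node(0.1, [Node(0.9, [Node(0.95), Node(0.8)]), Node(0.2, [Node(0.7)])])
print(count_introductions(tree))        # -> 2
```

Dividing such a count by the number of days in a study period gives the introduction rate reported in the Results.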
Results Among 58,086 LTCF-associated COVID-19 cases, 4,550 (7.8%) had sequencing performed on at least one specimen, compared to an average of 9.6% of all reported WA cases with genomic data available. The proportion of cases with sequencing data available varied over time (Fig. 1 ), ranging from 5 to 30% across study periods. LTCF-associated cases were sequenced at higher frequencies than general-population cases prior to November 2021. During and after November 2021, LTCF-associated cases were sequenced at a similar or lower frequency than all cases, with a notable drop-off in sampling beginning in May 2022. A comparison of the difference between the percent of LTCF cases sequenced and the percent of total cases sequenced is shown in Supplemental Fig. 1 . Sequencing rates also varied at the facility and outbreak level. Table 2 compares LTCF-associated cases with sequences available to all LTCF-associated cases. Cases with sequences available were generally demographically representative of all cases by age group, sex, race/ethnicity, language, and facility type, but were more likely to have been fatal or hospitalized and more likely to have symptom information available. Figure 2 shows time-scaled (A) and divergence-scaled (B) phylogenetic trees of sequenced LTCF cases across all time periods outlined in Table 1 . LTCF-associated cases are dispersed and intermixed with other LTCF-associated and non-LTCF cases; across each time-period the dominant lineages match across these groups (Supplemental Fig. 2 ). Multiple epidemiological clusters within unique facilities are visualized, as well as linked cases from different facilities. Many visualized clusters reveal phylogenetic diversity with long branch lengths, indicating missing samples in the transmission chains, consistent with known sampling patterns. Age group was evaluated as a proxy for resident status using supplemental data from Yakima County. The oldest age groups, consisting of persons aged 65 and older, were > 90% residents. Persons in the 45–64 age group were 43.3% residents; 95.5% of persons aged 18–44 were staff. Across all time periods, sequences from different age groups are interspersed. Figure 3 shows the post-introduction clade sizes among LTCFs in each time-period. Most clusters represent single introductions across all time-periods, with large outbreaks (> 10 sequences) becoming increasingly rare. The average number of introductions per day varied from 0.7 during time-period 3 to 1.6 during time-period 4. Additional detail regarding post-introduction clade sizes, introductions per day, and sampling during each time-period is provided in Supplemental Table 1 . Among cases inferred to be associated with introduction clades sized > 1, varying proportions were labeled as outbreak-associated in the epidemiologic dataset over time, ranging from 49.2% to 97.4% (Table 3 ). Yakima county long-term care facility-associated transmission Yakima Health District reported supplemental data on 1,725 cases associated with ten facilities; 1,452 (84%) of these case records were linked to WDRS data by probabilistic matching. Genomic data were available for 667 cases. Sequenced cases from Yakima were highly representative based on age, sex, and race, although sequenced cases were more likely to be fatalities (11.1% of sequenced cases vs 8.1% of all facility cases). Phylogenetic visualization spanned two time periods, which together covered 98% of sequences: January-August 2020 and August 2021-December 2022. 
Several large facility-associated outbreaks were visualized; three facilities were selected for additional analyses (Supplemental Fig. 3 a-b). Facility A was selected due to the identification of one prolonged cluster spanning April-June 2020; a divergence tree of each selected outbreak is shown in Fig. 4 . Facility B was selected due to two large overlapping outbreaks early in the pandemic with multiple introductions later in the pandemic. Facility C was selected due to apparent multiple introduction events over the course of the pandemic, including early in the pandemic. Resident and staff infections were interspersed across the tree and network visualizations. Trace diagrams resulting from the TransPhylo analysis revealed uncertainty in the parameter values, likely due to the preponderance of identical consensus genomes, which impacted TransPhylo’s ability to resolve within- and between-case genetic diversity, as has been described previously for SARS-CoV-2 transmission reconstruction [ 27 ]. The Facility A transmission reconstruction inferred 12% of cases as unsampled sources (Supplemental Fig. 4 ) and inferred a resident as the source. During this period, 56% of known cases from Facility A were sequenced (Supplemental Table 2 ). An outbreak spanning March 18, 2020 to April 15, 2020 included 27 Facility B sequences; during this period, 58% of known Facility B cases were sequenced. Another 33 sequences from this facility were associated with a separate outbreak spanning April 19, 2020 to May 7, 2020. From April to August 2020, 69% of reported cases from Facility C were sequenced, and at least 18 separate introduction events were documented, only one of which apparently led to an outbreak of > 5 cases as visualized in the genomic data. This outbreak included 62 sequences and spanned April 15-May 14, 2020. The proportion of staff amongst all cases was consistent across these four outbreaks, ranging from 17 to 22%. The ratio of observed to expected inferred transmission events attributed to staff ranged from 0.66 to 1.17, providing evidence that both staff and residents drove transmission in these outbreaks (Supplemental Table 2 ).
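To illustrate how the observed-to-expected ratio above is read, consider the following toy calculation; the numbers are hypothetical and are not taken from the study's outbreaks.

```python
# If staff make up 20% of an outbreak's cases, proportionality predicts that
# 20% of inferred transmission events should have a staff source.
n_cases, n_staff = 50, 10               # hypothetical outbreak composition
n_transmissions = 40                    # inferred events with a sampled source
observed_staff_sources = 9              # events whose inferred source is staff

expected_staff_sources = n_transmissions * (n_staff / n_cases)   # 8.0
oe_ratio = observed_staff_sources / expected_staff_sources       # 1.125
print(round(oe_ratio, 2))   # a ratio near 1 means staff transmit in
                            # proportion to their share of cases
```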
Discussion Here, we analyzed epidemiologic and genomic data associated with LTCFs in WA to characterize transmission dynamics and inform ongoing data utilization. Transmission dynamics in LTCFs changed over the course of the COVID-19 pandemic, with variable introduction rates into LTCFs but decreasing amplification within LTCFs. Particularly during March-August 2020, a period marked by little population immunity and initiation of non-pharmaceutical interventions, COVID-19 spread in LTCFs via high introduction rates and intra-facility transmission. The number of introduction events and intra-facility clade sizes decreased during August 2020-March 2021; vaccination campaigns began in December 2020. Additionally, CMS released testing requirements for staff and residents in August 2020. Although the introduction rate more than doubled between this time-period and the subsequent two study periods, the percentage of introduction events leading to large clade sizes remained stable. This indicates that despite more frequent introductions during these time periods, post-introduction within-LTCF transmission was curbed, possibly due to vaccination and improved IPC. These study periods were marked by transmission of Delta and Omicron variants, with high levels of community transmission likely contributing to introduction rates. While case counts were high, the genomic data show that incidence was largely driven by repeated introduction events rather than intensive within-LTCF spread. Over the course of the pandemic, LTCF-associated cases are dispersed throughout the trees and intermixed with other LTCF-associated and non-LTCF cases, indicating that SARS-CoV-2 lineages circulating in LTCFs matched those circulating in surrounding communities. Dominant lineages in each time-period matched when comparing LTCF-associated cases to Washington cases included in the tree. This finding is consistent with a similar study performed in the UK [ 15 ]. Similarly, sequences from different age groups are interspersed, indicating likely bi-directional transmission between staff and residents. This observation was validated for a small number of outbreaks, demonstrating proportional inferred transmission from staff and residents. Interpretation of these findings is limited by variable sequencing over time. For much of the pandemic, testing and sequencing from LTCFs occurred at higher proportions than for the general population of COVID-19 cases. This over-sampling inflates the number of introductions and clade sizes when LTCF cases are contextualized among other WA sequences. Changes in the relative proportion of LTCF cases sequenced and in sampling intensity are expected to impact findings of the DTA analysis and comparison across timepoints. However, considering the direction of expected change, we anticipate that the results identified herein are generally conservative estimates; this conclusion was drawn by comparing the expected direction of change in sampling proportion and sampling intensity across time-periods to the number of large clades identified. Overall, sequenced LTCF cases were found to be representative of COVID-19 cases in LTCFs. The potential contribution of genomic data in defining outbreak-related cases was quantified. In the absence of genomic data, outbreak-association is determined using the current Council for State and Territorial Epidemiologists (CSTE) case definition. 
However, this definition cannot differentiate between concurrent but independent introduction events or outbreaks, and it relies on epidemiologic data capture. Analysis of the agreement between outbreak-tagged cases in the epidemiological data and cases identified in post-introduction clades sized > 1 revealed that epidemiologic data grew increasingly discordant with genomic data over time. Specifically, during periods 4–6, cases inferred within LTCF post-introduction clades were less likely to be recorded as outbreak-associated in the epidemiologic datasets compared to during study periods 1–3. This finding suggests that genomic data could greatly inform outbreak definitions, especially in settings of decreased epidemiologic data capture. In the absence of genomic data, outbreaks may also be over-estimated, as multiple introduction events are not considered. Although we attempted transmission reconstruction of four outbreaks in Yakima County, uncertainty in the parameter values limits interpretation of the results. Indeed, based on known sequencing rates, TransPhylo estimated fewer missing links than expected, and epidemiological data, including onset dates, provided conflicting results. Methods that utilize additional epidemiological data in reconstruction, such as extensions of the outbreaker2 model, may be more useful in this setting [ 29 , 30 ]. Visualization of this large genomic dataset over time provides insight into useful bioinformatic tools and methods for application in public health practice. Early in the pandemic, many clusters of cases with long persistence were observed. Genomic epidemiology tools often rely on distance thresholds for defining clusters. These tools are difficult to apply in settings of prolonged transmission, as evolution over time is expected. Application of tools requiring thresholds may result in inference of independent clusters in situations of prolonged transmission. This was observed when attempting to use one such tool, MicrobeTrace, in the analysis of outbreaks in Yakima County. In this study, the utilization of DTA with paired epidemiologic data allowed observation of prolonged outbreaks without the need for thresholds. This study faced several important limitations. First, genomic data captured for LTCF-associated cases skewed toward more severe cases. The majority of LTCF-associated outbreaks had no sequences available; this requires an assumption that the sampled LTCFs are representative of the unsampled facilities. Based on our case-level representativeness assessment, including proportional sampling by facility type, we believe this assumption is reasonable. The DTA analysis was performed using a binary variable for LTCF-association; analysis at the facility level may reveal additional introduction events and patterns of inter-facility spread. Demonstrating the relative rarity of large outbreaks caused by a single introduction late in the pandemic is an important finding; however, many changes in guidance, policy, regulation, practice, immunity, and prevention methods (including the new availability of vaccines) occurred over the study period, prohibiting a causal analysis of which component changes led to this impact and limiting our study to observational findings. This study also had several notable strengths. First, we assessed genomic sampling representativeness at the case level, enabling DTA analysis and interpretation. 
Second, paired epidemiologic and pathogen genomic data were available, with additional detail for Yakima County cases, facilitating in-depth analysis of transmission. In particular, the ability to de-duplicate sequences early in the pandemic impacted study findings; during the first time-period there was an average of three genomes available per sequenced case. Analysis in the absence of epidemiologic data would over-represent these cases, inflating genomically-defined clusters. Finally, genomic studies to understand a single or a few outbreaks are commonly performed and reported in the literature. By looking at data over time, we add important context regarding the changing transmission dynamics associated with LTCFs. Paired genomic and epidemiologic data enable phylogenetic analysis to understand transmission patterns, identify apparent clusters, and form hypotheses regarding transmission networks. However, metadata are not consistently available for some key variables, including type of LTCF association (staff/resident/visitor), dates of association, and travel history. Given currently available data, routine tree building for hypothesis generation is recommended. Cluster detection tools for outbreak identification are likely of limited use, as most facilities do not have sequencing performed and data are not timely. However, cluster detection on available genomic data may help to identify temporal patterns of intra-facility spread versus repeated introduction. The data types and quality currently captured by routine surveillance are inadequate for applying methods to infer transmission or identify introduction sources with certainty. Although these data may be available through enhanced investigations in some counties, as in Yakima County, their general absence limits broader analysis. Importantly, we noted a decrease in data capture from LTCFs over time. Depending on goals for use of genomic data, sentinel surveillance should be increased or targeted surveillance implemented to ensure available data for analysis; likewise, if cluster detection is a desired outcome, data timeliness should be improved. These findings reflect challenges currently facing many SARS-CoV-2 genomic data capture systems. Antigen-based testing is common but is not compatible with available specimen retrieval practices and sequencing capacity; advances compatible with ongoing genomic data capture are needed. With present patterns of sequencing, LTCFs are underrepresented; expansion of sequencing to sentinel facilities or during outbreak investigations is recommended. Additionally, genomic epidemiologic workforce capacity embedded within the teams that surveil for outbreaks in healthcare settings is required.
Conclusions In conclusion, this analysis identified changing transmission dynamics in LTCFs over the course of the COVID-19 pandemic, with smaller post-introduction clades noted later in the study period despite periods of high introduction rates. This finding is encouraging for the many control efforts that have been put in place in these facilities over time, including vaccination, infection prevention, and testing and reporting to public health jurisdictions, although causal theories could not be tested and natural immunity was also accumulating during this time. LTCFs are likely to remain vulnerable institutions in which ongoing respiratory pathogen monitoring and outbreak control are warranted. Genomic data have the potential to increase the specificity of outbreak detection and resulting public health actions. Ongoing genomic epidemiologic analysis of LTCF-associated data is encouraged to facilitate situational awareness, potential cluster detection, and hypothesis generation for further targeted analysis.
Background Long-term care facilities (LTCFs) are vulnerable to disease outbreaks. Here, we jointly analyze SARS-CoV-2 genomic and paired epidemiologic data from LTCFs and surrounding communities in Washington state (WA) to assess transmission patterns during 2020–2022, in a setting of changing policy. We describe sequencing efforts and genomic epidemiologic findings across LTCFs and perform in-depth analysis in a single county. Methods We assessed genomic data representativeness, built phylogenetic trees, conducted discrete trait analysis to estimate introduction sizes over time, and explored selected outbreaks to further characterize transmission events. Results We found that transmission dynamics among cases associated with LTCFs in WA changed over the course of the COVID-19 pandemic, with variable introduction rates into LTCFs, but decreasing amplification within LTCFs. SARS-CoV-2 lineages circulating in LTCFs were similar to those circulating in communities at the same time. Transmission between staff and residents was bi-directional. Conclusions Understanding transmission dynamics within and between LTCFs using genomic epidemiology on a broad scale can assist in targeting policies and prevention efforts. Tracking facility-level outbreaks can help differentiate intra-facility outbreaks from high community transmission with repeated introduction events. Based on our study findings, methods for routine tree building and overlay of epidemiologic data for hypothesis generation by public health practitioners are recommended. Discrete trait analysis added valuable insight and can be considered when representative sequencing is performed. Cluster detection tools, especially those that rely on distance thresholds, may be of more limited use given current data capture and timeliness. Importantly, we noted a decrease in data capture from LTCFs over time. Depending on goals for use of genomic data, sentinel surveillance should be increased or targeted surveillance implemented to ensure available data for analysis. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-023-17461-2.
Supplementary Information
Abbreviations LTCF: Long-term care facility; WA: Washington state; CDC: Centers for Disease Control and Prevention; IPC: Infection prevention and control; PPE: Personal protective equipment; DOH: Washington State Department of Health; CMS: Centers for Medicare and Medicaid Services; WDRS: Washington Disease Reporting System; DTA: Discrete trait analysis; MCMC: Markov chain Monte Carlo; CSTE: Council for State and Territorial Epidemiologists Acknowledgements We acknowledge the following individuals for their role in data linkage and maintenance: Peter Gibson, Cory Yun, Emily Nebergall, Allison Thibodeau, Frank Aragona, Topias Lemetyinen, Allison Warren, Cameron Ashton, Sarah Jinsiwale, and Laura Marcela Torres. Additionally, we acknowledge the following originating laboratories for providing specimens for whole genome sequencing: Aegis Sciences Corporation, Altius Institute for Biomedical Sciences, Atlas Genomics, Avero Diagnostics, Central Washington Hospital, Curative Labs, FidaLab, Fulgent Genetics, Helix/Illumina, Incyte Diagnostics, Interpath Laboratory, Kaiser Permanente Washington Health Research Institute, Laboratory Corporation of America, Mid Valley Hospital Laboratory, Northwest Laboratory, Overlake Hospital, Providence Regional Medical Center Everett, Providence Sacred Heart Medical Center, Quest Diagnostics Incorporated, Seattle Flu Study, St. Michael Medical Center, University of Washington Virology, US Airforce School of Aerospace Medicine, Washington State Department of Health Public Health Laboratories. We acknowledge the following submitting laboratories for generating the genetic sequence data and sharing via GISAID: Altius Institute for Biomedical Research, Atlas Genomics, Centers for Disease Control and Prevention, Curative Labs, Providence St. Joseph Health Molecular Genomics Laboratory, Seattle Flu Study, University of Washington Virology, US Airforce School of Aerospace Medicine, Washington State Department of Health Public Health Laboratories. Funding for data collection was provided by the Centers for Disease Control and Prevention (CDC) ELC EDE. Authors’ contributions HNO: conceptualization, data curation, formal analysis, software, methodology, visualization, writing. AB: methodology, software, writing—review & editing. SML: methodology, software, writing—review & editing. NS: data curation, investigation, writing—review & editing. AT: data curation, investigation, writing—review & editing. EB: data curation, investigation, writing—review & editing. LK: data curation, investigation, writing—review & editing. MS: data curation, investigation, writing—review & editing. JB: data curation, investigation, writing—review & editing. JPH: conceptualization, supervision, writing—review & editing. SL: conceptualization, supervision, writing—review & editing. JGB: conceptualization, supervision, writing—review & editing. TB: conceptualization, supervision, software, writing—review & editing. Funding Funding provided by the Centers for Disease Control and Prevention. TB is a Howard Hughes Medical Institute Investigator. Availability of data and materials The data that support the findings of this study are available from Washington State Department of Health (doh.wa.gov), but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are, however, available from the authors (Hanna Oltean) upon reasonable request and with permission of Washington State Department of Health. 
Declarations Ethics approval and consent to participate The Washington State and University of Washington Institutional Review Boards determined this project to be surveillance activity and exempt from review; the need for informed consent was waived through this determination. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Public Health. 2024 Jan 15; 24:182
oa_package/1d/b6/PMC10789038.tar.gz
PMC10789039
0
Background Multiple sclerosis (MS) is a chronic, neurodegenerative, and demyelinating disease of the central nervous system [ 1 ]. Globally, around 2.5 million people are living with the disease. MS is usually diagnosed when people are between 20 and 40 years old, making it the most prevalent cause of disability in young adults of working age [ 2 , 3 ]. MS causes a wide variety of symptoms, with fatigue, cognitive problems, and motor problems commonly reported [ 4 ]. Up to 65% of people with MS (PwMS) develop cognitive deficits [ 5 , 6 ], which severely affect daily life functioning and ultimately health-related quality of life (HRQoL) [ 7 ]. About 65% of all PwMS become unemployed within 5 years after diagnosis [ 8 – 10 ], with cognitive impairment being one of the main reasons for unemployment and work-related problems [ 11 , 12 ]. MS and cognitive rehabilitation Current rehabilitation for cognitive impairment in PwMS is limited and focuses mostly on restorative and compensatory strategies [ 13 ]. Previous studies consistently demonstrate mild-to-moderate effects of cognitive training on cognitive performance in PwMS (i.e., functional training [ 14 ]). A recent study suggests that people with relapsing–remitting MS (RRMS) and larger grey matter volume were more likely to improve on information processing speed after cognitive training compared to people with progressive MS (PMS) and grey matter atrophy [ 15 ]. Another study provided similar results, suggesting an early window of opportunity for cognitive training: PwMS with an intact brain network (compared to healthy controls) benefited from a cognitive rehabilitation programme, while PwMS with brain network deficits did not show beneficial effects from the intervention [ 16 ]. Next to cognitive training, earlier work has also shown that physical exercise appears to improve cognitive functioning in PwMS [ 17 , 18 ]. Studies have shown that both cognitive training and exercise positively influence brain functioning [ 19 – 22 ]. Enhanced effects can be expected from the combination of cognitive training and exercise, as was illustrated in patients with mild cognitive impairment and Alzheimer’s disease [ 23 ]. However, the actual effects of such a combination on cognition still need to be established in PwMS. MS and work With respect to work-related problems, interventions are typically provided only when PwMS are already on sick leave or have lost their job [ 24 ]. These interventions might therefore come too late, as having and keeping a job is important for people’s social contacts, self-respect, and sense of being valued [ 25 ]. In addition, job and/or productivity loss has economic consequences for both PwMS and society at large [ 26 ]. For instance, even for the mildly affected MS group (Expanded Disability Status Scale (EDSS) score 0–3), the mean utility (i.e., the value assigned to a given state of health, between 1 (full health) and 0 (death)) and annual MS-related healthcare costs in the Netherlands were estimated at 0.744 and €23,100, respectively, with the costs primarily resulting from productivity losses [ 27 ]. Research on work-related interventions for PwMS is urgently needed but remains scarce. Indeed, in recent years the ability to work and being employed have received increasing attention within healthcare research. This is not surprising, as work participation is a significant determinant of HRQoL in PwMS, independent of their experienced health [ 28 ]. 
The Dutch government encourages individuals with a chronic disease such as MS to self-manage and take control of their lives, including their work [ 29 ]. However, self-managing daily demands in a dynamic work context, where activities demand a high level of cognitive and psychological skill, is challenging. For PwMS, being able to work is therefore not only a matter of self-management of work challenges but is also highly dependent on the work context. A supportive work environment that is willing to adjust and fine-tune the work to the cognitive abilities of the employee and to provide emotional support is imperative for workers with chronic diseases such as MS to remain employed [ 30 ]. As such, a proactive and timely work-related intervention that includes active involvement of the workplace may enable patients to effectively deal with work challenges and prevent sick leave and job loss. Don’t be late! While the physical limitations of MS can be (partly) compensated with mobility aids (e.g., wheelchair, orthoses) and workplace adjustments, such solutions are scarce for cognitive deficits. Currently, interventions for cognitive impairments and work-related problems often start when these problems are already too advanced and difficult to overcome [ 25 ]. This triggers a negative cascade of events that inevitably leads to further cognitive deterioration, unemployment, and decreased HRQoL. Therefore, it is of great importance to intervene in the early stage of cognitive impairment. The Don’t be Late! project aims to provide timely intervention in PwMS with mild cognitive impairment who are still employed. The primary aim is to investigate the effectiveness of two innovative interventions, as compared to enhanced usual care, in improving HRQoL. These interventions are aimed at preventing and/or postponing cognitive decline and work-related problems. Secondary aims are: 1) to assess the effectiveness of the investigated interventions in improving cognitive, psychological, and work functioning, and in enhancing the brain’s functional network; 2) to examine which factors (i.e., baseline cognitive, psychological, work, and brain MRI parameters) are predictive of the response to the investigated interventions; 3) to assess which mechanisms mediate the effect of the investigated interventions on HRQoL; and 4) to assess the cost-effectiveness of the investigated interventions. For the qualitative study, the primary aim is to reflect on the process and outcomes of the investigated interventions from the perspectives of relevant stakeholders and to investigate how to foster smooth and successful implementation in clinical practice.
Methods/design Study design and setting The Don’t be late! research project consists of three work packages, of which this protocol describes the second and third. Work package 1—‘Timely identification of cognitive decline in Multiple Sclerosis’—aims 1) to identify early cognitive decline in PwMS, and 2) to validate the Multiple Screener, a digital tool for administering neuropsychological tests in PwMS, in 750 PwMS [ 31 ]. Work package 2 concerns a randomised controlled trial containing two intervention arms (‘strengthening the brain’ and ‘strengthening the mind’) and a control condition (‘enhanced usual care’). This study follows a repeated-measures design and is performed at Amsterdam UMC and Leiden University, the Netherlands. Participants will be selected from a pool of eligible participants from work package 1 and will also be recruited through other studies in which they indicated that they could be approached for further research participation, as well as through social media channels. Eligible individuals will be randomised over the three arms. For participants included through work package 1, results of measurements that overlap between the two work packages will be carried over from work package 1. Work package 3 is a qualitative study using semi-structured interviews with representatives from all stakeholder groups to investigate the process of the interventions. Additionally, focus groups are used to provide a deeper understanding of the results of the interventions and to investigate how to successfully implement the interventions into clinical practice. Study population Work package 2: Randomised controlled trial We aim to include 270 participants in the randomised controlled trial. In order to be eligible to participate, PwMS must meet the following criteria: (1) confirmed MS diagnosis according to the McDonald 2017 criteria [ 32 ], (2) age between 18 and 67, (3) no changes in disease-modifying therapy in the three months prior to inclusion (this criterion applies only at inclusion, to ensure participants are in a stable situation at the start of the study; during follow-up, changes in treatment will be registered but will not result in exclusion from the study), (4) no current relapse or steroid treatment in the six weeks prior to study visits, (5) presence of mild cognitive deficits (at least one test with a Z-score of -1.0 to -1.99 relative to norm scores of healthy controls on the Minimal Assessment of Cognitive Function in Multiple Sclerosis (MACFIMS) battery [ 33 ], while not fulfilling the criteria for severe cognitive impairment (a Z-score of -2.0 or lower on ≥ 2 tests)), (6) performing paid work for at least 12 h per week, (7) being able to participate in an exercise intervention (i.e., EDSS < 6.0), and (8) fulfilling safety criteria for MRI (no metal inside the body, not pregnant, no claustrophobia). Exclusion criteria are: (1) presence of neurological (other than MS) or psychiatric disorders, (2) current or past drug or alcohol abuse, (3) being unable to speak or read Dutch, (4) currently being on sick leave for a period of 6 weeks or longer, and (5) current pregnancy. Interventions Participants will be randomly allocated to one of three groups: ‘ strengthening the brain’ , ‘ strengthening the mind’ , or ‘ enhanced usual care’ . The interventions ‘strengthening the brain’ and ‘strengthening the mind’ have a duration of four months and will be tailored to a participant’s personal needs. 
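As an aside on inclusion criterion (5) above, the classification rule can be sketched in a few lines. This is an illustrative reading of the stated thresholds, not code from the study protocol; edge cases (e.g., a single test below -2.0) follow the letter of the criterion as written.

```python
def cognitive_status(z_scores):
    """Classify MACFIMS Z-scores per the stated rule: 'mild' needs at least
    one test between -1.99 and -1.0, without >= 2 tests at -2.0 or lower."""
    n_mild = sum(-1.99 <= z <= -1.0 for z in z_scores)
    n_severe = sum(z <= -2.0 for z in z_scores)
    if n_severe >= 2:
        return "severe"                 # excluded: severe impairment
    if n_mild >= 1:
        return "mild"                   # eligible for the trial
    return "intact"                     # excluded: no mild deficit detected

print(cognitive_status([-1.2, 0.3, -0.5]))     # -> mild (eligible)
print(cognitive_status([-2.3, -2.1, -1.5]))    # -> severe (excluded)
```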
The enhanced usual care group acts as the control group; participants in this group are asked to continue life ‘as is’ for the duration of the intervention (4 months). Strengthening the brain ‘Strengthening the brain’ is a lifestyle intervention that combines physical exercise with cognitive training. For the exercise component, participants receive exercise and lifestyle coaching (partly an in-kind contribution of Personal Fitness Nederland, Fit for Life programme, www.personalfitnessnederland.nl ), with one-on-one training provided at one of the 94 studios of Personal Fitness Nederland (PFN). Every session will consist of a combination of cardio (aerobic) and strength (anaerobic) training, depending on the goals of the participant. At the start of the programme an intake will take place, in which the physical fitness of the participant will be determined and attention will be paid to specific health issues and goals of the participant. Sessions will be made increasingly challenging. Each session will start with weighing the participant, followed by 30 min of exercise in a studio where no other people are present, guaranteeing full attention to the participant. Next to the weekly face-to-face sessions, participants are asked to perform pre-set exercises at home twice a week (20 min each), guided by instruction videos on an online platform. A quantitative assessment of adherence (how many sessions were attended and progress between training sessions) will be made by the lifestyle coach, and participants will write down their goals concerning exercise and diet for the upcoming week in a personal log. Lifestyle coaches will also provide participants with diet schemes and mental coaching; this will likewise be recorded in the personal log. For the cognitive component, participants will follow a Dutch home-based computerised cognitive training programme, BrainGymmer © ( https://www.Braingymmer.com ), which has been used in multiple studies [ 34 – 37 ]. A variety of cognitive functions is trained (information processing speed, spatial memory, working memory, executive functioning) rather than one cognitive function in particular. The cognitive training will focus on the cognitive function that was most impaired on the neuropsychological assessment at baseline. The training has an adaptive mechanism, which adjusts to the participant’s performance level to keep it challenging for all participants. The programme will log training time and the percentage of correctly performed games. Participants are instructed to train for 60 min per week. Strengthening the mind The ‘strengthening the mind’ intervention consists of biweekly contact with trained work coaches who have all been diagnosed with MS themselves (an in-kind contribution of the Dutch MS Society, MSVN). The intervention focuses on the (re)discovery of a sustainable and healthy balance between relevant work values and the challenges workers with MS are facing, while at the same time meeting the work demands of a dynamic work context with ongoing technological developments. This will be achieved using a combination of the capability approach and the participatory approach: “Working Positively”. 
The starting point of the capability approach related to work participation is to explore what people find important and valuable in work – what they would like to achieve in a given (work) context – and moreover, in the case of individuals confronted with a chronic and progressive disease such as MS, to ascertain whether people are enabled and able to do so. The participatory approach uses a practical, stepwise manner to detect challenges at the workplace and implement solutions by actively involving both the worker and the workplace (e.g., the supervisor). Every participant will be matched with a trained work coach who also has an MS diagnosis. The starting point (step 1) of the coaching will be an assessment of individual work values using the Capability Set for Work Questionnaire [ 38 ] and becoming acquainted with the worker and their working context. This starts by identifying which work values are important to the worker with MS. Secondly, it will be assessed to what extent workers are enabled to achieve these values at work, and thirdly, to what extent they are able to achieve these values at work. Any discrepancies patients experience in being enabled and able to achieve important work values will be flagged, as these indicate barriers to optimal and satisfactory work participation [ 39 ]. Additionally, the employer or another representative of the workplace will be requested to join at least one coaching session to provide information related to work demands and context from their perspective, which is described as an essential element of the participatory approach. If the participant prefers not to disclose their MS diagnosis to the workplace, an independent occupational health professional may act as a representative of the workplace. In the second step of this intervention, both the worker and the representative of the workplace prioritise the important values and the challenges in meeting work demands (e.g., work tasks, time pressure, working hours, peak loads, or other challenges within the working context). The worker and the representative of the workplace will select three work challenges to be addressed during the coaching period. Third, for each work challenge, both the worker and the representative of the workplace will think of possible and practical solutions under the guidance of the coach. Fourth, a plan of action is developed in which consensus on the proposed solutions is reached and a plan for their execution is agreed upon. The fifth step describes and ensures the implementation of solutions, and in the final step, the degree and the effects of the implemented solutions will be evaluated with all stakeholders involved. If necessary, the plan of action is adjusted [ 40 ]. The intervention is completed either when satisfactory solutions have been implemented for all identified work challenges, or after biweekly coaching has taken place for a period of 4 months. Enhanced usual care Participants in the enhanced usual care condition will watch a pre-recorded video together with a researcher, with the opportunity to ask questions afterwards. The video provides a standardised explanation of cognitive decline in PwMS based on the Dutch book “MS and Cognition, by scientists for people with MS and their surroundings” (editors: Hanneke Hulst & Jeroen Geurts). The video includes information about the frequently affected cognitive domains in MS and their relation to brain pathology. 
Participants will be asked to continue their life ‘as is’ during the time of the intervention (4 months). The main reason for incorporating enhanced usual care in the protocol is to avoid resentful demoralisation of participants assigned to this group. Outcome measures Work package 2: Randomised controlled trial As illustrated in Figs. 1 and 2 , measurements in the randomised controlled trial will take place right before the intervention (baseline; T0), directly after the intervention (four months after baseline; T1), and at short- and long-term follow-up (10 months (T2) and 16 months (T3) after baseline, respectively). Demographic and disease-related measures During the baseline assessment, demographic and clinical characteristics will be gathered from the participants. The following characteristics will be determined: age, sex, height, weight, highest level of education attained, job type, working hours, disability pension, current and past exercise activity, year of diagnosis, MS subtype, disease duration, disease severity using the EDSS, and medication history. Primary outcome measure The primary outcome measure of this study is HRQoL, which will be assessed using a composite score of the 36-item Short Form (SF-36) [ 41 ]. HRQoL will be determined at all four measurement moments during the study, which will allow assessment of short-term and long-term changes in HRQoL. The effects of the interventions on the SF-36, as an overall value and per subcategory, will be analysed for timepoints T1, T2, and T3. Subcategories of the SF-36 comprise physical functioning, role limitations because of physical health problems, bodily pain, social functioning, general mental health, role limitations because of emotional problems, vitality, and general health perceptions. The validity and reliability of the SF-36 are well established in healthy controls and PwMS [ 41 , 42 ]. Secondary outcome measures Cognitive measures Cognitive functioning will be assessed using the MACFIMS battery [ 33 ] and the Multiple Sclerosis Neuropsychological Screening Questionnaire (MSNQ) [ 43 ]. The MACFIMS battery consists of the following tests: the Dutch adaptation of the Controlled Oral Word Association Test (COWAT); the Dutch Letter Fluency Test [ 44 , 45 ], Judgement of Line Orientation (JLO) [ 45 ], the Dutch version of the California Verbal Learning Test, second edition (CVLT-II) [ 46 – 48 ], the Brief Visuospatial Memory Test-Revised (BVMT-R) [ 49 ], the Paced Auditory Serial Addition Test (PASAT) [ 50 ], the Symbol Digit Modalities Test (SDMT) [ 51 ], and the Sorting Test from the Delis-Kaplan Executive Function System (DKEFS) [ 52 ]. The SDMT and PASAT include the adaptations from Rao [ 53 ]. To test for performance validity, the Amsterdam Short Term Memory Test (ASTM) [ 54 ] and the Rey 15-Item Test [ 55 ] will be administered. The Rey 15-Item Test will only be administered if the total score on the ASTM indicates underperformance (a cut-off of ≤ 84 will be applied). 
Work measures Measures reflecting on work include: work participation and productivity, assessed using the Work Productivity and Activity Impairment Questionnaire: General Health [ 56 ]; work difficulties, assessed using the Multiple Sclerosis Work Difficulties Questionnaire [ 57 , 58 ]; the capability to carry out work activities, assessed with the Capability Set for Work Questionnaire [ 39 ], which has been used in previous studies with workers with MS [ 38 ]; and quality of working life, assessed with the valid and reliable Quality of Working Life Questionnaire for Cancer Survivors [ 59 ], which has been used in multiple patient populations [ 60 – 62 ]. Structural and functional brain measures The MRI scan features an expanded clinical protocol, focused on brain and lesion volumes and structural and functional connectivity. Lesion masks and volumes will be detected automatically on 3D-FLAIR [ 63 ]. Grey matter volume, white matter volume, and total brain volume will be determined on the 3DT1 using FSL-SIENAX, after lesion filling [ 64 ]. Volumes of the deep grey matter structures will be determined using FIRST [ 65 ]; these will also be subtracted from SIENAX-derived segmentations to derive total cortical volume. Cortical thickness will be determined using Freesurfer (Charlestown, Massachusetts). Diffusion Tensor Imaging (DTI) will be performed to investigate the microstructural integrity of the white matter [ 66 ]. Tract-Based Spatial Statistics (TBSS, FSL) will be used to investigate structural integrity across the main white matter tracts in the brain [ 67 ]. Furthermore, probabilistic tractography using MRtrix will be used to visualise specific tracts in the white matter to determine structural connectivity and the volume of the specific white matter tracts of interest [ 68 – 70 ]. The amplitude of regional functional activation will be determined using the blood-oxygen level dependent (BOLD) response during an episodic memory encoding task. The task has been specifically developed to assess memory function, robustly evokes brain activation in the hippocampus, and has revealed hippocampal changes in MS [ 71 , 72 ]. It makes use of an event-related design in which only the correctly remembered items will be modelled (to ensure proper attention). The task contains different landscapes, which have to be judged as tropical or non-tropical. A retrieval task will be administered after the MRI scanning has finished. FSL-FEAT will be used to analyse the BOLD responses for the correctly remembered items. Resting-state fMRI will be used to assess functional connectivity (FC). Images will be pre-processed using FSL and corrected for motion using ICA-AROMA. FC will be calculated by correlating the averaged time series of brain regions. These regions are defined using the cortical Brainnetome atlas and the deep grey matter atlas that is part of FIRST. Subsequently, all pair-wise connectivity scores will be corrected for the whole-brain mean to deal with individual fingerprint effects [ 73 ]. Dynamic FC will be quantified by separating the time series into sliding windows and calculating the variability of functional connectivity strength over time [ 74 ]. Static and dynamic FC patterns will be summarised across regions forming separate resting-state networks, such as the default-mode and fronto-parietal networks [ 75 ]. 
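A minimal numpy sketch of the sliding-window dynamic FC measure described above follows: correlate region time series within each window, then take the variability (here, the standard deviation) of connectivity strength across windows. The window length and step are illustrative choices, not the study's settings.

```python
import numpy as np

def dynamic_fc_variability(ts: np.ndarray, win: int = 60, step: int = 10) -> np.ndarray:
    """ts: (timepoints, regions) array. Returns a (regions, regions) matrix of
    the across-window standard deviation of pairwise correlations."""
    n_t, _ = ts.shape
    fcs = [np.corrcoef(ts[s:s + win], rowvar=False)     # static FC per window
           for s in range(0, n_t - win + 1, step)]
    return np.std(np.stack(fcs), axis=0)

rng = np.random.default_rng(0)
toy_series = rng.standard_normal((300, 20))             # toy resting-state data
print(dynamic_fc_variability(toy_series).shape)         # -> (20, 20)
```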
In addition, we will use connectivity patterns to calculate measures of static and dynamic network topology, such as global and local efficiency, using the Brain Connectivity Toolbox (BCT) in Matlab (Natick, Massachusetts: The MathWorks Inc.) [ 76 ]. Network topology will also be assessed using eigenvector centrality mapping, which determines the network importance of individual regions and was previously validated for MS [ 77 ]. Psychological measures Fatigue will be assessed using the Checklist Individual Strength (CIS) [ 78 ], validated in Dutch PwMS [ 79 ]. Mood and anxiety will be tested using the Hospital Anxiety and Depression Scale (HADS) [ 80 ], which has been validated in PwMS [ 81 ]. Resilience will be measured using the valid and reliable Connor-Davidson Resilience Scale [ 82 ]. Perceived level of stress will be assessed using the Perceived Stress Scale (PSS) [ 83 ], validated in PwMS [ 84 ]. Social mindfulness will be assessed using the paradigm by van Doesum et al. (2013) [ 85 ], the standard assessment of social mindfulness. Social participation will be measured using the valid and reliable PROMIS ‘Ability to Participate in Social Roles and Activities’ item bank [ 86 ]. Societal costs and general quality of life Societal costs include healthcare, patient and family, and lost productivity costs, and will be assessed using the iMTA Productivity Cost Questionnaire (iPCQ) [ 87 ] and the iMTA Medical Cost Questionnaire (iMCQ) [ 88 ]. Additionally, the EuroQol five-level questionnaire (EQ-5D-5L) will be used to measure general quality of life. The Dutch EQ-5D-5L tariff will be used to convert EQ-5D-5L health states to utility scores to enable the calculation of quality-adjusted life-years (QALYs) [ 89 , 90 ]. Physical functioning In order to examine physical functioning, balance, walking speed, endurance, grip strength, and dexterity will be assessed. Balance will be assessed using the Mini-BESTest, which has been shown to be a reliable and valid balance assessment in PwMS [ 91 , 92 ]. The test contains 14 items, which can be categorised into anticipatory postural adjustments, reactive postural control, sensory orientation, and dynamic gait. Each item is scored on a 3-point scale, for a maximum total score of 28 points; a score < 19 indicates an increased risk of falling [ 91 ]. Walking speed will be measured using the Timed 25-Foot Walk (T25FW), which is a valid and reliable measure of ambulatory performance [ 93 ]. Participants will walk between two cones, 7.62 m apart. Participants will perform the T25FW four times: twice as fast as possible and twice at their comfortable walking speed [ 94 ]. Endurance will be measured using the Shuttle Walk Test (SWT), a basic test that can be conducted with few materials. The SWT has recently been validated and proven to be a reliable outcome measure in ambulatory PwMS [ 95 ]. For this study, the protocol of Singh et al. (1992) will be used [ 96 ]. Grip strength will be assessed using a JAMAR hand-held dynamometer and will be expressed in kilograms. Participants are asked to squeeze the dynamometer twice with each hand, in a seated position with the arm held at a 90-degree angle [ 97 ]. Upper limb dexterity will be measured with the combination of the 9-Hole Peg Test (9HPT) [ 98 ] and the Purdue Pegboard Test (PPT) [ 99 ]. The 9HPT is an often-used measure of dexterity in PwMS [ 98 ]. To additionally assess bimanual motor function, the PPT is included. 
Grip strength, the 9HPT, and the PPT are valid measures for upper limb assessment in PwMS [ 100 ]. Blood sampling Blood will be drawn at three timepoints and will be stored in a biobank created for this study, such that markers of interest can be studied retrospectively. Effectiveness and adherence of treatment protocol A quantitative assessment of the ‘strengthening the brain’ and ‘strengthening the mind’ programmes will be conducted using Goal Attainment Scaling (GAS) [ 101 ]. Participants will formulate, together with a researcher or work coach, three to four goals following the SMART (Specific, Measurable, Attainable, Realistic, Timely) principle. For each goal, the expected or ‘level 0’ outcome will be carefully defined at baseline. Goals will be weighted for importance and difficulty. At the end of the intervention, the participant and the researcher or coach will agree on whether the goal was achieved (0), slightly exceeded (+ 1), greatly exceeded (+ 2), not quite achieved (-1), or nowhere near achieved (-2). The lifestyle coach in the ‘strengthening the brain’ intervention will assess how many sessions were attended as well as the progress between training sessions. Similarly, adherence to the cognitive training within ‘strengthening the brain’ will be logged (e.g., training time and percentage of correctly performed games). In the ‘strengthening the mind’ intervention, the work coach will note the number, duration, and form of the consults (face-to-face and/or online), the number of consults the workplace representative was involved in, the role of the workplace representative, the three identified work challenges, and the proposed solutions to these work challenges. For each challenge, the work coach will record the extent to which the worker indicated it was successfully addressed at the end of the coaching period, using a Visual Analogue Scale (VAS) ranging from 0 to 100%. Work Package 3: Qualitative study A selection of stakeholders involved in the project (PwMS, lifestyle and work coaches, neurologists, neuropsychologists, occupational physicians, and occupational therapists) will be invited for the semi-structured interviews and focus groups. The interviews aim to provide insight into the experiences of stakeholders regarding the interventions and will be planned in batches to ensure an equal number of participating patients and coaches during the whole study period, preferably within a timeframe of 3 months from the last training session to avoid recall bias. Interviews will be based on a topic list to ensure that all relevant questions are addressed, and will continue until saturation is reached, meaning that no new themes emerge from the analysis. If logistical improvements are brought up during the early interviews, adjustments will be made for future participants (e.g., if participants prefer to exercise in the morning rather than in the evening). No changes to the content of the interventions will be allowed. It is expected that 12–15 interviews with participants and 10–12 interviews with professionals from each group will be sufficient to reach saturation. Focus groups will be organised to further understand the effects of the ‘strengthening the brain’ and ‘strengthening the mind’ interventions, and to explore factors that promote or hinder the implementation of the interventions. The focus groups will consist of 8–10 participants to ensure optimal exchange of perspectives and dialogue, and a script will be prepared. 
First, we will organise homogeneous focus groups (with participants and professionals separately); next, a heterogeneous dialogue group will be held in which participants and professionals reflect on the experiences and outcomes together. In forming focus groups, we will include diverse participants (considering sex, age, and educational level). Focus groups will be organised after the individual interviews have been conducted and analysed, and after the short-term quantitative measures have been assessed. As summarised in Table 1 , a total of nine focus groups will be organised. For each of the two interventions, three focus groups will be organised: one with PwMS (groups 1 and 2), one with coaches (groups 3 and 4), and one heterogeneous group combining PwMS and coaches/supervisors (groups 5 and 6). One focus group will be organised for the work supervisors in the ‘strengthening the mind’ intervention (group 7). To enable implementation of the interventions in the near future, we will also organise a focus group with (referring) healthcare professionals (neurologists, neuropsychologists, occupational physicians/therapists; group 8), and a mixed group of PwMS and work supervisors who participated in the interventions, coaches of both interventions, and healthcare professionals, to discuss what is needed to introduce the interventions successfully in practice (group 9). Interviews and focus groups will be audio-recorded and transcribed for further analysis.

Sample size

The sample size of 270 participants is based on a power calculation. A review of earlier studies on the effects of cognitive rehabilitation in MS [ 14 ] suggests that we can expect moderate effects (effect size of 0.35) between pre- and post-intervention. Assuming a significance level of 0.025, a power of 0.80, and an effect size of 0.35, 75 participants per group are needed. A conservative alpha of 0.025 was chosen to account for the comparison of two interventions with a control group. A correlation of 0.6 between the repeated measures was assumed. Based on previous experience, a drop-out of 20% over the study period is expected; we will therefore include 90 subjects per group to ensure sufficient power. This study will be carried out using an intention-to-treat protocol, so subjects who withdraw from the study after inclusion will not be replaced.

Recruitment

Most participants will have taken part in work package 1 before participating in the current study. These participants will mainly be recruited through 12 hospitals in the Netherlands, each with an MS population of 250–650 people. Of the 750 participants who take part in work package 1, we expect that approximately 30% have mild cognitive deficits [ 102 ], yielding approximately 220 participants who can enrol in the current study. The remaining 50 participants will be recruited through social media, and we can approach potential participants by contacting PwMS who gave permission to be approached for further research projects. PwMS who are willing to participate will be checked for eligibility against the inclusion/exclusion criteria. People who are eligible to participate in work packages 2 and 3 will receive an information letter and will be asked to sign informed consent.

Allocation and blinding

After baseline measurements, participants will be randomly assigned to one of the three conditions with a 1:1:1 allocation using block randomisation via Research Randomizer ( https://www.randomizer.org ) to ensure equal group sizes.
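As an illustration of this allocation scheme, the sketch below implements 1:1:1 block randomisation in base R. It is a hypothetical illustration only; the trial itself uses the Research Randomizer web application.

```r
# Minimal sketch of 1:1:1 block randomisation (illustrative; the trial uses
# Research Randomizer). Each block contains every arm equally often, so the
# three groups stay balanced throughout recruitment.
block_randomise <- function(n_blocks, block_size = 6,
                            arms = c("brain", "mind", "control")) {
  stopifnot(block_size %% length(arms) == 0)
  unlist(lapply(seq_len(n_blocks), function(b) {
    sample(rep(arms, block_size / length(arms)))  # permute one block
  }))
}

set.seed(2023)                     # seed for reproducibility (illustrative)
allocation <- block_randomise(45)  # 45 blocks of 6 = 270 participants
table(allocation)                  # exactly 90 per arm
```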
The block randomisation will not be disclosed to researchers performing measurements and analyses. After receiving informed consent, the involved lifestyle coach or work coach will be informed of the inclusion of the participant in their treatment arm by a researcher who is not involved in the measurements and analyses. Cognitive measures, structural and functional brain measures, and neurological, blood, and physiological measures will be collected, analysed, and stored under a single-blinded protocol. To achieve single blinding, eligible participants will be randomised and assigned to an intervention by three designated researchers (SV, MR, KH) who are not involved in data collection and analyses. The researchers who are involved in data collection and analyses (JA, SS) are not informed about the allocation, nor do they discuss the intervention with the participants and involved coaches. In addition, researchers who carry out the measurements will explicitly instruct participants not to disclose any information about the intervention they are following.

Data capture and data monitoring

Before the start of recruitment, study researchers responsible for data collection are trained in the use of the assessments. All study data, except the structural and functional MRI data, are stored in the Castor Electronic Data Capture (EDC) system, a secure, web-based application with features such as audit trails, monitoring, and capturing and integrating external data. Additionally, regular monitoring will be carried out by the sponsor to ensure data quality, accuracy, and GCP adherence. Collected data will be stored under a code and will always be checked by a second researcher to minimise input errors. Imaging data will be stored on the image data server of the hospital.

Statistical analysis

Data will be analysed using R version 4.2 or higher (R Core Team, Vienna, Austria) with RStudio (PBC, Boston, MA) and/or SPSS version 28 or higher (IBM, Armonk, NY). Analyses will be performed according to intention-to-treat and per-protocol principles. The main focus will be on the intention-to-treat analysis, as this reduces bias and better represents daily practice. A per-protocol analysis will be performed to evaluate the effect of the intervention itself, supplementary to the intention-to-treat analysis; data will be included in the per-protocol analysis if the participant completed the intervention. The alpha level will be set at a statistical threshold of α = 0.05, corrected for multiple comparisons when applicable. GAS will be used to discriminate between responders and non-responders to the interventions. For the outcome cognitive functioning, we will calculate a reliable change index (RCI) for each cognitive test based on the enhanced usual care group; using RCI scores allows us to correct for learning effects. An RCI score above + 1.64 or below -1.64 is assumed to reflect significant improvement or decline, respectively [ 5 , 103 ] (a code sketch of this computation is given below). The psychological and work functioning data will be obtained via questionnaires and will be analysed as continuous variables, together with the physiological data. MRI images will be analysed using FSL ( https://fsl.fmrib.ox.ac.uk ) and Freesurfer (Charlestown, Massachusetts). Data obtained (atrophy, white matter integrity, task-specific brain activation, and brain connectivity) will be exported to RStudio, after which mixed-model analyses will be performed.
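The protocol states only that the RCI is based on the enhanced usual care group; one common practice-adjusted formulation is sketched below in R. The formulation and all variable names are assumptions for illustration.

```r
# Sketch of a practice-adjusted reliable change index (RCI). Assumption:
# the change in the enhanced-usual-care group provides both the expected
# practice (learning) effect and the spread of retest change.
rci <- function(pre, post, control_pre, control_post) {
  practice  <- mean(control_post - control_pre)  # mean learning effect
  se_change <- sd(control_post - control_pre)    # SD of control change scores
  ((post - pre) - practice) / se_change
}

# Scores above +1.64 flag reliable improvement; below -1.64, reliable decline.
# rci(sdmt_t0, sdmt_t1, ctrl_sdmt_t0, ctrl_sdmt_t1)   # hypothetical inputs
```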
Missing data will be minimised by using digital questionnaires that prompt participants to answer each question before being allowed to proceed. For other outcome measures, multiple imputation will be used for missing baseline and follow-up data. Outliers will be identified and excluded from the main analysis. Primary analyses will evaluate the effectiveness of the investigated interventions (‘strengthening the brain’, ‘strengthening the mind’) compared to a control group (‘enhanced usual care’) in improving HRQoL (overall score on the SF-36). A mixed-model analysis will be performed, with time (T0, T1, T2, and T3) and condition (‘strengthening the brain’, ‘strengthening the mind’, and ‘enhanced usual care’) as fixed factors, and subject as a random factor; p-values and estimates of effect sizes will be obtained (a code sketch of this analysis follows at the end of this section). Additionally, separate mixed-model analyses will be performed to evaluate the effect of the investigated interventions compared to the control group on cognitive, psychological, physiological, and work functioning, and on enhancing the brain's functional network. Predictors of treatment response will be investigated by adding baseline scores on the biological, psychological, and environmental measures to the mixed-model analyses. Mediation analyses will be used to study which mechanisms (i.e., biological, psychological, and environmental) mediate the effect of the interventions on HRQoL. Three conditions for mediation will be tested [ 104 ]: 1) the investigated interventions should affect HRQoL; 2) the investigated interventions should affect the presumed mediator; and 3) the presumed mediator and HRQoL should be related. The cost-effectiveness analyses will be performed from a healthcare and a societal perspective according to the intention-to-treat principle. Intervention costs will be calculated using a bottom-up micro-costing approach. Missing cost and effect data will be imputed using multiple imputation by chained equations (MICE) with predictive mean matching to account for the skewed distribution of costs. The number of imputed datasets will be increased until the loss of efficiency is smaller than 5% [ 105 ]. Each dataset will be analysed separately as described below, after which results will be pooled using Rubin's rules. Bivariate regression models will be used to estimate cost and effect differences between the intervention groups and the control group, adjusting for confounders where necessary. Incremental cost-effectiveness ratios will be calculated by dividing the difference in costs by the difference in effects. Statistical uncertainty will be estimated using bias-corrected and accelerated bootstrapping and will be presented in cost-effectiveness planes and acceptability curves. Sensitivity analyses will be performed to assess the robustness of the results.

Qualitative analysis

The analysis of the interviews will explore the experiences of stakeholders with the interventions. The analysis of the focus groups focuses on the interpretation of the effects of the interventions and their possible explanations, and additionally explores factors that influence implementation. For the qualitative study, thematic analysis will be used to identify recurrent themes of meaning within the qualitative data [ 106 ]. After transcription, the data will be analysed and open-coded using ATLAS.ti version 23 or higher. First, the smallest possible units will be determined and coded; next, the coded segments will be combined to identify themes.
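To make the quantitative analysis plan concrete, a minimal sketch of the primary mixed-model analysis and of the imputation step follows, using the lme4 and mice packages. The protocol specifies R but not these particular packages, and all data-frame and variable names are hypothetical.

```r
# Sketch of the primary analysis (assumption: lme4 for the mixed model and
# mice for multiple imputation; names are invented for illustration).
library(lme4)
library(mice)

# Mixed model with time (T0-T3) and condition as fixed factors and subject
# as random factor; the time-by-condition interaction carries the
# intervention effect on the SF-36 HRQoL score.
fit <- lmer(sf36 ~ time * condition + (1 | subject), data = trial_long)
summary(fit)

# Multiple imputation by chained equations with predictive mean matching
# for the skewed cost data, pooled across imputations with Rubin's rules.
imp    <- mice(cost_effect_data, method = "pmm", m = 20, seed = 2023)
pooled <- pool(with(imp, lm(costs ~ condition)))
summary(pooled)
```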
Current status

Participants are currently being recruited. The first participant was included on April 16th, 2023.
Discussion

In this multi-arm, single-blind, controlled trial, 270 PwMS will be randomly assigned to a lifestyle intervention, a work intervention, or enhanced usual care. The interventions have a duration of four months, with a follow-up of 12 months thereafter; this is a longer follow-up period than is typical for these types of interventions and thus allows us to evaluate long-term effects. The interventions will be tailor-made to the individual needs of the participant. In the two interventions, HRQoL is enhanced via two hypothesised working mechanisms: in ‘strengthening the brain’ we aim to postpone the development of cognitive decline through a combination of physical exercise, one-on-one mental coaching, dietary advice, and cognitive training, while in ‘strengthening the mind’ we aim to prevent job loss by combining the capability approach and the participatory approach in one-on-one coaching. In this study we specifically target PwMS with only mild cognitive impairment, thereby intervening before problems become advanced, when prevention may still be possible. The study focuses on HRQoL, which, along with the other outcome measures, will be monitored for a total of 16 months, aiming to enhance our understanding of the effectiveness of the interventions over a longer period. Additionally, the involvement of a wide variety of specialised personnel reflects an interdisciplinary approach, resulting in a broad view of what is important for improving HRQoL in PwMS. The qualitative study adds insights from the experiences of participants and relevant stakeholders during the interventions; this will enable interpretation of the observed effects and provide insight into factors relevant for implementation in clinical practice. The interventions ‘strengthening the brain’ and ‘strengthening the mind’ are aimed at two different problems: cognitive decline and job loss. Both interventions use a tailor-made approach and both aim to improve HRQoL, albeit via different working mechanisms. Combining such diverse interventions may be a way forward to improve care for this group, as problems are rarely one-sided. Additionally, the interventions were co-designed with end users to make adoption in clinical practice more feasible. In summary, the outcome of this study is expected to support the paradigm shift from symptom management towards preventative interventions, ultimately improving HRQoL in PwMS.
Background

Up to 65% of people with multiple sclerosis (PwMS) develop cognitive deficits, which hamper their ability to work and to participate in day-to-day life, ultimately reducing quality of life (QoL). Early cognitive symptoms are often less tangible to PwMS and their direct environment and are noticed only when symptoms and work functioning problems become more advanced, i.e., when (brain) damage is already substantial. Treatment initiated only at such a late stage can leave cognitive impairment and unemployment unresolved, highlighting the need for preventative interventions in PwMS.

Aims

This study aims to evaluate the (cost-)effectiveness of two innovative preventative interventions, aimed at postponing cognitive decline and work functioning problems, compared to enhanced usual care, in improving health-related QoL (HRQoL).

Methods

Randomised controlled trial including 270 PwMS with mild cognitive impairment, who have paid employment ≥ 12 h per week and are able to participate in physical exercise (Expanded Disability Status Scale < 6.0). Participants are randomised across three study arms: 1) ‘strengthening the brain’, a lifestyle intervention combining personal fitness, mental coaching, dietary advice, and cognitive training; 2) ‘strengthening the mind’, a work-focused intervention combining the capability approach and the participatory approach in one-on-one coaching by trained work coaches who have MS themselves; 3) control group, receiving general information about cognitive impairment in MS and care as usual. Intervention duration is four months, with short-term and long-term follow-up measurements at 10 and 16 months, respectively. The primary outcome measure of the Don't be late! intervention study will be HRQoL as measured with the 36-item Short Form. Secondary outcomes include cognition, work-related outcomes, physical functioning, structural and functional brain changes, psychological functioning, and societal costs. Semi-structured interviews and focus groups with stakeholders will be organised to qualitatively reflect on the process and outcome of the interventions.

Discussion

This study seeks to prevent (further) cognitive decline and job loss due to MS by introducing tailor-made interventions at an early stage of cognitive symptoms, thereby maintaining or improving HRQoL. Qualitative analyses will be performed to support successful implementation into clinical practice.

Trial registration

Retrospectively registered at ClinicalTrials.gov with reference number NCT06068582 on 10 October 2023.
Abbreviations

9-Hole Peg Test
Amsterdam Short Term Memory Test
Brain Connectivity Toolbox
Blood-Oxygen Level Dependent
Brief Visuospatial Memory Test-Revised
Checklist Individual Strength
Controlled Oral Word Association Test
California Verbal Learning Test, second edition
Sorting Test from the Delis-Kaplan Executive Function System
Diffusion Tensor Imaging
Electronic Data Capture
Expanded Disability Status Scale
EuroQol five-dimensional questionnaire
Functional Connectivity
Hospital Anxiety and Depression Scale
Health-related quality of life
iMTA Medical Cost Questionnaire
iMTA Productivity Cost Questionnaire
Judgement of Line Orientation
Minimal Assessment of Cognitive Functioning in Multiple Sclerosis
Multiple Sclerosis
Multiple Sclerosis Neuropsychological Screening Questionnaire
MS vereniging Nederland
Paced Auditory Serial Addition Test
Personal Fitness Nederland
Progressive MS
Purdue Pegboard Test
Perceived level of Stress
People with MS
Quality-Adjusted Life-Years
Quality of life
Reliable Change Index
Relapsing remitting MS
Symbol Digit Modalities Test
Shuttle Walk Test
Timed 25-Foot Walk
Tract Based Spatial Statistics
Visual Analogue Scale

Acknowledgements

Members of the Don't be late! consortium are: Participating sites (Casper E.P. van Munster, Amphia Ziekenhuis, Breda, The Netherlands; Renske G. Wieberdink, MS Centrum Stedendriehoek, Gelre, The Netherlands; Jolijn Kragt, Reinier de Graaf Ziekenhuis, Delft, The Netherlands; Judith Schouten, Rijnstate, Arnhem, The Netherlands; Erwin L.J. Hoogervorst, St. Antonius Ziekenhuis, Nieuwegein, The Netherlands; Paul A.D. Bouma, Tergooi Ziekenhuizen, Hilversum, The Netherlands; Floris G.C.M. De Kleermaeker, Viecuri Medisch Centrum, Venlo, The Netherlands; Meike Holleman, Jeroen Bosch Ziekenhuis, ’s-Hertogenbosch, The Netherlands; Sofie Geurts, Canisius Wilhelmina Ziekenhuis, Nijmegen, The Netherlands; Christaan de Brabander, Admiraal de Ruyter Ziekenhuis, Vlissingen, The Netherlands; Nynke F. Kalkers, OLVG, Amsterdam, The Netherlands); Bram A.J. den Teuling, Pim van Oirschot, Sonja Cloosterman, Sherpa B.V., Nijmegen, The Netherlands; Jos Vermeer, Personal Fitness Nederland (PFN) B.V., Eindhoven, The Netherlands; Chris C. Schouten, Dutch MS Society, Den Donder, The Netherlands; Gerard J. Stege, Merck B.V., Schiphol-Rijk, The Netherlands; Thijs van ‘t Hullenaar, Sanofi B.V., Genzyme Europe, Amsterdam, The Netherlands. We would like to thank all members of the consortium for their contribution to the project.

Dissemination policy

After data collection has finished, data will be analysed and results will be published in scientific journals and presented at (inter)national scientific meetings. An embargo period of one year will apply before study data are shared in a data repository.

Authors' contributions

JA: Conceptualisation, Methodology, Writing – original draft, Visualisation, Project Administration. SS: Conceptualisation, Methodology, Writing – Review & Editing, Project Administration. JB: Methodology, Writing – Review & Editing. VG: Conceptualisation, Methodology, Writing – Review & Editing, Supervision. BJ: Conceptualisation, Writing – Review & Editing, Supervision. MK: Conceptualisation, Supervision. MR: Conceptualisation, Methodology, Writing – Review & Editing, Supervision. FS: Conceptualisation, Methodology, Writing – Review & Editing, Supervision. ES: Conceptualisation, Methodology, Writing – Review & Editing. MS: Conceptualisation, Methodology, Writing – Review & Editing, Supervision.
BU: Conceptualisation, Methodology, Supervision. SV: Conceptualisation, Methodology, Writing – Review & Editing, Project Administration. PW: Conceptualisation, Writing – Review & Editing. GW: Conceptualisation, Methodology, Writing – Review & Editing, Supervision. KH: Conceptualisation, Methodology, Writing – Review & Editing, Supervision. HH: Conceptualisation, Methodology, Writing – Review & Editing, Supervision, Funding Acquisition. All authors read and approved the final manuscript.

Authors' information

Not applicable.

Funding

This project is peer-reviewed and funded by the Dutch Research Council (NWO) as part of the Dutch National Research Agenda (NWA) (file number NWA.1292.19.064). The funder has no influence on the design of the study; the collection, analysis, and interpretation of data; or the writing of the manuscript.

Availability of data and materials

No datasets were generated or analysed during the current study.

Declarations

Ethics approval and consent to participate

The study has been approved by the medical ethics review committee of Amsterdam University Medical Centers (METc 2023.0613, protocol version 3, dated 16-02-2023). Any future substantial changes to the protocol will undergo review and approval by the medical ethics review committee. This study will be carried out according to the Declaration of Helsinki. The study is registered at ClinicalTrials.gov with registration number NCT06068582. Written informed consent for publication of results will be obtained from all participants.

Consent for publication

Not applicable.

Competing interests

JA, SS, JB, VG, BJ, MK, MR, FS, ES, SV, PW, GW, and KH have nothing to declare with respect to this study. BU has received research support and/or consultancy fees from Biogen Idec, Genzyme, Merck Serono, Novartis, Roche, Teva and Immunic Therapeutics. MS serves on the editorial boards of Neurology, Multiple Sclerosis Journal and Frontiers in Neurology, receives research support from the Dutch MS Research Foundation, Eurostars-EUREKA, ARSEP, Amsterdam Neuroscience, MAGNIMS and ZonMW (Vidi grant, project number 09150172010056), and has served as a consultant for or received research support from Atara Biotherapeutics, Biogen, Celgene/Bristol Meyers Squibb, EIP, Sanofi, MedDay and Merck. HH is an editor of the Multiple Sclerosis Journal controversies section, receives research support from the Dutch MS Research Foundation and the Dutch Research Council, and has served as a consultant for or received research support from Atara Biotherapeutics, Biogen, Novartis, Celgene/Bristol Meyers Squibb, Sanofi Genzyme, MedDay and Merck BV.
CC BY
no
2024-01-16 23:45:34
BMC Neurol. 2024 Jan 15; 24:28
oa_package/a8/1b/PMC10789039.tar.gz
PMC10789040
0
Introduction

Favipiravir was developed in 2002 as an anti-influenza medication [ 1 ]. It is a pyrazinecarboxamide derivative, a prodrug that is metabolised within cells to its active antiviral form, favipiravir-ribofuranosyl-5'-triphosphate (favipiravir-RTP). Favipiravir-RTP is a nucleoside analogue that selectively inhibits viral RNA-dependent RNA polymerase and has shown in vitro activity against many RNA viruses [ 2 ]. Favipiravir has been licensed in Japan for influenza, and in China for investigational use, but it has not been licensed elsewhere. Favipiravir has been used in influenza at two doses: an initial dose of 3.2g on day 0 (D0) followed by 1.2g daily thereafter, and a higher dose of 3.6g on D0 followed by 1.6g daily (the dose used in this study). A trial using much higher doses of favipiravir (6g on D0, and 2.4g daily on D1–9) was conducted in patients with Ebola virus disease in Guinea, although the study had no control arm and could not reach conclusions on efficacy [ 3 ]. Favipiravir was identified as having antiviral activity against the SARS-CoV-2 virus through early in vitro screening [ 4 – 6 ], albeit at concentrations up to 1,000-fold higher than those required to inhibit influenza in vitro [ 7 ]. Studies in hamsters have demonstrated a beneficial antiviral effect against SARS-CoV-2, although only at very large doses, suggesting that high exposures might be needed to achieve beneficial effects in treating COVID-19 [ 8 , 9 ]. Therapeutic recommendations for the treatment of early COVID-19 still vary widely. Favipiravir has been recommended, and was widely used, as a treatment for COVID-19 in some countries, including Thailand ( https://ddc.moph.go.th/viralpneumonia/eng/file/guidelines/g_treatment.pdf ). Although some observational studies have suggested benefit from favipiravir [ 10 – 14 ], and a large clinical benefit was reported in one open-label randomised controlled trial (with shortening of the time to clinical improvement from 14 to 2 days in hospitalised patients) [ 15 ], the other reported randomised trials have either shown no benefit, or the evidence of clinical efficacy has been marginal or unconvincing [ 16 – 29 ]. However, several of these studies were conducted in hospitalised patients, in whom the window of opportunity for antivirals to provide benefit may have closed: antiviral drugs work better in early illness than in later infection, when inflammatory pathology dominates. Dosing has also varied between the favipiravir studies. Given the lower antiviral activity of favipiravir against SARS-CoV-2 relative to influenza, high doses are probably necessary for optimal in vivo antiviral efficacy. Reassuringly, no significant safety or tolerability issues have been identified in these clinical studies, although concerns have been raised regarding the risk to the fetus if potentially mutagenic antiviral nucleoside analogues are given to pregnant women [ 30 ]. Overall, the available evidence still leaves considerable uncertainty as to whether high-dose favipiravir is a useful antiviral treatment for early COVID-19 in outpatients. We present the results from a randomised platform trial assessing the in vivo antiviral activity of favipiravir in adults with acute early COVID-19.
Methods

PLATCOV is an ongoing phase 2 open-label, randomised, controlled adaptive platform trial (ClinicalTrials.gov: NCT05041907, registered 13/09/2021) [ 31 ]. It provides a standardised quantitative comparative method for the in vivo assessment of potential antiviral treatments in low-risk adults with early symptomatic COVID-19. Daily oropharyngeal viral densities are measured by qPCR. The primary outcome measure in PLATCOV is the viral clearance rate, derived from the slope of the log 10 oropharyngeal viral clearance curve over the 7 days following randomisation, estimated under a linear model [ 32 ]. The treatment effect is defined as the multiplicative change in the viral clearance rate estimate relative to the contemporaneous no study drug arm (detailed below). The trial was conducted in Thailand, at the Faculty of Tropical Medicine (FTM), Mahidol University, Bangkok; Bangplee Hospital, Samut Prakarn; and Vajira Hospital, Navamindradhiraj University, Bangkok; and in Belo Horizonte, Minas Gerais, Brazil (see Supplementary materials ). All patients provided fully informed written consent. All methods were approved and carried out in accordance with local and national research boards in Thailand, the Mahidol University Faculty of Tropical Medicine Ethics Committee, the Central Research Ethics Committee, Thailand, the National Research Ethics Commission of Brazil, and the Oxford University Tropical Research Ethics Committee (see Supplementary materials ). The PLATCOV trial was coordinated and monitored by the Mahidol Oxford Tropical Medicine Research Unit (MORU) in Bangkok and overseen by a trial steering committee (TSC). Interim results were reviewed regularly by a data and safety monitoring board (DSMB). The funders had no role in the design, conduct, analysis or interpretation of the trial.

Participants and procedures

Previously healthy adults aged between 18 and 50 years were eligible for the trial if they had early symptomatic COVID-19 (i.e., reported symptoms for ≤ 4 days), oxygen saturation ≥ 96%, were unimpeded in activities of daily living, and gave fully informed consent to study participation. SARS-CoV-2 positivity was defined either as a nasal lateral flow antigen test that became positive within two minutes (STANDARD® Q COVID-19 Ag Test, SD Biosensor, Suwon-si, Korea) or as a positive PCR test within the previous 24h with a cycle threshold value (Ct) < 25 (all viral gene targets), both of which indicate high pharyngeal viral densities. The latter criterion was added on 25 November 2021 to include patients with recent PCRs confirming high viral loads; this was the only change to the prespecified inclusion/exclusion criteria. Exclusion criteria included taking any potential antivirals or pre-existing concomitant medications, chronic illness or significant comorbidity, haematological or biochemical abnormalities, pregnancy (a urinary pregnancy test was performed in females), breastfeeding, and contraindication or known hypersensitivity to any of the study drugs [ 31 ]. Block randomisation was performed for each site via a centralised web app designed by MORU software engineers using RShiny®, hosted on a MORU webserver. At enrolment, after obtaining fully informed consent and entering the patient details, the app provided the randomised allocation. The no study drug arm comprised a minimum proportion of 20% of patients at all times, with uniform randomisation ratios applied across the active treatment arms. The study was open-label (no placebos).
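A minimal sketch of this allocation scheme in R follows. The rule below (control share never below 20%, with the remainder split evenly across active arms) is an assumption consistent with the description above, not the trial's actual randomisation code.

```r
# Sketch of the randomisation ratios: the no study drug arm keeps at least
# a 20% share, and the remaining probability is split uniformly across the
# active arms. Illustrative assumption; the trial used a centralised
# RShiny web app for allocation.
allocation_probs <- function(active_arms) {
  k <- length(active_arms)
  p_control <- max(0.20, 1 / (k + 1))
  c("no study drug" = p_control,
    setNames(rep((1 - p_control) / k, k), active_arms))
}

allocation_probs(c("favipiravir", "remdesivir", "molnupiravir"))
# With 3 active arms, 1/(k+1) = 0.25 > 0.20, so all four arms get 0.25.
```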
Enrolled patients were either admitted to the study ward (in Thailand), consistent with national recommendations at the time, or followed as outpatients at home (in Brazil). After randomisation and baseline procedures (see Supplementary materials ), oropharyngeal swabs (two swabs from each tonsil) were taken as follows. Each flocked swab (Thermo Fisher MicroTest® and later COPAN FLOQSwabs®) was rotated against the tonsil through 360° four times and placed in Thermo Fisher M4RT™ viral transport medium (3mL). Swabs were transferred at 4–8°C, aliquoted, and then frozen at -80°C within 48h. Separate swabs from each tonsil were taken once daily from day 0 to day 7, and again on day 14. Each swab was processed and tested separately. Vital signs were recorded three times daily, and symptoms and any adverse effects were recorded daily [ 31 ]. Patients allocated to favipiravir received 1800mg on an empty stomach (nine 200mg tablets; Favir®, Government Pharmaceutical Organization, in Thailand, n = 100; or Avigan®, FUJIFILM Toyama Chemical Co., Ltd., in Brazil, n = 16) at the start of treatment, followed 12 h later by a further 1800mg. Thereafter the patients took 800mg twice daily for a further 6 days, totalling 13.2g over 7 days. All patients received standard symptomatic treatment, excluding antivirals. The TaqCheck® SARS-CoV-2 Fast PCR Assay (Applied Biosystems, Thermo Fisher Scientific, Waltham, Massachusetts) quantitated viral densities (SARS-CoV-2 RNA copies per mL). This multiplexed real-time PCR method detects the SARS-CoV-2 N and S genes, and human RNase P, in a single reaction. RNase P was used to correct for variation in human cell content across samples. Viral densities were quantified against ATCC heat-inactivated SARS-CoV-2 (VR-1986HK strain 2019-nCoV/USA-WA1/2020) standards. Viral variants were identified using whole genome sequencing (see Supplementary materials ).

Outcome measures

The primary outcome measure was the rate of viral clearance, expressed as a slope coefficient [ 32 ] and estimated under a Bayesian hierarchical linear model (mixed-effects model) fitted to the daily log 10 oropharyngeal swab eluate viral density measurements between days 0 and 7 (18 measurements per patient). Before model fitting, Ct values were transformed to RNA copies per mL using a random effects linear model fit to the ATCC controls (random slope and intercept for each plate, with additional fixed effects for each laboratory). Viral load measurements below the limit of quantification (Ct values ≥ 40) were treated as left-censored under the model. A non-linear model (allowing an initial log-linear increase in viral loads followed by a log-linear decrease in some patients) was also fitted to the data as a sensitivity analysis. All models included slope and intercept covariate effects for the virus variant (expressed as the major sub-lineages). Additional models included slope and intercept covariate effects for age, vaccination status, and days since symptom onset. The estimated individual viral clearance rates (i.e., the slope coefficients from the model fit) can be expressed as clearance half-lives (t 1/2 = log 10 0.5/slope). The treatment effect was defined as the multiplicative change (%) in the mean viral clearance rate relative to the no study drug arm (i.e., how much, on average, the test treatment accelerates viral clearance) [ 32 ]. Thus, a 50% increase in clearance rate equals a 33% reduction in clearance half-life.
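A short worked example of this slope-to-half-life conversion follows in R. The slope value is hypothetical, chosen so that the implied half-life (about 15.7 h) matches the no study drug median reported in the Results below.

```r
# Worked example: converting a log10 clearance slope (per day) into a
# half-life in hours, using t1/2 = log10(0.5) / slope.
slope_to_halflife_hours <- function(slope_per_day) {
  24 * log10(0.5) / slope_per_day
}

slope_control <- -0.46                # hypothetical: implies ~15.7 h half-life
slope_faster  <- 1.5 * slope_control  # a 50% faster clearance rate

t_control <- slope_to_halflife_hours(slope_control)  # ~15.7 h
t_faster  <- slope_to_halflife_hours(slope_faster)   # ~10.5 h

1 - t_faster / t_control  # = 1 - 1/1.5 ~ 0.33, i.e. a 33% shorter half-life
```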
All-cause hospitalisation for clinical deterioration (until day 28) was a secondary endpoint. For each studied intervention the sample size was adaptive, based on prespecified futility and success stopping rules. Initially, the futility stopping rule was a probability > 0.9 that the acceleration in viral clearance was < 5%; at the prespecified open first interim analysis, performed after 50 patients had been enrolled, the futility threshold was increased to 12.5%. Adverse events were graded according to the Common Terminology Criteria for Adverse Events v.5.0 (CTCAE). Summaries were generated if an adverse event was ≥ grade 2 and was new or had increased in intensity. Serious adverse events were recorded separately and reported to the DSMB.

Statistical analysis

All analyses were done in a prespecified modified intention-to-treat (mITT) population, comprising patients who had ≥ 3 days of follow-up data. A series of linear and non-linear Bayesian hierarchical models were fitted to the viral quantitative PCR (qPCR) data ( Supplementary materials ). Model fits were compared using approximate leave-one-out cross-validation as implemented in the package loo . All data analysis was done in R version 4.0.2. Model fitting was done in Stan via the RStan interface. All code and data are openly accessible via GitHub: https://github.com/jwatowatson/PLATCOV-Favipiravir .
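The futility rule can be read directly off the posterior draws of the treatment effect. The sketch below simulates draws that roughly mimic the reported favipiravir posterior (mean effect about -1%, posterior probability about 0.97 of being below the 12.5% margin); in the trial these draws come from the Stan model fit, and the simulation parameters here are assumptions for illustration.

```r
# Sketch of the prespecified futility rule applied to posterior draws of
# the treatment effect (multiplicative change in viral clearance rate).
# Draws are simulated; parameters roughly mimic the reported result.
set.seed(42)
effect_draws <- rnorm(4000, mean = -0.01, sd = 0.07)

p_below_margin <- mean(effect_draws < 0.125)  # P(acceleration < 12.5%)
p_below_margin                                # ~0.97
p_below_margin > 0.9                          # TRUE -> stop arm for futility
```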
Results

The trial began recruitment on 30 September 2021. On 31 October 2022, the favipiravir arm of the trial was stopped and favipiravir was removed from the randomisation lists in Thailand and Brazil, following a recommendation from the DSMB, as the prespecified futility margin had been reached. This decision was based on PCR data from 102 patients randomised to favipiravir and 104 concurrent controls. Of the 615 patients enrolled by that time, 116 patients had been randomised to receive favipiravir, 132 had been randomised to no study drug, and the remainder ( n = 367) were randomised to other interventions (casirivimab/imdevimab, tixagevimab/cilgavimab, remdesivir, ivermectin, nitazoxanide, fluoxetine, molnupiravir, or nirmatrelvir/ritonavir).

Virological responses

The mITT population included 114 patients randomised to favipiravir and 126 patients randomised to no study drug (Fig. 1 ). The baseline geometric mean (GM) oropharyngeal swab eluate viral load was 5.5×10 5 RNA copies/mL (IQR 4.7×10 5 to 6.3×10 5 ) (Table 1 , Fig. 2 a). Rates of viral clearance were estimated under a linear mixed-effects model fit to all PCR data taken up to day 7 after randomisation in the mITT population (4,318 swabs in 240 patients, of which 3,839 [89%] were above the lower limit of quantification). A non-linear model was used as a sensitivity analysis. Under the linear model, there was no evidence of a difference in viral clearance rates between the favipiravir-treated patients and those receiving no study drug (mean difference: -1%; 95% CI: -14% to 14%). The posterior probability that the effect was less than the prespecified futility margin of 12.5% was 0.97 (Fig. 2 b). The non-linear model gave very similar estimates (mean difference: -5%; 95% CI: -14% to 6%; posterior probability of an effect below 12.0% equal to 1). Under the linear model, patients treated with favipiravir had an estimated median viral clearance half-life of 16.6 h (range 6.7 to 48.0) and patients randomised to the no study drug arm had an estimated median viral clearance half-life of 15.7 h (range 3.4 to 42.1) (Fig. 3 a). In patients receiving favipiravir, there was no association between body weight (i.e., the mg/kg dose of favipiravir) and the estimated viral clearance ( p = 0.2) (Fig. 3 b).

Adverse effects

The oropharyngeal swabbing procedures and all treatments were well tolerated. There were three serious adverse events (SAEs) in the no study drug arm and two in the favipiravir arm, all resulting in the secondary endpoint of clinical deterioration leading to hospitalisation for medical reasons (three patients with raised creatine phosphokinase (CPK) were already inpatients for isolation reasons; two no study drug and one favipiravir). In the favipiravir arm, a patient was readmitted 2 days after completing the 7-day course of favipiravir with fever and a maculopapular rash over the face, trunk, back, and extremities, with sparing of the palms and soles. The rash was reviewed by a dermatologist, who diagnosed a viral exanthem not related to the study drug. Two patients in the no study drug arm and one in the favipiravir arm had raised CPK levels (> 10 times the upper limit of normal) attributed to COVID-19-related skeletal muscle damage. These improved with fluids and supportive management and were considered unrelated to study treatment. One patient in the no study drug arm was readmitted one day after discharge due to chest pain and lethargy.
All clinical and laboratory investigations were normal and the patient was discharged the following day. There were no treatment-related serious adverse events.
Discussion

Continued uncertainty over the value of different COVID-19 treatments has resulted in substantial variation in therapeutic guidelines and clinical practices across the world. In the absence of other affordable and available oral antiviral treatments, favipiravir has been recommended for the treatment of uncomplicated COVID-19 in several countries, including Japan, Russia, Saudi Arabia, Turkey, Hungary, Kenya and Thailand (where it was recommended for patients with mild COVID-19 pneumonia from May 2020 until December 2022) ( https://ddc.moph.go.th/viralpneumonia/eng/file/guidelines/g_treatment.pdf ). Knowing definitively whether an antiviral drug has antiviral efficacy in vivo should be a prerequisite for its deployment, but the urgency and gravity of the spreading pandemic in 2020 meant that many drugs were recommended without clear evidence of clinical benefit. In this fourth year of the COVID-19 pandemic, increasingly mild clinical presentations resulting from immune protection from vaccines and previous infections, declining viral virulence, and the availability in some regions of newly developed oral antivirals with proven efficacy (notably molnupiravir and nirmatrelvir/ritonavir) [ 33 , 34 ] have all contributed to favipiravir no longer being recommended for COVID-19. For the same reasons, the use of other repurposed drugs has also decreased. This has left substantial uncertainty as to their clinical benefit in COVID-19 and their potential use in future pandemics caused by novel viruses. This comparative in vivo pharmacodynamic assessment, conducted in “low risk” adults with early symptomatic COVID-19 infections, shows that favipiravir, given at relatively high oral doses, does not have measurable antiviral activity in vivo and is, therefore, very unlikely to be clinically beneficial. The lack of demonstrable in vivo activity contrasts with the approximately 30 to 40% acceleration in viral clearance rate observed for remdesivir and molnupiravir in this trial platform [ 31 ]. The main limitation of our study is that it is open-label, which may have led to more withdrawals in the no study drug arm. Favipiravir was well tolerated at the high doses used in this study. Favipiravir has complex non-linear pharmacokinetic properties [ 32 ]. It is metabolised primarily in the liver by aldehyde oxidase and excreted via the kidneys. Because of dose- and time-dependent auto-inhibition of aldehyde oxidase, favipiravir boosts its own plasma concentrations. This can result in exposures over twice the SARS-CoV-2 in vitro EC 90 [ 6 ], although there is substantial inter-patient variability in achieved plasma concentrations, and lower exposures have been noted in certain populations, e.g. those from the United States compared to Japan and China [ 35 ]. Although pharmacokinetic modelling suggests that exposures sufficient for an antiviral effect can be achieved, the relationship between ex vivo SARS-CoV-2 inhibitory concentrations and consequent therapeutic effects in COVID-19 in vivo is uncertain. This study does not exclude a therapeutic benefit from even higher oral or parenteral doses of favipiravir, although there was no evidence of a dose-response relationship derived from the variation in weight-adjusted doses. Similar negative results have been reported recently with ivermectin [ 36 ], which also fails to halt disease progression when given to outpatients [ 37 ].
In contrast, the antiviral remdesivir clearly does accelerate viral clearance [ 38 ], and in clinical trials it does prevent disease progression [ 39 ]. The association between accelerated viral clearance and improved clinical outcomes in early COVID-19 has been confirmed in studies with monoclonal antibodies as well as with the newly developed antiviral drugs [ 33 , 34 , 36 , 40 – 42 ]. By contrast, the reported lack of a demonstrable antiviral effect in the PINETREE study of remdesivir, despite demonstration of a clear clinically beneficial effect, likely resulted from too-infrequent nasopharyngeal viral density measurements and from the statistical analysis approach used to assess differences in viral loads. All these studies were completed in largely unvaccinated populations at a time when a higher proportion of COVID-19 infections progressed to hospitalisation and severe outcomes. If repeated today, such studies would need to be substantially, and perhaps prohibitively, larger in order to detect clinical benefit. For example, molnupiravir was shown to provide clinical benefit in studies conducted over two years ago [ 33 ], but in the more recent community-based PANORAMIC study [ 43 ], conducted in the UK, there was no clear effect of molnupiravir on hospitalisation or death, despite the recruitment of 26,411 patients. Molnupiravir was, however, associated with a reduced time to recovery (although it was an open-label study) and a faster reduction in viral loads. Given the very low event rate for the primary endpoint, the PANORAMIC study was, despite its size, still underpowered. The time and expense required to conduct large phase III studies in vaccinated populations, and the difficulty of demonstrating efficacy using clinical endpoints in early infections, suggest that other approaches are needed for therapeutic assessment in COVID-19 (and other viral respiratory infections). The simple methodology described in this study provides one possible solution. It can be performed readily anywhere that accurate qPCR viral quantitation is available, and it gives a rapid comparative assessment with far fewer patients than clinical trials using currently used viral endpoints (e.g. time to clearance) [ 44 ]. Duplicate daily oropharyngeal swabs are well tolerated (whereas daily nasopharyngeal swabbing is not). The pharmacometric assessment can be used to characterise in vivo antiviral efficacy in real time and thereby inform the choice of drugs for large trials and therapeutic practice. Regulatory authority and treatment guideline decisions should be based upon evidence of in vivo antiviral efficacy, as well as in vitro evidence.
Brief summary

In the treatment of early symptomatic COVID-19, high-dose oral favipiravir did not accelerate viral clearance.

Background

Favipiravir, an anti-influenza drug, has in vitro antiviral activity against SARS-CoV-2. Clinical trial evidence to date is inconclusive. Favipiravir has been recommended for the treatment of COVID-19 in some countries.

Methods

In a multicentre open-label, randomised, controlled, adaptive platform trial, low-risk adult patients with early symptomatic COVID-19 were randomised to one of ten treatment arms, including high-dose oral favipiravir (3.6g on day 0 followed by 1.6g daily to complete 7 days of treatment) or no study drug. The primary outcome was the rate of viral clearance (derived under a linear mixed-effects model from the daily log 10 viral densities in standardised duplicate oropharyngeal swab eluates taken daily over 8 days [18 swabs per patient]), assessed in a modified intention-to-treat (mITT) population. The safety population included all patients who received at least one dose of the allocated intervention. This ongoing adaptive platform trial was registered at ClinicalTrials.gov (NCT05041907) on 13/09/2021.

Results

In the final analysis, the mITT population contained data from 114 patients randomised to favipiravir and 126 patients randomised concurrently to no study drug. Under the linear mixed-effects model fitted to all oropharyngeal viral density estimates in the first 8 days from randomisation (4,318 swabs), there was no difference in the rate of viral clearance between patients given favipiravir and patients receiving no study drug: a -1% (95% credible interval: -14 to 14%) difference. High-dose favipiravir was well tolerated.

Interpretation

Favipiravir does not accelerate viral clearance in early symptomatic COVID-19. The viral clearance rate estimated from quantitative measurements of oropharyngeal eluate viral densities assesses the antiviral efficacy of drugs in vivo with comparatively few studied patients.

Research in context

Evidence before this study
• The in vivo antiviral effect of favipiravir in patients with early symptomatic COVID-19 was not known.

Added value of this study
• High-dose favipiravir did not demonstrate antiviral activity in early symptomatic COVID-19.
• The rate of viral clearance derived from frequent oropharyngeal swabbing in early COVID-19 can be used to characterise in vivo antiviral efficacy.

Implications of all available evidence
• The in vivo antiviral activity of COVID-19 therapeutics should be used to inform policies and practices.

Supplementary Information

The online version contains supplementary material available at 10.1186/s12879-023-08835-3.
Acknowledgements

We thank all the patients with COVID-19 who volunteered to be part of the study. We thank the data and safety monitoring board (DSMB) (Tim Peto, André Siqueira, and Panisadee Avirutnan); the trial steering committee (TSC) (Nathalie Strub-Wourgaft, Martin Llewelyn, Deborah Waller, and Attavit Asavisanu); Sompob Saralamba and Tanaphum Wichaita for developing the RShiny randomisation app; and Mavuto Mukaka for invaluable statistical support. We also thank all the staff of the Clinical Trials Unit (CTU) at MORU; the PCR expert group (Janjira Thaipadungpanit, Audrey Dubot-Pérès and Clare Ling); Thermo Fisher for their excellent support with this project; all the hospital staff at the Hospital for Tropical Diseases, Faculty of Tropical Medicine, and at Bangplee (BP) and Vajira (VJ) hospitals; and those involved in sample processing at MORU and in processing and analysis at the Faculty of Tropical Medicine (FTM) molecular genetics laboratory. We thank the MORU Clinical Trials Support Group (CTSG) for data management, monitoring, ethics and regulatory submissions, and logistics; the purchasing, administration and support staff at MORU; and those at the Brazil site who provided expert help in managing patients (Joseane Fratari, Josiane Vaz, Fátima Brant and Lísia Esper).

Authors' contributions

V.L., J.A.W., S.B., W.H.K.S., and N.J.W. wrote the first draft of the manuscript. P.J., V.L., T.S., T.N., B.H., S.S., K.P., P.B., V.C., P.J.A., M.M., S.P., W.P., and W.P. were responsible for the collection of clinical data. T.N. was responsible for data curation. J.A.W. was responsible for the statistical analysis and the figures. S.K., W.M., M.Y.A., R.A.S., F.M.S., R.T., M.I., and K.C. were responsible for laboratory testing and analysis. V.K. and J.T. were responsible for trial set-up and monitoring. C.C. was responsible for coordination of the study in Brazil, and J.J.C. and S.B. for safety monitoring and document preparation. W.R.J.T., A.M.D., N.P.J.D., and N.J.W. supervised the study and gave scientific input. All authors reviewed the manuscript.

Funding

“Finding treatments for COVID-19: A phase 2 multi-centre adaptive platform trial to assess antiviral pharmacodynamics in early symptomatic COVID-19 (PLAT-COV)” is supported by the Wellcome Trust (grant ref: 223195/Z/21/Z) through the COVID-19 Therapeutics Accelerator.

Availability of data and materials

All code and data are openly accessible via GitHub: https://github.com/jwatowatson/PLATCOV-Favipiravir . The final datasets will be stored locally and securely at the Mahidol Oxford Tropical Medicine Research Unit for long-term storage and access. Additional anonymised participant data can be made available on a case-by-case basis by request to the MORU Data Access Committee at [email protected] or to the corresponding author.

Declarations

Ethics approval and consent to participate

All patients provided fully informed written consent. The trial was approved by local and national research ethics boards in Thailand (Faculty of Tropical Medicine Ethics Committee, Mahidol University, FTMEC Ref: TMEC 21–058; and the Central Research Ethics Committee, CREC, Bangkok, Thailand, CREC Ref: CREC048/64BP-MED34), in Brazil (Research Ethics Committee of the Universidade Federal de Minas Gerais, COEP-UFMG, Minas Gerais, Brazil; and the National Research Ethics Commission, CONEP, Brazil, COEP-UFMG and CONEP Ref: CAAE:51593421.1.0000.5149), and by the Oxford University Tropical Research Ethics Committee (OxTREC, Oxford, UK, OxTREC Ref: 24–21).
All methods were performed in accordance with the relevant guidelines and regulations (e.g. the Declaration of Helsinki).

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Infect Dis. 2024 Jan 15; 24:89
oa_package/62/82/PMC10789040.tar.gz
PMC10789041
38221620
Background

Multiple acyl-CoA dehydrogenase deficiency (MADD; ORPHAcode 26791), also known as glutaric aciduria type II (GAII), is an autosomal recessive disorder that occurs at a globally and ethnically variable prevalence of approximately 1 in 200,000 births [ 1 ]. It is caused by pathogenic variants (at least 400 are known) in the ETFA , ETFB and ETFDH genes, which encode the alpha and beta subunits of the electron transfer flavoprotein (ETF) and ETF-ubiquinone oxidoreductase (ETFQO; EC:1.5.5.1), respectively. Collectively, ETF and ETFQO are responsible for re-oxidising reduced mitochondrial flavin adenine dinucleotide (FADH 2 ), which in turn sustains mitochondrial fatty acid β-oxidation (FAO), amino acid catabolism and choline metabolism [ 2 , 3 ]. Clinically, MADD may be divided into three phenotypes. The first and second (MADD types I and II, respectively) are characterised by the neonatal onset of severe and often fatal symptoms with multi-system involvement (including leukodystrophy, hypotonia, cardiomyopathy, hepatomegaly, and renal abnormalities), as well as hypoglycaemia, hyperammonaemia, and metabolic acidosis (with/without ketosis) [ 4 – 6 ]. MADD types I and II may be distinguished from each other by the presence (type I) or absence (type II) of congenital abnormalities and are caused, at essentially equal frequencies, by pathogenic variants in ETFA , ETFB , and ETFDH [ 5 ]. By contrast, the third phenotype (MADD type III) has a later onset and presents with milder/delayed symptoms that are highly heterogeneous. Such symptoms may include recurrent/intermittent episodes of lethargy and vomiting; muscle, cardiac and/or liver involvement; as well as lipid storage myopathy, hypoglycaemia, metabolic acidosis, and/or hyperammonaemia. MADD type III is reported to be caused mainly by ETFDH variants and is also known as riboflavin (Rb)-responsive MADD owing to its amenability to treatment [ 5 , 7 ]. To estimate the burden of disease, a disease severity scoring system (MADD-DS3; MADD disease severity 3) may be used [ 6 ]. Moreover, disease severity is often linked to the patient's response to treatment, which includes a high-caloric diet with fat and protein restriction, the strict avoidance of fasting, and supplementation with L-carnitine and Rb (as first-line treatment) [ 8 , 9 ]. At present, South Africa (SA) does not have a compulsory newborn screening (NBS) programme. Instead, the biochemical diagnosis of MADD follows (i) a clinical presentation in symptomatic cases, or (ii) an abnormal NBS profile in asymptomatic cases, in which MADD is identified as a secondary condition. Symptomatic patients undergo a thorough metabolic work-up, including (i) urinary organic acids, (ii) plasma/serum acylcarnitines, and (iii) plasma/serum amino acids, and display a unique metabolic fingerprint, as described in a recent review by Mereis et al . [ 3 ]. As with other inherited metabolic disorders, final confirmation of the diagnosis is obtained by genetic analysis. In SA, as in most understudied populations, pathogenic variant screening for inherited metabolic disorders is less common. Such screening is, however, offered for a small selection of disorders (including glutaric aciduria type I, isovaleric acidaemia, galactosemia, and MPV17-related hepatocerebral mitochondrial DNA depletion syndrome) following the comprehensive description of cohorts in the literature.
Currently, substantial knowledge of the genotype–phenotype and biochemical profiles of MADD, and of the subsequent response to treatment, is not available for the different SA populations. Prior to this study, a novel pathogenic ETFDH variant (c.[1067G > A], p.[Gly356Glu]) and one previously described pathogenic variant (c.[1448C > T], p.[Pro483Leu]) were reported to result in MADD in the White SA population [ 10 ]. In the homozygous state, these variants cause MADD types I and III, respectively, and it was proposed by van der Westhuizen et al. [ 10 ] that the prevalence of c.[1067G > A] in this SA population could be the result of a founder effect. In our study, we aimed to address this hypothesis and these knowledge gaps by investigating a cohort of 14 recruited MADD patients, under the auspices of the International Centre for Genomic Medicine in Neuromuscular Diseases (ICGNMD). We extensively characterised the clinical phenotype, biochemical profile, and genetics of MADD in SA, providing knowledge that will contribute to its timely diagnosis, clinical management, and therapeutic intervention.
Methods

Cohort selection and sampling

Following informed consent/assent, the patients in this study were enrolled as part of the ICGNMD study, with ethical approval numbers 19/LO/1796 (HRA and HCRW), NWU-00966-19-A1 and NWU-00966-19-A1-01 (NWU), 296/2019 (University of Pretoria; UP), B19/01/002 (Stellenbosch University; SU), and 605/2020 (UCT). Probands and their affected/unaffected first-degree relatives were recruited either retrospectively or prospectively, based on a clinical and metabolic diagnosis of MADD, via one of three SA academic, state-funded hospitals: Steve Biko Academic Hospital, Tygerberg Hospital, and Red Cross War Memorial Children's Hospital. For (i) genetic, (ii) protein, and (iii) metabolic analyses, the following samples were obtained: (i) whole blood, saliva (living patients), or urine (deceased patients from whom no blood or saliva was available); (ii) primary skin fibroblasts (P5 and P9); and (iii) urine collected during the first metabolic presentation (P1, P4–P6, P8–P12 and P14) or during metabolic decompensation (P7), and after therapeutic management (P1–P11, P13 and P14).

Clinical and biochemical investigations

Patients were extensively evaluated by paediatric or adult neurologists according to protocols set forth by the ICGNMD. This included obtaining relevant demographic information, family history, medical history, and current treatment (Table 1). Moreover, deep phenotypic data (Additional file 1 ) were collected during a comprehensive clinical re-assessment of each proband. SDS-PAGE and Western blot analysis of primary skin fibroblasts, established from two adult patients (P5 and P9) and two healthy controls (matched to the patients in age, gender, and ethnicity), was performed to evaluate the steady-state level of ETFQO. Immunoblotting was conducted using primary antibodies at 1:1,000 directed against ETFQO (ab131376; Abcam, Cambridge, UK) and β-actin (as loading control; ab6276; Abcam), and an HRP-conjugated secondary antibody at 1:10,000 (ab97023; Abcam). Findings are shown in Additional file 3 . For targeted metabolic analyses, urine organic acids and their glycine conjugates were extracted, derivatised, and analysed using the 7890A gas chromatography system coupled to the 5977A MSD mass spectrometer (Agilent Technologies, California, USA), as described by Erasmus et al. [ 21 ] and refined by Reinecke et al. [ 22 ]. Data acquisition was facilitated with the 5977 MassHunter Data Acquisition software (B.07.04.2260; Agilent Software); the Automated Mass spectral Deconvolution and Identification System software (AMDIS v2.73; National Institute for Standards and Technology) was used to identify the component peaks and perform spectral deconvolution. Underivatised amino acids in urine were analysed using validated liquid chromatography-triple quadrupole mass spectrometry and the MassChrom ® Amino Acid Analysis kit (75111; Chromsystems, Gräfelfing, Germany). Samples were prepared via the Microlab STAR Liquid Handling System (Hamilton, Nevada, USA) and analysed on the Infinity 1290 II liquid chromatography system coupled to the 6470 Triple Quadrupole liquid chromatography-mass spectrometer (Agilent Technologies). For urine acylcarnitine analysis, electrospray ionisation-tandem mass spectrometry was performed using the 1290 Infinity II liquid chromatography system coupled to the 6410 Triple Quadrupole liquid chromatography-mass spectrometer (Agilent Technologies), as described by Pitt et al. [ 23 ].
A standard mixture of stable acylcarnitine isotopes was used for quantification [ 24 ]. Amino acid and acylcarnitine data were acquired and quantified using MassHunter Data Acquisition software and MassHunter QQQ Quantitative software (v10.00; Agilent Software), respectively. All reported processed data are shown in Additional file 2. Genetic analyses To identify potential pathogenic variants in the proband DNA, WES was performed at Macrogen Europe (Amsterdam, the Netherlands) using the NovaSeq 6000 platform (Illumina Inc., San Diego, USA) and the SureSelect Human All Exon V6 panel (5190-8865; Agilent Technologies). Reads (50 × average depth coverage) were aligned to the Genome Reference Consortium Human Build (GRCh) 38.p13, and variants were annotated according to GRCh38 using the Ensembl Variant Effect Predictor (v108) [ 25 ]. High-quality variants were filtered by applying the Genomics England PanelApp “Rhabdomyolysis and metabolic muscle disorders_1.57” panel [ 26 ]; the pathogenicity of potential disease-causing variants was evaluated and classified according to the ACMG guidelines [ 11 ]. All potential pathogenic variants identified were confirmed, and segregation analysis in family members was conducted, via Sanger sequencing. Allele frequency and haplotyping To estimate the allele (carrier) frequency of the two most frequently encountered variants in the cohort (c.[1067G > A] and c.[1448C > T]), PCR–RFLP analysis was performed as previously described [ 10 ] on newborn dried blood spot (DBS) samples from the four largest SA population groups—African, White SA, Indian, and mixed ethnicity. To this end, a total of 2,844 anonymised and randomised DBS samples per variant were screened: 594 representing mixed ethnicity and 750 representing each of the African, White SA, and Indian populations. The samples represented an equal distribution of healthy males and females and were kindly provided by the NBS laboratory at the NWU Centre for Human Metabolomics. Next, we investigated whether these two variants arose as founder mutations by determining the haplotype(s) of the affected individuals using a GSA v3.0 array (Illumina Inc.). Sample processing was performed at UCL Genomics (UCL Great Ormond Street Institute of Child Health, London, UK), according to the manufacturer's instructions; the resulting raw IDAT files were processed using GenomeStudio (v2.0.5; Illumina Inc.) and converted to PLINK (v1.9) format [ 27 ]. Samples and variants with a call rate below 90% were excluded from further analysis. Thereafter, samples were compared to determine familial relations and phased using Eagle (v2.4.1) [ 28 ]. Finally, haplotype sharing between samples was determined with the Germline2 software (v1.0) [ 29 ], and shared regions were visualised using R (v4.2.2).
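For readers unfamiliar with how immunoblot results such as those above are quantified, the following sketch shows the usual normalisation arithmetic: ETFQO band intensities are divided by the β-actin loading-control intensities and expressed relative to the mean of the healthy controls. The intensity values are hypothetical and chosen only to illustrate the calculation (they roughly reproduce the decreases reported in the Results below); they are not the study's densitometry data.

```python
# Hypothetical densitometry values (arbitrary units); ETFQO band
# intensities are normalised to the beta-actin loading control and
# expressed relative to the mean of the healthy controls.
etfqo = {"control_1": 1520.0, "control_2": 1480.0, "P5": 690.0, "P9": 400.0}
actin = {"control_1": 2010.0, "control_2": 1990.0, "P5": 2050.0, "P9": 1970.0}

normalised = {s: etfqo[s] / actin[s] for s in etfqo}
control_mean = (normalised["control_1"] + normalised["control_2"]) / 2

for sample in ("P5", "P9"):
    relative = normalised[sample] / control_mean
    print(f"{sample}: {relative:.0%} of control steady-state ETFQO "
          f"({1 - relative:.0%} decrease)")
```

Similarly, the panel-based variant filtering step can be illustrated with a short sketch that keeps only high-quality variants whose annotated gene symbol falls on the applied panel. The file names, the column layout (standard VEP-style SYMBOL and FILTER fields), and the abbreviated gene list are illustrative assumptions; this is not the actual ICGNMD pipeline or the full PanelApp panel content.

```python
import csv

# Hypothetical excerpt of the applied gene panel; the real panel is the
# Genomics England PanelApp "Rhabdomyolysis and metabolic muscle
# disorders" panel referenced in the text.
panel_genes = {"ETFDH", "ETFA", "ETFB", "FLAD1", "SLC52A2", "SLC52A3"}

def filter_to_panel(vep_tsv, out_tsv):
    """Keep passing variants whose VEP gene symbol is on the panel."""
    with open(vep_tsv) as src, open(out_tsv, "w", newline="") as dst:
        reader = csv.DictReader(src, delimiter="\t")
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames,
                                delimiter="\t")
        writer.writeheader()
        for row in reader:
            if row["SYMBOL"] in panel_genes and row["FILTER"] == "PASS":
                writer.writerow(row)

filter_to_panel("proband_vep.tsv", "proband_panel_filtered.tsv")
```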
Results and discussion Cohort Over a period of three years and using a retrospective and prospective recruitment strategy at three different centres across SA, 12 apparently unrelated families (ten of White SA and two of mixed ethnicity) with 14 clinically affected patients (five males and nine females) were recruited. All patients were born to non-consanguineous parents, and disease onset ranged from birth to 41 years, with four deaths recorded (at 9 days, 14 days, 3 months, and 23 years, respectively). Where possible, a consistent clinical re-evaluation was conducted according to ICGNMD guidelines; for most of the cases, biological samples acquired during acute metabolic presentation (n = 12/14) and following therapeutic management (n = 13/14) were collected/available and could be re-analysed (for the remainder, the original data were used, where available). It should be noted that SA patients of African and Indian ethnicity have previously been diagnosed with MADD on a biochemical level at the Centre for Human Metabolomics [North-West University (NWU)] and the NHLS Inherited Metabolic Disease Molecular Laboratory [University of Cape Town (UCT)]—the two SA facilities where metabolic testing is offered as a main service (unpublished data). However, while all possible efforts were made to include patients of these population groups, they were regrettably lost to follow-up prior to the study’s onset, and an opportunity for subsequent genetic confirmation did not present itself. Moreover, no new cases were identified in these population groups during this study. We therefore recognise a potential referral bias within the population groups included. It is our hope that this problem can be addressed by metabolic and genetic NBS in the future. Clinical, biochemical, and genetic features Overall, the SA MADD cohort displayed heterogeneous clinical presentations (Table 1 and Additional file 1), together with the characteristic diagnostic urinary metabolites associated with MADD (Table 2 and Additional file 2). While plasma/serum acylcarnitine and amino acid findings are typically used for diagnosis, this sample matrix was limited for the majority of the patients (sample collection and analyses were conducted over a 30-year period); consequently, the levels of these metabolites in urine are reported instead. The variants identified, their location in ETFQO, and their pathogenicity, according to the American College of Medical Genetics and Genomics (ACMG) criteria, are described in Table 3 [ 11 ]. The elucidation of these variants in the SA populations now enables rapid screening methods that were not available before, allowing the timely confirmation of the disorder (within hours) in symptomatic patients. This prompt identification of MADD-related variants can be used to aid the reproductive choices of carrier/affected patients and is especially valuable for prenatal testing and the testing of neonates who may require immediate therapy following birth (i.e., those affected with MADD type I/II). In all recruited cases, a resolved genotype–phenotype result was obtained. Three distinct groups were observed, based on disease severity and response to treatment, as discussed below. Severe, Rb-unresponsive MADD patient group The first group included three patients (P6, P8 and P12), who presented with the hallmark features of neonatal-onset MADD. Clinically, their symptoms were severe and progressive, and patients were metabolically and phenotypically unresponsive to treatment with Rb.
The first case was a male neonate (P6), born at 38 weeks to parents of White SA ethnicity, who presented within the first week of life with acute metabolic decompensation. Key clinical features included congenital cardiac abnormalities (pulmonary stenosis, a patent foramen ovale with a right-to-left shunt, and a large atrial septal aneurysm) and convulsions, with early neonatal death on day 9 of life. Metabolic profiling indicated the typical MADD biochemical fingerprint [ 12 ], including dicarboxylic aciduria, increased 2-hydroxyglutaric acid, elevated glycine conjugates (short-, short-branched-, and medium-chain-related), increased disease-associated acylcarnitine conjugates, as well as the characteristic increase in sarcosine on the amino acid profile. Both FAO and branched-chain amino acid catabolism were affected, correlating with previous studies on severe MADD cases [ 6 ]. The patient displayed a MADD-DS3 score of 30, further supporting a diagnosis of MADD type I. A homozygous variant that was novel at the time—c.[1067G > A] (p.[Gly356Glu])—with in silico and structural evidence of pathogenicity (ACMG classification: likely pathogenic) was identified in this neonate [ 10 ]. The second patient was a female infant (P8), born at term to parents of mixed ethnicity, who presented at birth with metabolic acidosis, hypotonia and feeding difficulties. Disease-specific metabolic markers were similar to those of P6, and the MADD-DS3 score was high (21). The patient succumbed at the age of three months. The homozygous c.[1067G > A] variant was subsequently identified; however, based on the lack of any reported congenital features (no post-mortem examination was performed), a diagnosis of MADD type I/II was given. The third case was a female neonate (P12), born at term to parents of mixed ethnicity. A urinary organic acid profile typical of MADD was confirmed by the hospital that made the diagnosis (no residual urine collected before or after treatment was available for re-analysis in this study). Clinical features included metabolic acidosis, hypoglycaemia, hyperammonaemia, pancytopaenia, acute kidney injury, hyponatraemia, and hypocalcaemia, and the patient succumbed at 14 days of age. Based on the absence of any reported congenital features (no post-mortem examination) and the high MADD-DS3 score of 24, a diagnosis of MADD type I/II was given. Whole exome sequencing (WES) and segregation analysis revealed that the patient was compound heterozygous for c.[976G > C];c.[1067G > A] (p.[Gly326Arg];p.[Gly356Glu]). Variant c.[976G > C] is classified as likely pathogenic according to the ACMG criteria and, to our knowledge, has been reported in only one late-onset case of Chinese ethnicity, by Xi et al. [ 13 ], as compound heterozygous with the common variant c.[250G > A] (p.[Ala84Thr]; ACMG classification: pathogenic). By contrast, the c.[1067G > A] variant has only been encountered in the SA population to date [ 10 ]. Considering the treatment-unresponsive metabolic profile (P6 and P12) and rapid clinical deterioration of patients P6, P8 and P12, it may be inferred that c.[1067G > A] is a highly pathogenic variant. Its presence on both alleles, or its bi-allelic combination with another variant affecting the same protein domain (the ubiquinone-binding domain) of ETFQO, appears to leave insufficient enzymatic compensation for adequate ETFQO activity.
While it is evident that L-carnitine supplementation facilitated the formation of disease-specific acylcarnitine conjugates, we hypothesise that the homozygous c.[1067G > A];c.[1067G > A] and compound heterozygous c.[976G > C];c.[1067G > A] genotypes result in an ETFQO protein whose folding cannot be sufficiently rescued/stabilised by Rb treatment. Moderate, variably Rb-responsive MADD patient group The second group included eight patients who presented with moderate, heterogeneous phenotypes, all showing a varying response to treatment. The onset of symptoms was observed in the neonatal period (P1), infancy (P2, P7, P9 and P10), and childhood (P3, P11 and P13), and all patients displayed the characteristic clinical features of MADD. These included metabolic decompensation (n = 5), muscle weakness (n = 4), muscle pain (n = 3), hypotonia (n = 5), neck flexor weakness (n = 5), susceptibility to fatigue (n = 2), restrictive ventilatory defect (n = 1), gastrointestinal involvement (n = 6), elevated creatine kinase (CK) (n = 2), recurrent infections (n = 2), lethargy (n = 2), cognitive disability (n = 2), delayed gross motor development (n = 2), migraine/paroxysmal headache (n = 3), seizures (n = 2), coma (n = 3), skeletal involvement (n = 2), and liver dysfunction (n = 4). Uncommon symptoms included ketosis at the time of metabolic crisis (n = 5) and Beevor’s sign (n = 1). Owing to limited data availability, the baseline urine organic acids of only five of the eight patients are reported; the data of P7 represent the urinary organic acids present upon considerable decompensation near the time of demise. At first presentation, patients P1, P7 and P9–P11 displayed an increase (to a variable extent) in the diagnostic urine organic acid markers associated with MADD, albeit less pronounced than that of P6, P8 and P12. Most of these patients had increased concentrations of urinary glutaric acid, ethylmalonic acid, dicarboxylic acids, and 2-hydroxyglutaric acid. The excretion of N-hexanoylglycine and, to a variable extent, branched-chain-related glycine conjugates was observed in most patients. Moreover, all eight patients displayed increased disease-associated urinary acylcarnitines and elevated sarcosine. The biomarker assessment correlated with previous observations in moderate cases, where FAO seems to be initially/mostly affected and branched-chain amino acid catabolism is influenced to a lesser extent [ 6 ]. It is important to note that the metabolic profiling was greatly dependent on the time of sample collection and that P3 and P9 had received L-carnitine treatment from an early stage in their lives due to the prior diagnosis of a sibling with MADD. Apart from P7, the clinical symptoms, together with most of the urine metabolites, improved upon dietary adjustment in combination with treatment with L-carnitine (P9 and P10), L-carnitine and Rb (P2, P3, P11 and P13), or L-carnitine, Rb, and coenzyme Q10 (P1). Carnitine conjugation indicated that accumulating acyl-CoAs were being detoxified via the carnitine transport system, which likely explains the less prominent MADD organic acid signature observed. By contrast, the metabolic response to treatment of P7—who succumbed to a stroke at the age of 23 years—was more comparable to that of severe MADD, a finding that correlated with the severity of the clinical presentation, as summarised in Table 1 and Additional file 1.
WES and segregation analysis by Sanger sequencing revealed four compound heterozygous variants in this group of White SA-ethnicity patients. These included: (i) c.[740G > T];c.[1448C > T] (p.[Gly247Val];p.[Pro483Leu]) in P13, (ii) c.[287dupA*];c.[1448C > T] (p.[Asp97Glyfs*24];p.[Pro483Leu]) in siblings P2 and P3, and (iii) c.[1067G > A];c.[1448C > T] (p.[Gly356Glu];p.[Pro483Leu]) in P1, P7, P9, P10, and P11. The novel c.[287dupA*] variant affects the third exon of ETFDH, leading to a premature stop codon, and is classified as likely pathogenic according to the ACMG criteria. The c.[740G > T] variant shares the same classification as c.[287dupA*], and has been reported only once before, as a compound heterozygous variant along with the likely pathogenic c.[389A > T] (p.[Asp130Val]) variant in a late-onset MADD case of Chinese ethnicity [ 14 ]. All variants identified in this group affect highly conserved amino acid residues. Once again, the MADD-DS3 scores confirmed that the disease burden increases when the c.[1067G > A] variant is present. This finding is corroborated by the steady-state level of ETFQO protein in skin fibroblasts of P5 (c.[1448C > T];c.[1448C > T]) and P9 (c.[1067G > A];c.[1448C > T]), which showed a 55% and 73% decrease, respectively, in comparison to fibroblasts from healthy controls (Additional file 3). Similarly, van der Westhuizen et al. [ 10 ] reported an 83% decrease in the steady-state level of ETFQO in muscle from a patient with a compound heterozygous c.[1067G > A];c.[1448C > T] genotype, when compared to a healthy control. In contrast to the severe, Rb-unresponsive MADD patient group, four of the five patients in this group carrying the c.[1067G > A] variant (P1, P9, P10 and P11) were found to be very amenable to treatment. It is, therefore, reasonable to conclude that when c.[1067G > A] is encountered as a compound heterozygous variant along with a variant that affects a different protein domain of ETFQO (e.g., c.[1448C > T]), sufficient enzymatic compensation occurs, which may allow for adequate ETFQO activity. Based on the onset of disease, a diagnosis of either MADD type II (P1) or type III (P2, P3, P7, P9, P10, P11 and P13) was given to the patients in this group. Mild, Rb-responsive MADD patient group The final group included three patients (P4, P5 and P14) of White SA ethnicity, who presented later in life with mild and non-progressive (treatment-related) phenotypes. Clinically, their symptoms were heterogeneous, but to some extent characteristic of MADD. The disease presentation included metabolic decompensation (n = 2), muscle weakness (n = 3), neck flexor weakness (n = 2), gastrointestinal involvement as the disease progressed (n = 1), elevated CK (n = 2), lethargy (n = 1), encephalopathy (n = 1), cerebral white matter abnormalities (n = 1), and liver dysfunction (n = 1), with two patients exhibiting ketosis. Initially, statin-induced myositis was suspected in P5, until hepatic features, including lipid deposits on the liver biopsy and raised transaminases, prompted a metabolic work-up. P5 and P14 displayed urine metabolites associated with MADD, including dicarboxylic aciduria, raised 2-hydroxyglutaric acid (P5), acylglycine conjugates (with less prominent branched-chain-related conjugates), short- and medium-chain acylcarnitines, as well as the presence of sarcosine on the amino acid profile.
As indicated for the moderate MADD group, FAO tends to be more affected than branched-chain amino acid catabolism in less severe cases [ 6 ], which correlated with our findings. The literature shows that plasma/serum acylcarnitine profiling is typically most informative when diagnosing a late-onset MADD case, as organic acid profiling may only be remarkable at the time of a metabolic crisis or during a catabolic state induced by fasting. However, acylcarnitine assessments have been inconclusive in some cases, particularly if the patient has insufficient free carnitine available to promote conjugation [ 5 , 12 ]. The latter has also led to false-negative NBS results in mild MADD cases, as reported by Lin et al. [ 15 ]. In our study, P14 showed a free carnitine concentration below the limit of detection and normal butyryl/isobutyryl-, isovaleryl- and glutarylcarnitine levels in plasma/serum (results not shown). Interestingly, this patient presented with severe ketosis, as well as a prominent increase in 2-hydroxyglutaric acid and cis-4-decenedioic acid with unremarkable increases in glutaric acid and ethylmalonic acid. These inconsistent urinary organic acid findings corroborate the observations of Goodman et al. [ 16 ] and support their suggestion to discontinue the use of the term “Glutaric aciduria type II,” as it may be diagnostically misleading. In addition, our data indicate that mild cases of MADD may be missed or incorrectly diagnosed (e.g., as a different fatty acid oxidation disorder) on a biochemical level, depending on the metabolic status (i.e., anabolic/catabolic status at the time of sample collection) and systemic free carnitine status of the patient. Consequently, this study emphasises the metabolite variation within this disease group and advocates for repeat testing if the clinical presentation and routine chemistry are suggestive of MADD. Following dietary adjustment and treatment with L-carnitine, Rb and coenzyme Q10 (P5 and P14), the clinical and biochemical aberrations of P14 essentially normalised. P4, however, presented with only increased ethylmalonic acid. This patient, the mother of P2 and P3, underwent metabolic testing following her children’s diagnosis and was on L-carnitine treatment at the time of sample collection. Clinical-biochemical improvements, specifically after therapeutic intervention, have been observed in several cases of late-onset MADD, and our clinical and metabolic results (although only three cases were included) strongly correlate with previous investigations [ 5 , 17 ]. Genetic analyses revealed that all three patients were homozygous for the known pathogenic c.[1448C > T] (p.[Pro483Leu]) variant. This genotype (c.[1448C > T];c.[1448C > T]) has been reported in numerous other cases that, similarly to this group, presented with adult-onset, Rb-responsive MADD [ 10 , 17 ]. The Rb-responsive nature of this variant is further corroborated by a study by Cornelius et al. [ 18 ], in which it was shown that ETFQO activity in c.[1448C > T]-modified HEK293 cells could be restored from ~45 to ~85% (corresponding to an increase from ~50 to ~80% of steady-state ETFQO) when moderately Rb-deficient cultures were treated with a saturating concentration of Rb. Therefore, considering the literature, together with the time of onset, severity, and treatment response of this group, a diagnosis of MADD type III was given to P4, P5 and P14, despite moderate MADD-DS3 scores.
Allele frequency spectrum and haplotypes To determine the allele frequency of the variants identified, PCR–RFLP analysis was used to screen for the two most frequently occurring variants, c.[1067G > A] and c.[1448C > T], in the four largest population groups in SA. The screening yielded no homozygous or heterozygous genotypes for either variant in any of the population groups assessed; the allele frequencies in the population groups investigated could therefore only be bounded as < 0.00067 (African, White SA, and Indian ethnicity) or < 0.00084 (mixed ethnicity) (Table 3). All five variants identified in the cohort were subsequently compared to gnomAD (v2.1.1) [ 19 ] and the Human Heredity and Health in Africa (H3Africa) project [ 20 ]. Of these variants, only two were recorded in gnomAD, displaying exome allele frequencies of < 0.0001 (c.[740G > T] and c.[1448C > T]); to date, none of the variants has been identified in the H3Africa data. The absence or extreme rarity of the variants in the above-mentioned population databases, together with the frequency at which they were identified in the SA cohort, strongly supports the pathogenicity of these five variants and their causative role in MADD. Haplotyping was performed on those patients from whom sufficient DNA could be obtained. Upon recruitment, patients and their families were invited to self-report their ethnicity and region of birth. Consequently, haplotyping was performed on ten patients of White SA ethnicity (P1–P6, P9–P11 and P13) and one patient of mixed ethnicity (P8) born in various geographical regions across SA (including the central, north-eastern, north-western, south-eastern, and south-western provinces). Apart from the siblings P2 and P3 and their mother P4, all patients were found to be unrelated down to the second degree. DNA samples from the patients harbouring the c.[1067G > A] variant (P1, P6, P8 and P9–P11) displayed variable lengths of a shared haplotype on one allele, with a minimal overlapping region of 7.2 Mb (Fig. 1). Of these samples, the two with a homozygous c.[1067G > A];c.[1067G > A] genotype (P6 and P8) exhibited homozygosity for the shared haplotype in the region of ETFDH. Similar results were obtained for the DNA samples of the patients harbouring the c.[1448C > T] variant [P1, P4 (including P2 and P3 due to their first-degree relation), P5, P9–P11 and P13], with a minimal overlapping region of 4 Mb (Fig. 2). Again, the two patients with a homozygous c.[1448C > T];c.[1448C > T] genotype (P4 and P5) displayed homozygosity for the shared haplotype in the ETFDH region. These findings suggest the presence of two separate founder haplotypes on which the c.[1067G > A] and c.[1448C > T] variants arose, with the former likely having arisen in the White SA population. However, without access to additional control data from the same population(s), it is currently not possible to estimate the haplotype frequencies with confidence.
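The reported upper bounds follow directly from the screening sample sizes: with zero carriers observed among N individuals (2N screened alleles), the allele frequency can only be bounded above by 1/(2N). A short worked example using the sample sizes given above (our reading of how the < 0.00067 and < 0.00084 figures arise):

```python
# Upper-bound allele frequency when 0 carriers are observed in N individuals.
cohorts = {"African": 750, "White SA": 750, "Indian": 750,
           "Mixed ethnicity": 594}

for group, n_individuals in cohorts.items():
    upper_bound = 1 / (2 * n_individuals)   # 2N alleles screened per variant
    print(f"{group}: allele frequency < {upper_bound:.5f} "
          f"(0 carriers in {2 * n_individuals} alleles)")
# African / White SA / Indian: < 0.00067; mixed ethnicity: < 0.00084
```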
Conclusion This study provides the first extensive clinical and biochemical profiles, along with the genetic aetiology, of MADD in the diverse and understudied SA population. Within the cohort investigated, we describe one novel (c.[287dupA*]) and four previously identified causal variants in ETFDH, with an autosomal recessive inheritance pattern. In addition, the data support the suspicion that MADD is more prevalent in the White population of SA [ 10 ]. We also demonstrate a distinct shared haplotype for each of the two most common variants (c.[1067G > A] and c.[1448C > T]), suggesting the existence of a founder effect for each, with that of c.[1067G > A] likely having arisen in the White SA population. Furthermore, we show that, depending on the variant(s) involved, it is possible to anticipate the clinical progression and treatment response of the patient, which underscores the need for genetic confirmation in SA. It is our belief that this genotype–phenotype correlation of MADD in the SA population will assist physicians—who rarely encounter this disorder—to recognise and treat MADD in a more efficient manner. In addition, our study provides background for the subsequent genetic counselling of families and patient-specific treatment of local MADD cases. Altogether, the data reported support a policy of including MADD in the NBS programme of SA.
Background Multiple acyl-CoA dehydrogenase deficiency (MADD) is an autosomal recessive disorder resulting from pathogenic variants in three distinct genes, with most of the variants occurring in the electron transfer flavoprotein-ubiquinone oxidoreductase gene (ETFDH). Recent evidence of potential founder variants for MADD in the South African (SA) population initiated this extensive investigation. As part of the International Centre for Genomic Medicine in Neuromuscular Diseases study, we recruited a cohort of patients diagnosed with MADD from academic medical centres across SA over a three-year period. The aim was to extensively profile the clinical, biochemical, and genomic characteristics of MADD in this understudied population. Methods Clinical evaluations and whole exome sequencing were conducted on each patient. Metabolic profiling was performed before and after treatment, where possible. The recessive inheritance and phase of the variants were established via segregation analyses using Sanger sequencing. Lastly, the haplotypes and allele frequencies were determined for the two main variants in the four largest SA populations. Results Twelve unrelated families (ten of White SA and two of mixed ethnicity) with clinically heterogeneous presentations in 14 affected individuals were observed, and five pathogenic ETFDH variants were identified. Based on disease severity and treatment response, three distinct groups emerged. The most severe and fatal presentations were associated with the homozygous c.[1067G > A];c.[1067G > A] and compound heterozygous c.[976G > C];c.[1067G > A] genotypes, causing MADD types I and I/II, respectively. These, along with three less severe compound heterozygous genotypes (c.[1067G > A];c.[1448C > T], c.[740G > T];c.[1448C > T], and c.[287dupA*];c.[1448C > T]), which result in MADD types II/III, presented before the age of five years, depending on the time and maintenance of intervention. By contrast, the homozygous c.[1448C > T];c.[1448C > T] genotype, which causes MADD type III, presented later in life. Except for the type I, I/II and II cases, urinary metabolic markers for MADD improved/normalised following treatment with riboflavin and L-carnitine. Furthermore, genetic analyses of the most frequent variants (c.[1067G > A] and c.[1448C > T]) revealed a shared haplotype for each in the region of ETFDH, with SA population-specific allele frequencies of < 0.00067–0.00084. Conclusions This study reveals the first extensive genotype–phenotype profile of a MADD patient cohort from the diverse and understudied SA population. The pathogenic variants and associated variable phenotypes were characterised, which will enable early screening, genetic counselling, and patient-specific treatment of MADD in this population. Supplementary Information The online version contains supplementary material available at 10.1186/s13023-023-03014-8.
Abbreviations
ACMG: American College of Medical Genetics and Genomics
AMDIS: Automated Mass spectral Deconvolution and Identification System
CK: Creatine kinase
CoA: Coenzyme A
DBS: Dried blood spot
DNA: Deoxyribonucleic acid
EC: Enzyme Commission number
ETF: Electron transfer flavoprotein
ETFA: Electron transfer flavoprotein subunit alpha (gene)
ETFB: Electron transfer flavoprotein subunit beta (gene)
ETFDH: Electron transfer flavoprotein-ubiquinone oxidoreductase (gene)
ETFQO: Electron transfer flavoprotein-ubiquinone oxidoreductase
FADH2: Flavin adenine dinucleotide (reduced)
FAO: Mitochondrial fatty acid β-oxidation
GA II: Glutaric aciduria type II
gnomAD: Genome Aggregation Database
GRCh: Genome Reference Consortium Human Build
H3Africa: Human Heredity & Health in Africa
HCRW: Health and Care Research Wales
HEK293: Human embryonic kidney 293 cell line
HRA: Health Research Authority
HRP: Horseradish peroxidase
ICGNMD: International Centre for Genomic Medicine in Neuromuscular Diseases
MADD: Multiple acyl-CoA dehydrogenase deficiency
MADD-DS3: Multiple acyl-CoA dehydrogenase deficiency-disease severity 3
Mb: Megabase(s)
MPV17: Mitochondrial inner membrane protein MPV17
MRC: Medical Research Council
n: Number of patients
NBS: Newborn screening
NHLS: National Health Laboratory Service
NHS: National Health Service
NIHR: National Institute for Health and Care Research
NRF: National Research Foundation
NWU: North-West University
NWU-HREC: North-West University Health Research Ethics Committee
P: Patient
PCR–RFLP: Polymerase chain reaction–restriction fragment length polymorphism
Rb: Riboflavin
SA: South Africa(n)
SAMRC: South African Medical Research Council
SDS-PAGE: Sodium dodecyl sulfate-polyacrylamide gel electrophoresis
SU: Stellenbosch University
UCL: University College London
UCT: University of Cape Town
UK: United Kingdom
UP: University of Pretoria
USA: United States of America
WES: Whole exome sequencing

Acknowledgements We would like to thank the patients and their families, as well as all the referring physicians for their assistance. We would also like to acknowledge the staff of the Centre for Human Metabolomics, NWU (Potchefstroom, South Africa), Central Analytical Facilities, SU (Stellenbosch, South Africa), Macrogen Europe (Amsterdam, The Netherlands), and UCL Genomics (UCL Great Ormond Street Institute of Child Health, London, UK) for service provision, including data acquisition and analysis. This work was supported by a Medical Research Council (MRC) strategic award to establish an International Centre for Genomic Medicine in Neuromuscular Diseases (ICGNMD; MR/S005021/1), the National Research Foundation (NRF) of South Africa (121311 and PMDS22080748827) as well as the South African Medical Research Council (SAMRC).
Author contributions MB: Conceptualisation, Design, Data Acquisition, Data Analysis, Data Interpretation, Article Drafting, Article Revision; IS: Conceptualisation, Design, Data Acquisition, Data Analysis, Data Interpretation, Article Drafting, Article Revision; MD: Data Acquisition, Data Analysis, Data Interpretation, Article Drafting, Article Revision; MS: Data Acquisition, Data Analysis, Data Interpretation, Article Drafting, Article Revision; BCV: Data Acquisition, Data Analysis; GvdW: Data Acquisition, Data Analysis, Data Interpretation; CS: Data Acquisition, Data Analysis, Data Interpretation; KN: Data Acquisition, Data Analysis, Data Interpretation; FH: Data Acquisition, Data Analysis, Data Interpretation; SM: Data Acquisition, Data Analysis, Data Interpretation; RM: Data Interpretation; RWT: Data Interpretation; KP: Data Acquisition, Data Analysis, Data Interpretation; MRF: Data Acquisition, Data Analysis, Data Interpretation; JV: Data Acquisition, Data Analysis, Data Interpretation, Article Drafting; The ICGNMD Consortium: Data Acquisition, Data Analysis, Data Interpretation, Article Revision; RJAW: Design, Data Interpretation, Article Drafting, Article Revision; FHvdW: Conceptualisation, Design, Data Acquisition, Data Analysis, Data Interpretation, Article Drafting, Article Revision. The submitted work has been approved by all contributing authors. Funding Open access funding provided by North-West University. This work was supported by a Medical Research Council (MRC) strategic award to establish an International Centre for Genomic Medicine in Neuromuscular Diseases (ICGNMD; MR/S005021/1). MB is supported in part by the National Research Foundation (NRF) of South Africa (121311 and PMDS22080748827) as well as the South African Medical Research Council (SAMRC). RM and RWT are funded by the Wellcome Centre for Mitochondrial Research (203105/Z/16/Z), the Mitochondrial Disease Patient Cohort (UK) (G0800674), the Medical Research Council International Centre for Genomic Medicine in Neuromuscular Disease (MR/S005021/1), the Lily Foundation, the UK NIHR Biomedical Research Centre for Ageing and Age-related Disease award to the Newcastle upon Tyne Foundation Hospitals NHS Trust, and the UK NHS Highly Specialised Service for Rare Mitochondrial Disorders of Adults and Children. RWT also receives support from the MRC (MR/W019027/1), Mito Foundation, and the Pathological Society (UK). Funders had no role in the study design, data collection and analysis, data interpretation, decision to publish, or preparation of the manuscript. Availability of data and materials Previous data and samples were made available by the Centre for Human Metabolomics (NWU), SU, and UCT. New samples were collected with the help of paediatric and adult neurologists via Steve Biko Academic Hospital, Tygerberg Hospital, and Red Cross War Memorial Children’s Hospital. The datasets generated and/or analysed during the current study are not publicly available due to the data sharing policy of the ICGNMD study, but are available from the corresponding author on reasonable request. 
Declarations Ethics approval and consent to participate The patients in this study were enrolled as part of the ICGNMD study with ethical approval numbers 19/LO/1796 [Health Research Authority (HRA) and Health and Care Research Wales (HCRW)], NWU-00966-19-A1 and NWU-00966-19-A1-01 [Health Research Ethics Committee (NWU-HREC) of the Faculty of Health Sciences, North-West University], 296/2019 (Faculty of Health Sciences Research Ethics Committee, University of Pretoria), B19/01/002 (Health Research Ethics Committee, Stellenbosch University), and 605/2020 (Faculty of Health Sciences Human Research Ethics Committee, University of Cape Town), following informed consent/assent. Consent for publication The informed consent/assent obtained covers the publication of our research data; the consent documentation is available upon request. Competing interests The authors declare that they have no competing interests.
Background The fast-growing high-throughput sequencing technology has made DNA and RNA sequencing more efficient and accessible, resulting in a large collection of multi-omics data that makes molecular profiling possible. Due to the heterogeneity of cancer and the complexity of the underlying biological processes, employing multi-omics sequencing data is crucial for more accurate cancer classification and tumor profiling. Many researchers have proposed methods that incorporate multi-omics data for either cancer type classification or cell type clustering [ 1 – 11 ]. These methods show that utilizing multi-omics data improves performance and provides a better understanding of the key pathophysiological pathways across different molecular layers [ 12 ]. A typical multi-omics dataset generated from DNA and RNA sequencing usually consists of mRNA expression, microRNA (miRNA) expression, copy number variation (CNV), and DNA methylation [ 13 ]. The differences in data distributions across omics, and the complex inter-omics and intra-omic connections (a given omic can act as a promoter or suppressor of genes), add further challenges to developing an integrative multi-omics classification method for cancer molecular subtypes. Recent studies have shown that cancer taxonomy based on molecular subtypes can be crucial for precision oncology [ 13 , 14 ]. An accurate cancer molecular subtype classifier is crucial for early-stage diagnosis, prognosis, and drug development. Traditional cancer taxonomy is based on tissue of origin. In 2014, The Cancer Genome Atlas (TCGA) Research Network proposed a new clustering method for cancers based on integrated molecular subtypes that share mutations, copy-number alterations, pathway commonalities, and micro-environment characteristics instead of their tissue of origin [ 13 ]. They found 11 subtypes from 12 cancer types. In 2018, they applied the new taxonomy method to 33 cancer types and found 28 molecular subtypes [ 15 ]. The new cancer taxonomy provides better insight into the heterogeneous nature of cancer. With recent developments in deep learning, data-driven models benefit from the powerful feature extraction capability of deep neural networks in many fields [ 16 – 19 ]. Most multi-omics integrative models employ an early fusion approach, which aggregates multi-omics data (mainly by concatenation) and then applies a deep neural network as a feature extractor, or a late fusion approach, which first extracts features from each omic with deep neural networks and then aggregates the extracted features as inputs to a classification network. For efficient implementation of multi-omics integrative models, convolutional neural networks (CNNs) are widely used [ 20 ]. Traditional deep neural networks are based on the assumption that the inner structure of the data is Euclidean [ 21 ]. Because of the complex interactions across many biological processes, such a structure is not a proper representation of bio-medical data, and researchers have proposed graph-based data structures to tackle this limitation. In 2016, a graph convolutional network (GCN), ChebNet, was proposed [ 16 ]. It uses Chebyshev polynomials as localized learning filters to extract graph feature representations. In 2017, Petar Velickovic et al. proposed the graph attention network (GAT), which overcomes GCN's disadvantage of dependence on the Laplacian eigenbasis [ 22 ]. GAT uses masked self-attention layers to enable nodes to attend over their neighborhoods' features [ 22 ].
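To make the masked self-attention mechanism concrete, the following PyTorch sketch implements a minimal single-head GAT layer: attention logits are computed for all node pairs, and the softmax is taken only over each node's graph neighborhood. This is an illustrative rendering of the published mechanism, not code from any of the cited studies.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # shared linear transform
        self.a = nn.Linear(2 * out_dim, 1, bias=False)   # attention scoring vector

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) binary adjacency with self-loops
        h = self.W(x)                                    # (N, out_dim)
        n = h.size(0)
        h_i = h.unsqueeze(1).expand(n, n, -1)            # node i varies along dim 0
        h_j = h.unsqueeze(0).expand(n, n, -1)            # node j varies along dim 1
        e = F.leaky_relu(self.a(torch.cat([h_i, h_j], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))       # mask non-neighbors
        alpha = torch.softmax(e, dim=1)                  # attention over neighbors
        return F.elu(alpha @ h)                          # aggregate neighbor features

# Toy usage: 4 nodes, 8 input features, a ring graph with self-loops.
adj = torch.eye(4) + torch.roll(torch.eye(4), 1, 0) + torch.roll(torch.eye(4), -1, 0)
out = GATLayer(in_dim=8, out_dim=16)(torch.randn(4, 8), adj)
print(out.shape)  # torch.Size([4, 16])
```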
With the recent growing interest in graph neural networks, many graph-based classification methods have been proposed in the bio-medical field. To utilize the power of graph-structured data, Ramirez et al. proposed a GCN method that uses intra-omic connections, protein-protein interaction networks, and gene co-expression networks. The model achieves a 94.71% classification accuracy for 33 cancer types and normal tissue on TCGA data [ 23 ]. To use the intra-omic connections across multiple omics, Wang et al. proposed MOGONET, a late-fusion GCN-based method that integrates multi-omics data for bio-medical data classification, achieving 80.61% accuracy on breast cancer subtype classification with the BRCA dataset [ 5 ]. To compensate for the limitation that GCN only extracts local representations on the graph, Li et al. proposed a parallel-structured GCN-based method that utilizes a gene-based prior knowledge graph for cancer molecular subtype classification [ 1 ]. There are also other ways to structure the graph. Wang et al. proposed a GCN-based method that uses a KNN-generated cell-cell similarity graph for single-cell sequencing data classification [ 24 ]. Since its introduction in 2017, GAT has gained increasing interest. Shanthamallu et al. proposed a GAT-based method, GrAMME, with two variations that use a supra-graph approach and a late-fusion approach to extract features from a multi-layer graph with intra-omic connections only, for classification on social science and political science datasets [ 25 ]. On the other hand, Kaczmarek et al. proposed a multi-omics graph transformer that utilizes a graph with inter-omics connections only, the miRNA-gene target network, for cancer classification on 12 cancer types from the TCGA data [ 7 ]. There are three common disadvantages of these approaches. First, most of them consider only one kind of connection in their model, either inter-omics or intra-omic; they do not utilize both for more effective feature extraction. Second, they consider only one kind of GNN model, either GCN or GAT. We find that GAT and GCN have their strengths in different scenarios, as shown in our experiments; different graph layers are preferred for different tasks even with datasets in a similar domain. Third, most of these methods have not been tested on a more complex classification task. They are used for classification based on the cell-of-origin taxonomy, such as cancer type classification, and have not been applied to a more complex task such as cancer molecular subtype classification, which is more useful for diagnosis, prognosis, and treatment. Inspired by our previous work on cancer molecular subtype classification based solely on intra-omic connections, we aim to develop a multi-omics integrative framework that exploits the powerful data aggregation property of GCN or GAT models (depending on the situation) and utilizes both the intra-omic network and the inter-omics network for more precise classification. Our goal is to build an accurate, robust, and efficient multi-omics integrative predictive model to classify cancer molecular subtypes. In this work, we propose a general framework that can be used with any graph neural network as the feature extractor, incorporates both gene-based and non-gene-based prior biological knowledge (primarily miRNA), and learns a knowledge graph consisting of both intra-omic and inter-omics connections. 
We apply the proposed model to classify cancer molecular subtypes and breast cancer molecular subtypes. We choose breast cancer as it is one of the most common and lethal cancers, with a large number of samples in TCGA. It can be categorized into four major molecular subtypes based on the gene expression of the cancer cells, and breast cancer subtypes have a significant impact on patient survival rates [ 26 ]. Our experimental results show the proposed method outperforms both graph-based and CNN-based state-of-the-art methods. Our contributions in this study are (i) a novel generalized GNN-based multi-omics integrative framework for cancer molecular subtype classification, (ii) a supra-graph approach that can incorporate both intra-omic and inter-omics prior biological knowledge in the form of graphs, (iii) a representation of multi-omics data in the form of a heterogeneous multi-layer graph, and (iv) a comparative analysis of GCN- and GAT-based models at different combinations of omics and different graph structures.
Method and materials The overview of the proposed framework structure is shown in Fig. 1 . The input data for the proposed framework is shown as a graph structure on the leftmost side. The data consists of three omics: mRNA expression (orange boxes), copy number variation (CNV) (yellow boxes), and miRNA (green boxes). The details of the network structure are discussed in the following Network Section. The proposed framework consists of 4 major modules: Module (1) a linear dimension-increase neural network, Module (2) a graph neural network (GNN), Module (3) a decoder, and Module (4) a shallow parallel network. Any kind of graph neural network can be used in Module 2. In this study, we focus on the graph convolutional network (GCN) and the graph attention network (GAT), which are two major kinds of GNN. Experiments on the effect of the decoder and the shallow parallel network modules are discussed in our ablation study. Network We build a heterogeneous multi-layer graph based on prior biological knowledge, i.e. the gene-gene interaction (GGI) network from BioGrid and the miRNA-gene target network from miRDB [ 27 , 28 ]. Inspired by the meta-path and supra-graph approach for multi-layered network models [ 25 , 29 ], we build a supra-graph with miRNA-miRNA meta-paths. A miRNA-miRNA meta-path is defined between two miRNA nodes that are connected to the same gene node through the GGI network and the miRNA-gene network. An example of how we construct the supra-graph is shown in Fig. 2 . Meta-paths are shown as dotted lines in the figure. The adjacency matrix of the supra-graph is an $(N+M) \times (N+M)$ matrix, where N is the number of genes and M is the number of miRNAs. Every node in the graph is assumed to be self-connected, thus the diagonal elements of the adjacency matrix in this study are 1. The adjacency matrix of the supra-graph is shown in Eq. ( 1 ): $$A = \begin{bmatrix} A_{gene} & A_{gene\text{-}miRNA} \\ A_{gene\text{-}miRNA}^{T} & A_{miRNA} \end{bmatrix}, \quad (1)$$ where $A_{gene} \in \{0,1\}^{N \times N}$ is the GGI adjacency matrix, $A_{miRNA} \in \{0,1\}^{M \times M}$ is the miRNA meta-path adjacency matrix, and $A_{gene\text{-}miRNA} \in \{0,1\}^{N \times M}$ is the miRNA-gene target adjacency matrix. We also construct four different kinds of graphs other than the supra-graph in our ablation study and apply them to five input combinations of omics: mRNA, miRNA, mRNA + miRNA, mRNA + CNV, and mRNA + miRNA + CNV, to test the effect of the different graphs on model performance. The four different graphs are defined as follows. Only Gene-based Nodes When the input combination of omics is mRNA or mRNA + CNV, the graph is built with the GGI network only ($A = A_{gene}$). Only miRNA-based Nodes When the input combination of omics is miRNA, the graph is built with the miRNA meta-path network only ($A = A_{miRNA}$). Only Intra-class Edges The graph contains only the GGI network and the miRNA meta-path network. Only Inter-class Edges The graph contains only the miRNA-gene target network. The input graph is denoted as a tuple $G = (V, E, X)$, where V is the set of nodes, E is the set of edges, and X is the set of node attributes. The prior knowledge is incorporated into the model through the supra-graph defined above. In the supra-graph, nodes consist of both gene-based nodes and miRNA-based nodes, and edges are assigned by the adjacency matrix. Each gene-based node has a node attribute vector consisting of both gene expression and CNV data, $x_v \in \mathbb{R}^2$. Each miRNA-based node has a scalar node attribute, $x_v \in \mathbb{R}$. The gene-based nodes and miRNA-based nodes are fed through a linear dimension-increase layer, denoted as Module 1 in Fig. 1 , to achieve the same node attribute dimension $x_v \in \mathbb{R}^F$, where F is the increased node attribute dimension. Graph neural network: convolution-based As mentioned before, any graph neural network can be used in the GNN module. We use ChebNet [ 16 ] to implement the GCN in this study. 
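Before turning to the ChebNet details below, the supra-graph construction just described can be sketched as follows; this is a minimal illustration with assumed binary input matrices (not the authors' data loaders), deriving the miRNA-miRNA meta-path edges from shared target genes.

```python
import numpy as np

def build_supra_graph(a_gene, a_mirna_gene):
    """Assemble the supra-graph adjacency matrix of Eq. (1).
    a_gene:       (N, N) binary gene-gene interaction (GGI) adjacency
    a_mirna_gene: (M, N) binary miRNA-gene target adjacency
    Returns an (N + M) x (N + M) binary adjacency with self-loops."""
    n, m = a_gene.shape[0], a_mirna_gene.shape[0]
    # miRNA-miRNA meta-path: link two miRNAs that target a common gene.
    a_mirna = ((a_mirna_gene @ a_mirna_gene.T) > 0).astype(np.int8)
    a = np.zeros((n + m, n + m), dtype=np.int8)
    a[:n, :n] = a_gene                   # intra-omic: gene-gene edges
    a[n:, n:] = a_mirna                  # intra-omic: miRNA meta-path edges
    a[:n, n:] = a_mirna_gene.T           # inter-omics: gene-to-miRNA target edges
    a[n:, :n] = a_mirna_gene             # inter-omics: miRNA-to-gene target edges
    np.fill_diagonal(a, 1)               # every node is self-connected
    return a
```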
The supra-graph adjacency matrix A introduced in the previous Network section is first Laplacian normalized to $L$ as expressed in Eq. ( 4 ): $$L = I_{N+M} - D^{-1/2} A D^{-1/2}, \quad (4)$$ where $I_{N+M}$ is an identity matrix and the degree matrix $D$, with $D_{ii} = \sum_{j} A_{ij}$, is a diagonal matrix. The eigendecomposition of $L$ can be obtained as $L = U \Lambda U^{T}$, where $U$ is a matrix of orthonormal eigenvectors of $L$, therefore $U U^{T} = I$, and $\Lambda$ is the diagonal eigenvalue matrix [ 16 ]. After transforming the graph to the Fourier domain, the learning filter can be approximated by a K-th order Chebyshev polynomial. The convolution on the graph by such a localized learning filter can be expressed as in Eq. ( 6 ): $$y_j = \sum_{k=0}^{K} \theta_k T_k(\tilde{L}) \, x_j, \quad (6)$$ where $x_j$ is the feature vector of the j-th sample, $\tilde{L} = 2L/\lambda_{max} - I$ is the rescaled Laplacian, and $T_k(\tilde{L}) = 2\tilde{L}\,T_{k-1}(\tilde{L}) - T_{k-2}(\tilde{L})$ with $T_0(\tilde{L}) = I$ and $T_1(\tilde{L}) = \tilde{L}$. K is a hyper-parameter. A max-pooling layer is used to reduce the number of nodes, and one layer of a fully connected network is used to transform the learned local feature representation into a vector of length 64 for each sample, $z_j^{local} \in \mathbb{R}^{64}$. Graph-neural network: attention-based GAT aims to solve the problem of GCN’s dependence on the Laplacian eigenbasis of the graph adjacency matrix [ 22 ]. The updated node attributes are first passed through a linear transformation by a learnable weight matrix $W \in \mathbb{R}^{F' \times F}$, where F is the updated node attribute dimension and $F'$ is the intended output dimension for this GAT layer. Then, the self-attention coefficients for each node pair can be calculated as in Eq. ( 7 ): $$e_{ij} = a\left(W h_i, W h_j\right), \quad (7)$$ where $e_{ij}$ represents the importance of node j to node i and $h_i$, $h_j$ are the node attributes of nodes i and j. Such an attention score is only calculated for $j \in NB(i)$, where NB ( i ) is the set of first-order neighbor nodes around node i . The method normalizes the attention scores with a softmax and uses LeakyReLU as the activation function, as expressed in Eq. ( 8 ): $$\alpha_{ij} = \frac{\exp\left(\mathrm{LeakyReLU}\left(a^{T}\left[W h_i \,\|\, W h_j\right]\right)\right)}{\sum_{k \in NB(i)} \exp\left(\mathrm{LeakyReLU}\left(a^{T}\left[W h_i \,\|\, W h_k\right]\right)\right)}. \quad (8)$$ The output for each node can be expressed as in Eq. ( 9 ): $$h_i' = \sigma\left(\sum_{j \in NB(i)} \alpha_{ij} W h_j\right). \quad (9)$$ A multi-head attention mechanism is used to stabilize the attention scores; in our study, the number of heads is 8. Similar to the GCN-based GNN module, the output is then passed through a max-pooling layer and a transformation layer to obtain the local graph representation $z_j^{local}$. Decoder and shallow parallel network As shown in Fig. 1 , the decoder is a two-layer fully connected network that is used to reconstruct the node attributes on the input graph. To compensate for the localization property of either the GCN or GAT layer in the GNN module, we use a parallel shallow fully connected network. Since the prior knowledge graphs have many limitations [ 1 ], we may neglect some global patterns in the data when extracting features based on the graph structure alone. A shallow two-layer fully connected network is able to learn the global features of the data while ignoring the actual inner structure of the data. These two modules help the framework better extract the overall sample feature representation. The effect of including vs. excluding these two modules is discussed in detail in the Ablation Study Section. The input of the parallel network is the updated node attributes, and its output, the global representation of the sample $z_j^{global}$, has the same dimension as the local feature representation $z_j^{local}$ from the GNN module. $z_j^{local}$ and $z_j^{global}$ are then concatenated and passed through a classification layer for prediction. Loss function In the proposed framework, we define the loss function L as a linear combination of three loss functions in Eq. ( 10 ): $$L = \alpha L_{ce} + \beta L_{mse} + \gamma L_{norm}, \quad (10)$$ 
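To make the GNN module concrete, here is a minimal sketch (a simplification, not the released implementation) of Module 2 in PyTorch Geometric. The layer type is switchable between ChebConv and GATConv; K=2 is an arbitrary placeholder for the unstated Chebyshev order, and a global max pool stands in for the paper's node-reducing max-pooling layer.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import ChebConv, GATConv, global_max_pool

class GNNModule(nn.Module):
    """Module 2: graph feature extractor with a switchable layer type."""
    def __init__(self, in_dim, hidden_dim, out_dim=64, kind="cheb", K=2, heads=8):
        super().__init__()
        if kind == "cheb":
            # K-th order Chebyshev filter; K=2 is an arbitrary placeholder here.
            self.conv = ChebConv(in_dim, hidden_dim, K=K)
        else:
            # 8 attention heads, averaged so the output stays hidden_dim wide.
            self.conv = GATConv(in_dim, hidden_dim, heads=heads, concat=False)
        self.fc = nn.Linear(hidden_dim, out_dim)   # to the 64-dim local representation

    def forward(self, x, edge_index, batch):
        h = torch.relu(self.conv(x, edge_index))
        h = global_max_pool(h, batch)              # stands in for the node-reducing max pool
        return self.fc(h)                          # z_local, one 64-vector per sample graph
```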
where $\alpha$, $\beta$, and $\gamma$ are linear weights, $L_{ce}$ is the standard cross-entropy loss for the classification results, $L_{mse}$ is the mean squared error used as the reconstruction loss when the decoder is included, and $L_{norm}$ is the squared $\ell_2$ norm of the model parameters, which penalizes the number of parameters to avoid overfitting. $L_{mse}$ is defined as $$L_{mse} = \frac{1}{n} \sum_{j=1}^{n} \left\| x_j - \hat{x}_j \right\|_2^2,$$ where $x_j$ is the flattened feature vector of the j-th sample and $\hat{x}_j$ is the corresponding reconstructed vector. We denote $\Theta$ as the vector consisting of all parameters in the model, and $L_{norm}$ is defined as $$L_{norm} = \left\| \Theta \right\|_2^2.$$
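A minimal PyTorch sketch of Eq. (10) follows; the weight values here are illustrative placeholders, since the paper's actual settings are given in its appendix.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, labels, x_hat, x, params, alpha=1.0, beta=0.1, gamma=1e-4):
    """Eq. (10): L = alpha * L_ce + beta * L_mse + gamma * L_norm.
    alpha, beta, gamma are illustrative placeholders, not the paper's values."""
    l_ce = F.cross_entropy(logits, labels)             # classification loss
    l_mse = F.mse_loss(x_hat, x)                       # decoder reconstruction loss
    l_norm = sum((p ** 2).sum() for p in params)       # squared L2 norm of parameters
    return alpha * l_ce + beta * l_mse + gamma * l_norm
```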
Results and discussion We apply the proposed model to two different classification problems. The first is cancer molecular subtype classification on the TCGA Pan-cancer dataset and the second is breast cancer subtype classification on the TCGA breast invasive carcinoma (BRCA) dataset [ 15 , 30 ]. Data and experiment settings The TCGA Pan-cancer RNA-seq data, CNV data, miRNA data, and molecular subtype labels are obtained from the University of California Santa Cruz’s Xena website [ 31 ]. We only keep samples that have all three omics data and molecular subtype labels, collecting 9,027 samples in total. We use 17,946 genes that are common to both the gene expression data and the CNV data, and 743 miRNAs. The total number of molecular subtypes is 27, and there is a clear imbalance among these 27 classes, as shown in Fig. 3 . All samples from class 24 are excluded from the study due to the lack of miRNA data. For BRCA subtype classification, there are 981 samples in total with 4 subtypes, as shown in Table 1 . For the experiments on both datasets, 80% of the data is used for training, 10% for validation, and 10% for testing. All classes are present in the test set. All expression values are normalized within their own omics. We select the top 700 genes ranked by gene expression variance across the samples, and the top 100 miRNAs by miRNA expression variance (a selection sketch appears at the end of this section). Results are averaged over five individual trials. The details of the model structure and hyperparameters are disclosed in the appendix. The model is implemented using the PyTorch Geometric library. Baseline models We selected four state-of-the-art models [ 1 , 7 , 23 , 25 ] as baseline models to evaluate the performance of the proposed approach. These four baseline models are implemented within the proposed framework in two forms: one with the original structure, and the other with some modifications to accommodate the multi-omics data. The details of all graph-based baseline implementation configurations are shown in Table 2 . We also included a fully-connected neural network (FC-NN) as a Euclidean-based baseline model. Conventional machine learning methods, such as Random Forest and SVM, are not included in the scope of this study because they do not scale well to multi-omics data, as discussed in our previous work [ 1 ]. Fully-connected neural network (FC-NN) The FC-NN is one of the most widely used deep learning models for data in Euclidean space. The implemented structure is the same as the parallel structure: the input data is passed through a dimension-increase layer and then flattened, and the flattened data is passed through three hidden layers and a softmax layer for classification. GCN models by Ramirez et al. The GCN model for cancer type classification is designed for gene expression data with intra-omic connections only [ 23 ]. The implementations of the original structure and the modified structure are GCN models with no regularization modules. Multi-omics GCN models by Li et al. The multi-omics GCN model for cancer molecular subtype classification is designed for gene expression and CNV data with intra-omic connections only [ 1 ]. The implementation of both structures is a GCN model with a decoder and a parallel structure, as shown in Table 2 . GrAMME Since GrAMME is not designed for cancer type classification [ 25 ], we modified the original structure for multi-omics data. GrAMME is a GAT model with intra-omic connections only. The implementation is a GAT model with no regularization modules. 
Multi-omics GAT by Kaczmarek et al. The multi-omics graph transformer for 12-cancer-type classification is designed for gene expression and miRNA data with inter-omics connections only [ 7 ]. As shown in Table 2 , the main difference between the multi-omics GAT and GrAMME is the construction of the graph. Performance on classification For both classification tasks, the results of the proposed model and the baseline models are shown in Table 3 . The proposed model with GAT layers outperforms all the baseline models on both tasks in all four metrics, and the proposed model with GCN layers ranks third for pan-cancer classification and second for breast cancer subtype classification. For the task of pan-cancer molecular subtype classification, the additional omics data in the modified structures improve model performance in all three comparisons of a baseline model with the original structure vs. the same model with the modified structure. For the same task, the multi-omics GCN model with the decoder and parallel structure shows superior performance among all the baseline models that utilize GCN layers. GrAMME, which utilizes intra-omic connections, performs better than GAT models that utilize inter-omics connections and is the best-performing baseline for the pan-cancer task. Overall, the proposed model achieves the best performance on the complex pan-cancer molecular subtype classification task in all four metrics, and we can conclude that more omics improve model performance and that models with more regularization modules or with GAT layers perform better. For breast cancer subtype classification, the overall trend is slightly different from the previous task. In most cases, including more omics yields little or no improvement. We believe this is due to the nature of breast cancer taxonomy: the subtype is based on the expression levels of multiple proteins, which ties the breast cancer subtype more closely to the gene expression omic than the pan-cancer molecular subtype is. This characteristic allows a model using only gene expression data, such as the original GCN model, to perform very well. However, the proposed model still outperforms all baseline models by a large margin in all four metrics. Ablation study We conduct an ablation study to evaluate the effects of different numbers of genes, different training set splits, different combinations of modules within the model, and different combinations of omics and graphs on the performance of the proposed model. Different numbers of genes We trained the proposed model and all baseline models with 300 and 500 genes for pan-cancer molecular subtype classification and with 300, 500, 1000, 2000, and 5000 genes for breast cancer subtype classification. The limited test scope for pan-cancer classification is due to computation constraints caused by its large number of samples. As shown in Table 4 , increasing the number of gene nodes improves the performance of all models. The FC-NN model demonstrates a large improvement in performance as the number of genes increases, and the proposed model with the GAT layer outperforms the baseline models at both numbers of genes. The accuracy and F1 scores of the proposed model and the baseline models for BRCA subtype classification are shown in Fig. 4 . 
The proposed model with GAT performs best when the number of genes is smaller than 1000, and the proposed model with GCN performs best when the number of genes is larger than 1000. The proposed GAT-based model yields its best result, with an accuracy of 88.9% and an F1 score of 0.89, when using 700 genes; the proposed GCN-based model yields its best result, with an accuracy of 90.1% and an F1 score of 0.90, when using 5000 genes. The detailed results are shown in the supplementary file (Additional file 1 ). The performance of the proposed model with GAT deteriorates beyond 1000 genes, but the performance of the proposed model with GCN continues to rise as the number of genes grows beyond 1000. All GAT-based baseline models show a similar deterioration around 1000 genes. We think the high computation cost of the GAT-based model can cause it to perform worse on a large graph than on a small graph. Overall, we can conclude that the proposed model with GCN layers scales better than that with GAT layers at a large number of genes. In the process of testing the models on a large graph, we also find that a GAT-based model is more stable with a smaller learning rate than a GCN-based model. We believe this is caused by GAT’s high computation costs, since a high learning rate may cause the model to become stuck in a local optimum. Overall, the proposed model achieves the best performance and scales well with a larger number of genes. We can also conclude that more genes and more omics mostly improve model performance, that models with more modules perform better, and that GAT-based models perform better on smaller graphs while GCN-based models scale better to larger graphs. Different training set split To examine the performance of the proposed model on a complex dataset with a smaller training set, we tested the model on the Pan-cancer dataset using three different training set splits, mimicking situations where only a smaller labeled dataset is available in the real world. The training set split was set to three successively smaller fractions, with the testing set split enlarged correspondingly; throughout these tests, the validation set split was kept constant. As shown in Table 5 , the proposed model with the GAT layer exhibits only a slight performance deterioration at the two larger of the reduced training set splits but displays a more pronounced decline in classification accuracy at the smallest split. In contrast, the proposed model with the GCN layer demonstrates consistent and robust performance across all three training-validation-testing splits, although its classification accuracy is lower than that of the model with the GAT layer at the two larger splits. Therefore, we can conclude that the proposed model with the GAT layer achieves superior performance compared to the model with the GCN layer when the training set is moderately reduced, whereas the model with the GCN layer performs better at a very small training set. Overall, the proposed model with the GCN layer offers more robust classification performance with smaller training sets. Different combinations of modules To examine the effect of different modules within the proposed model, we test three variants of the proposed model on pan-cancer molecular subtype classification. All variants are trained with all three omics data at 300, 500, and 700 genes. 
The proposed model without the decoder acts as a parallel-structured GNN model, the proposed model without the parallel structure acts as a graph autoencoder model, and the proposed model without both the decoder and the parallel structure acts as a graph-classification GNN model. As shown in Table 6 , models without the parallel structure generally perform worse than those without the decoder at any number of genes. This shows that the parallel structure plays an important role in feature extraction and demonstrates the benefit of including both local and global features. When the graph size is small (300 genes), the model without both the decoder and the parallel structure performs worse than those with either component. However, when the graph size is large enough (500 or 700 genes), the model without both components performs about the same as those with either component. We believe the extra information in the large graph compensates for the loss in performance caused by the exclusion of either the decoder or the parallel structure. Different combinations of omics and graphs To test the effect of different choices of omics and different graphs, we generate five combinations of omics: mRNA, miRNA, mRNA + CNV, mRNA + miRNA, and mRNA + CNV + miRNA. For mRNA + miRNA and mRNA + CNV + miRNA, two different variants of graphs are also tested. All models are evaluated on pan-cancer molecular subtype classification and trained with 500 genes, except for the miRNA-only setting, which contains only 100 miRNA nodes. As shown in Table 7 , the best-performing setting is mRNA + CNV + miRNA with intra-omic edges for both GAT-based and GCN-based models. The worst-performing setting is miRNA only, which has the smallest graph size and the least information. Models on mRNA + CNV perform better than those on mRNA + miRNA, but adding miRNA to mRNA + CNV (the mRNA + CNV + miRNA setting) still improves model performance. Models with the intra-omic graph perform slightly better than models with the inter-omics graph. The relative performance across the different settings is the same for both GAT-based and GCN-based models.
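The variance-based gene selection and the data split referenced in the experiment settings above can be sketched as follows; this is an illustration under stated assumptions (the function names and the random permutation split are not from the paper).

```python
import numpy as np

def select_top_variance(expr, k=700):
    """Keep the k genes with the highest expression variance across samples.
    expr: (n_samples, n_genes) normalized expression matrix."""
    return np.argsort(expr.var(axis=0))[::-1][:k]

def train_val_test_split(n_samples, seed=0, frac=(0.8, 0.1, 0.1)):
    """Random 80/10/10 split of sample indices, as used for both datasets."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_samples)
    n_tr, n_va = int(frac[0] * n_samples), int(frac[1] * n_samples)
    return order[:n_tr], order[n_tr:n_tr + n_va], order[n_tr + n_va:]
```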
Conclusion In this study, we propose a novel end-to-end multi-omics GNN framework for accurate and robust cancer subtype classification. The proposed model utilizes multi-omics data in the form of a heterogeneous multi-layer graph, the supra-graph built from the GGI network, the miRNA-gene target network, and miRNA meta-paths. While GNNs have been previously employed for genomics data analysis, our model’s novelty lies in the utilization of a heterogeneous multi-layer multi-omics supra-graph. The supra-graph not only incorporates inter-omics and intra-omic connections from established biological knowledge but also integrates genomics, transcriptomics, and epigenomics data into a single graph, providing a novel advancement in cancer subtype classification. The proposed model outperforms all four baseline models for cancer molecular subtype classification. We perform a thorough comparative analysis of GAT- and GCN-based models at different numbers of genes, different combinations of omics, and different graph structures. Compared to the baseline models, the proposed model achieves the best performance for both cancer molecular subtype classification and BRCA subtype classification. The proposed model with GAT layers performs better than that with GCN layers on smaller graphs (fewer than 1000 genes). However, the performance of the GAT-based model deteriorates as the size of the graph grows beyond a certain threshold, whereas the performance of the GCN-based model continues to improve as the graph grows. Therefore, we conclude that a GAT-based model is more suitable for smaller graphs, where it has higher feature extraction ability and its computation cost remains manageable. By studying the effect of different modules within the proposed model and different combinations of omics, we find that the addition of the decoder and the parallel structure, and the inclusion of additional omics, improve the performance of the proposed model. The benefit of the parallel structure outweighs that of the decoder, especially on smaller graphs, and the benefit of adding CNV is greater than that of adding miRNA. We also find that using a graph with only intra-omic edges yields better performance than using a graph with only inter-omics edges, which agrees with the results of a previous study [ 7 ]. The proposed model also has some limitations. We investigate only two well-established and widely adopted GNN models, while new models continue to emerge with the recent surge of GNN research. As the size of the graph grows or more omics are added, GAT-based models become more sensitive to parameters and take much longer to train. Overcoming these limitations is a direction of our future research. The proposed model for cancer subtype classification also depends on labeled data, which is costly to annotate and difficult to obtain in the real world; exploring unsupervised learning for cancer subtype detection is another direction we aim to pursue. In summary, incorporating gene-based and non-gene-based omics data in the form of a supra-graph with inter-omics and intra-omic connections improves cancer subtype classification. A GAT-based model is preferable for smaller graphs, while a GCN-based model is preferable when dealing with large and complex graphs.
Background The recent development of high-throughput sequencing has created a large collection of multi-omics data, which enables researchers to better investigate cancer molecular profiles and cancer taxonomy based on molecular subtypes. Integrating multi-omics data has been proven to be effective for building more precise classification models. Most current multi-omics integrative models use either early fusion in the form of concatenation or late fusion with a separate feature extractor for each omic, mainly based on deep neural networks. Due to the nature of biological systems, graphs are a better structural representation of bio-medical data. Although a few graph neural network (GNN) based multi-omics integrative methods have been proposed, they suffer from three common disadvantages. First, most of them use only one type of connection, either inter-omics or intra-omic; second, they only consider one kind of GNN layer, either graph convolution network (GCN) or graph attention network (GAT); and third, most of these methods have not been tested on a more complex classification task, such as cancer molecular subtypes. Results In this study, we propose a novel end-to-end multi-omics GNN framework for accurate and robust cancer subtype classification. The proposed model utilizes multi-omics data in the form of heterogeneous multi-layer graphs, which combine both inter-omics and intra-omic connections from established biological knowledge. The proposed model incorporates learned graph features and global genome features for accurate classification. We tested the proposed model on The Cancer Genome Atlas (TCGA) Pan-cancer dataset and the TCGA breast invasive carcinoma (BRCA) dataset for molecular subtype and cancer subtype classification, respectively. The proposed model shows superior performance compared to four current state-of-the-art baseline models in terms of accuracy, F1 score, precision, and recall. The comparative analysis of GAT-based and GCN-based models reveals that GAT-based models are preferred for smaller graphs with less information and GCN-based models are preferred for larger graphs with extra information. Supplementary Information The online version contains supplementary material available at 10.1186/s12859-023-05622-4. Keywords
Supplementary Information
Acknowledgements Not applicable. Author contributions BL obtained the TCGA data and network data. BL designed the new method and analyzed the results. BL and SN drafted the manuscript and revised the manuscript together. Both authors have approved the final manuscript. Funding This work is supported by the National Science Foundation (NSF) under grant No. 1942303, PI: Nabavi. Availability of data and materials TCGA Pan-cancer dataset and TCGA BRCA dataset are both obtained from Xena database (https://xenabrowser.net), the detailed link for TCGA Pan-cancer dataset is ( https://xenabrowser.net/datapages/?cohort=TCGA Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Bioinformatics. 2024 Jan 15; 25:27
oa_package/49/aa/PMC10789042.tar.gz
PMC10789043
0
Background Human genetics, population-scale biobanks, and cancer genome sequencing have identified thousands of genetic variants associated with disease [ 1 , 2 ]. However, the rate of discovery of such variants vastly exceeds our ability to understand and experimentally model their functional effects. High-throughput CRISPR-mediated pooled screening for phenotype [ 3 ] or coupled to single-cell transcriptomics [ 4 ] offers a powerful way to assess the effects of thousands of genetic perturbations. However, it is mainly limited to knockouts or manipulation of expression level using CRISPR interference or CRISPR activation, since the guide RNA (gRNA) is used as a proxy for cell genotype and thus the efficiency of the perturbation must be very high. This makes it very challenging to screen for single nucleotide variants, since base editing, prime editing, or homology-directed repair (HDR) efficiency is rarely high enough [ 5 ], is highly variable between different genomic sites and cell types, and can lead to undesirable editing byproducts such as bystander mutations, insertions/deletions, or heterozygous edits. Even in those cases where base or prime editor screens have been successful [ 6 – 9 ], it is not possible to distinguish cells containing a non-functional gRNA that has not edited the genome from cells with a functional gRNA that have successfully introduced a benign edit with no effect on cell phenotype. This means that benign variants cannot be accurately classified without simultaneous genotyping of the cells. It is possible to directly sequence genomic edits during flow cytometric [ 9 ] or life/death [ 10 ]-based phenotypic selection, allowing SNVs to be screened with these readouts, but this is difficult to apply to transcriptomic readouts. A number of methods have been developed to couple the genotype and phenotype of single cells. These fall into two broad categories: those that amplify the whole genome and transcriptome from a single cell [ 11 – 17 ] and those that directly read out genotype from the RNA [ 18 – 21 ]. The first class is often plate-based, limiting its scalability, with the exception of two recent studies that use split-pool barcoding [ 17 ] or droplet microfluidics [ 16 ] to increase the number of cells that can be assayed. While these techniques are useful for discovering natural variation and its effect on the transcriptome, they are not ideal for perturbation screens due to the cost of whole-genome sequencing and the relatively high allele dropout rate, which makes it difficult to accurately call SNVs, especially heterozygotes. Even in the best example, allele dropout rates are around 20–25% [ 13 ], with high coefficients of variation across the genome, and the higher-throughput methods show even higher variability [ 16 ]. One method, TARGET-seq [ 22 ], uses targeted amplification of DNA and achieves low allele dropout (around 10%), but it is only possible in plates due to the need for a large dilution step after cell lysis and is thus not scalable to tens or hundreds of thousands of cells. The second class of methods relies on the direct detection of variants within the RNA, using short [ 18 – 20 ] or long read sequencing [ 21 ] to capture variants at different locations within the transcript. While these methods require only limited adaptation of existing protocols and can be high-throughput, they are only possible for genes with expression levels high enough to capture sufficient transcripts from each cell. 
They are also blind to mutations that abolish RNA expression, such as nonsense or frameshift mutations that trigger nonsense-mediated decay, and it is difficult to accurately identify heterozygous mutants that show allele-specific expression. Importantly, non-coding variants that are not transcribed, such as those frequently identified in genome-wide association studies, are not accessible to this kind of technology. To address these limitations in scale, accuracy, and applicability to all SNVs, we developed a method, scSNV-seq, which uses transcribed genetic barcodes to couple targeted single-cell genotyping with transcriptomics, identifying the edited genotype and transcriptome of each individual cell rather than predicting genotype from gRNA identity. This enables accurate high-throughput pooled screening for SNVs with single-cell “omics” readouts.
Methods gRNA library cloning to include PuroR barcode and iBAR barcode libraries To introduce the PuroR barcode (in the 5′ UTR of the puromycin resistance gene), a single-stranded ultramer containing NeoUTR3 [ 31 ] was amplified using KAPA to add Gibson arms and a 12N barcode in the reverse primer. After SPRI purification, the product was cloned using Gibson assembly into the lentivector (Addgene #67,974) cut with XbaI and XhoI. After ethanol precipitation, 5 Gibson reactions were electroporated into supercompetent cells (Endura, Lucigen) and grown in liquid culture to give a coverage of around 100 million barcodes. gRNAs with iBAR barcodes were introduced into the PuroR library by amplifying the gRNA library tiling JAK1 [ 23 ] (Twist, 2000 guides, 1055 of which map to JAK1, with the remainder being guides targeting intergenic regions, essential genes, or non-targeting controls) to include a 6N randomized iBAR barcode in the primer. After a nested PCR, the gRNA iBAR library was cloned by Gibson assembly into the PuroR library cut with BbsI and BamHI. After ethanol precipitation, 2 Gibson reactions were transformed into supercompetent cells and grown to give a coverage of around 40 million events. All primers are detailed in Additional file 3 : Table S2. Base editing screens For base editing experiments, we derived a clonal line of HT-29 cells expressing a base editor (cytidine BE3-NGG) under a doxycycline-inducible promoter [ 23 ] and introduced the lentiviral gRNA library tiling JAK1 with PuroR and iBAR barcodes as described above. We used an infection rate of ~ 30% to minimize the introduction of multiple gRNAs into one cell and selected infected cells with 2 μg/ml puromycin (Thermo Fisher Scientific). Cells were maintained in 0.5 μg/ml puromycin for the duration of the experiment to maintain gRNA expression. Base editing was induced by the addition of doxycycline (1 μg/ml; Sigma Aldrich) for 72 h. After editing, we bottlenecked a subset of these edited cells (15,000 cells) and also used FACS [ 23 ] to select LoF cells (50,000 cells) to ensure we captured representative phenotypes in our bottlenecked populations. After expansion, these cells were loaded onto the Chromium X (4 lanes, aiming to recover 60,000 cells per lane) for transcriptomic experiments (see below for further details) and were also further bottlenecked (8000 cells) for the genotyping plus transcriptomic experiments. After further expansion, these cells were single-cell genotyped with the Tapestri machine (Mission Bio) according to the manufacturer’s instructions, using 4 reactions with up to 10,000 cells per reaction and a custom panel of amplicon sequences (Additional file 3 : Table S2) spanning JAK1 exons and the promoter region, as well as the gRNA plus iBAR barcodes and the PuroR barcodes. The same population of cells was also loaded onto the Chromium X (2 lanes, aiming to recover 60,000 cells per lane). For all transcriptomics experiments, the base editor was induced again for 24 h, as we have found it necessary to have expression of Cas9 to stabilize the gRNA transcripts and improve gRNA detection in single cells. We stimulated cells with IFN-γ (400 U/ml; Thermo Fisher Scientific) for 16 h before processing cells. We used the 5′HT kit (10X Genomics), and cDNA libraries were prepared according to the manufacturer’s instructions. 
We performed direct gRNA capture by spiking in a scaffold-specific RT primer before loading, and after the cDNA amplification, we performed a nested PCR from the small SPRI fraction to produce a library for sequencing both the gRNA and the iBAR barcode. We also spiked in a puromycin resistance gene-specific RT primer and carried out an analogous nested PCR to produce a PuroR barcode library (primer sequences in Additional file 3 : Table S2). Sequencing was performed on the NovaSeq 6000 (Illumina). Data analysis of single-cell base editor screen without genotyping (non-genotyped large BE experiment) Processing and quality control We used Cell Ranger 7.0.1 to obtain UMI counts for gRNA and mRNA and for cell calling. For quality control, we removed low outliers for the total count, low outliers for the number of detected features, and high outliers for the percentage of counts from mitochondrial genes using the scater [ 32 ] Bioconductor package, obtaining 155,429 cells (non-genotyped large BE experiment). gRNA calling We developed a robust method to call gRNAs and other barcodes in cells from (UMI) counts using a probabilistic model of mixtures of skewed normal distributions with 3 components. We considered all UMI counts above a minimum threshold of 2 in all cells. Then, we used the mixture model to group them into 3 clusters: 1 cluster for ambient background noise and 2 clusters for signal counts, allowing for a bimodal distribution of signal counts. For robust gRNA assignment and to exclude undetected multiple gRNA assignments in a cell, we defined 2 thresholds for UMI counts: a lower threshold, below which UMI counts have at least a 90% probability of belonging to the ambient cluster, and an upper threshold, above which UMI counts have at most a 10% probability of belonging to the ambient cluster. A gRNA was then called in a cell if the UMI counts for 1 gRNA were above the upper threshold and no other gRNA had UMI counts above the lower threshold. We obtained 43,639 cells from this robust assignment of one gRNA and one iBAR per cell, which we used for downstream analysis. Using only cell barcodes with a unique gRNA and iBAR assigned to them also removed most doublets, as these would have 2 gRNAs. Dimensionality reduction and clustering First, genes that are differentially expressed (DE) for at least one gRNA (with at least ten cells assigned to it) compared to cells with non-targeting gRNAs are identified using the Wilcoxon rank-sum test [ 33 ]. Then, we performed principal component analysis (PCA) on the data, subset to the DE genes and the genes in the JAK-STAT pathway. Louvain clustering [ 34 ] was performed on a neighborhood graph using the ten nearest neighbors for each cell, based on the low-dimensional representation obtained by the PCA (Additional file 1 : Fig. S1b). Two larger meta-clusters (Fig. 1 a), referred to as WT (wild-type) and LoF (loss-of-function), are formed by grouping clusters by the similarity of their transcriptomes (see dendrogram in Additional file 1 : Fig. S1b) and by the percentage of cells with non-targeting gRNAs in the cluster (Additional file 1 : Fig. S1c). Differential expression analysis for LoF gRNAs gRNAs for which at least 70% of cells, and at least 3 cells, are in the LoF cluster were assigned to the LoF group. Differential analysis was performed between all cells of the LoF group and all cells with non-targeting gRNAs using the Wilcoxon rank-sum test [ 33 ] (Fig. 1 c). 
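The gRNA-calling step described above can be sketched as follows. This is a simplified illustration, substituting a Gaussian mixture on log counts for the paper's skewed-normal mixture and assuming a plain cells x guides UMI matrix; the function name and defaults are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def call_guides(umi, min_count=2, lo_p=0.9, hi_p=0.1):
    """Call one gRNA per cell from a (cells x guides) UMI count matrix.
    Gaussian mixture on log counts stands in for the skewed-normal mixture;
    the component with the lowest mean is treated as the ambient cluster."""
    counts = umi[umi >= min_count]
    gm = GaussianMixture(n_components=3, random_state=0)
    gm.fit(np.log1p(counts).reshape(-1, 1))
    ambient = int(np.argmin(gm.means_.ravel()))
    grid = np.arange(min_count, int(umi.max()) + 1)
    p_amb = gm.predict_proba(np.log1p(grid).reshape(-1, 1))[:, ambient]
    lower = grid[np.argmax(p_amb < lo_p)]   # below this: >= 90% ambient probability
    upper = grid[np.argmax(p_amb < hi_p)]   # above this: <= 10% ambient probability
    calls = {}
    for cell in range(umi.shape[0]):
        above_hi = np.where(umi[cell] >= upper)[0]
        above_lo = np.where(umi[cell] >= lower)[0]
        if len(above_hi) == 1 and len(above_lo) == 1:
            calls[cell] = int(above_hi[0])  # unique guide, no competing signal
    return calls
```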
The Wilcoxon rank-sum test is a standard non-parametric test that compares, for each gene, how often its expression is higher in the LoF group than in the cells with non-targeting gRNAs. Genes significantly more highly or lowly expressed at an FDR level of 0.1 are highlighted in Fig. 1 c. The area under the curve (AUC) is the proportion of times that the expression of a gene is higher in the LoF group than in a corresponding cell of the non-targeting group, where corresponding refers to being at the same quantile within the respective group. Therefore, AUC < 0.5 means downregulation in the LoF group and AUC > 0.5 upregulation. Using a non-parametric approach like the AUC is more appropriate and robust for cases where a set of cells cannot be assumed to follow a parametric distribution such as a Gaussian or a negative binomial distribution. Here, we cannot assume that cells with the same barcode have been perturbed so as to follow a parametric distribution, as the cells may have been impacted to different degrees; an extreme example of this is the SoF mutants (Fig. 2 f). Experiment with genotyping: analysis of scDNA-seq modality The Tapestri DNA Pipeline On-prem was used for QC, cell barcode correction, alignment, and cell calling, using as the reference the hg38 genome with pKLV2 added (Additional file 3 : Table S2). For each Mission Bio barcode identified as a cell by the pipeline (34,801 cells), variant calling was performed using GATK HaplotypeCaller [ 35 ]. gRNA, iBAR, and puroR counts were computed for each cell barcode, using the reads for pKLV2 from the aligned bam files. Then, gRNAs, iBARs, and puroRs were assigned to cells using the same gRNA calling method as described above for the scRNA-seq modality. We obtained 13,102 cells with a unique puroR barcode robustly assigned, 10,869 cells with a gRNA + iBAR combination robustly assigned, and 10,112 cells with both a unique puroR and a unique gRNA + iBAR assigned, i.e., 77% of cells with a unique puroR barcode assigned were also assigned both gRNA and iBAR, and 93% of all cells with a unique gRNA + iBAR were assigned a unique puroR (Additional file 1 : Fig. S2a). This showed that while the detection of the puroR barcode was better for the scDNA modality, gRNA + iBAR and puroR assignments agreed almost perfectly for cells with a robust gRNA assignment. This allows us to map puroR barcodes to gRNA + iBAR, facilitating analysis for the scRNA-seq modality, where we used cells with only iBAR + gRNA assigned and without puroR, as iBAR + gRNA detection was much better than that for puroR. We established this correspondence between puroR on the one hand and gRNA + iBAR on the other for all puroRs that only occurred paired with one gRNA + iBAR and were paired with that gRNA + iBAR for at least 2 cells. By using only cells with confidently assigned unique barcodes, we avoid including doublets and cells with multiple gRNAs, as well as droplets mistakenly identified as cells. Groups of cells from the same parent cell (barcode groups) were identified as groups that share the same gRNA + iBAR combination and the same puroR. For cases where either of the barcodes could not be called in a cell, the assignment to groups was performed on the basis of the barcode that was called (iBAR + gRNA or puroR). We obtained 332 unique barcodes with at least 3 cells and with puroR and iBAR + gRNA confidently assigned. The smaller number of gRNAs represented compared to the large BE experiment resulted from deliberate bottlenecking. 
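As a concrete illustration of the AUC-based differential expression used above, a minimal per-gene sketch follows (an illustration, not the authors' code); it uses the standard equivalence AUC = U / (n1 * n2) between the Mann-Whitney U statistic and the AUC, with Benjamini-Hochberg FDR control.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

def de_auc(expr_group, expr_nt, fdr=0.1):
    """Wilcoxon rank-sum DE with AUC effect sizes.
    expr_group: (n1, n_genes) expression of the perturbed group (e.g., LoF cells)
    expr_nt:    (n2, n_genes) expression of non-targeting control cells
    AUC < 0.5 -> downregulated in the group; AUC > 0.5 -> upregulated."""
    n1, n2 = expr_group.shape[0], expr_nt.shape[0]
    aucs, pvals = [], []
    for g in range(expr_group.shape[1]):
        res = mannwhitneyu(expr_group[:, g], expr_nt[:, g], alternative="two-sided")
        aucs.append(res.statistic / (n1 * n2))   # U statistic rescaled to an AUC
        pvals.append(res.pvalue)
    rejected, qvals, _, _ = multipletests(pvals, alpha=fdr, method="fdr_bh")
    return np.array(aucs), qvals, rejected
```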
Only 501 of the gRNAs were present with at least 1 cell for the scDNA modality (290 with at least 2 cells, 184 with at least 10 cells), reflecting this deliberate bottlenecking. Genotypes were then called on a per barcode group basis, to allow robust genotyping for single-cell data, which have higher noise levels than pooled data and may be affected by allele dropout as well as distortion of genotype calling because of ambient counts. First, we subsetted cell genotypes to C>T and A>G mutations (the latter for gRNAs on the reverse strand) and removed frequent mutations occurring in more than 10% of the barcodes, as we assumed that they were not caused by the gRNAs. We called genotypes for barcode groups with at least 3 cells. We used the following computational method to assess, for each barcode group, whether a genotype can be called robustly (callability) and to call the genotype: for each position in the genome, a variant was called if it was present on at least one allele in at least 2 cells from the group, comprising at least 50% of the cells of the group, and if a majority of cells with the variant had it on the same number of alleles (see the code sketch below). This relatively low threshold of 50% reflects the fact that it is unlikely that more than 2 cells and more than 50% of the cells of a barcode group have a miscalled mutation by chance, and it limits the impact of dropout and missed mutations on genotype calling at the level of barcode groups. A barcode group was called WT if, for each position, no more than 1 cell (or 0 cells if < 10 cells per barcode group) has a mutation on any number of alleles. The accuracy of this approach of genotype calling at the barcode-group level is shown in Fig. 2 d. At this level of robustness and accuracy, we were able to call genotypes for 233 barcodes (Fig. 1 d, e, Additional file 2 : Table S1), out of 332 barcodes with at least 3 cells identified overall (70%), with a total of 9908 cells. For barcodes with at least 3 cells, we found no significant dependence of the callability of the genotype on cell number (Wilcoxon rank-sum test, p = 0.103). Consequences were assigned to edits at the barcode-group level using VEP [ 36 ], restricting to MANE Select transcripts. Edits in the JAK1 promoter region (chr1:64,964,978–64,967,543) were labeled as promoter [ 23 ]. For genotypes with several edits, we call the most severe consequence, using the ordering stop codon/start lost > splice variant > missense variant > promoter/intron > synonymous. Detailed genotype calls per barcode with consequences and additional analysis results can be found in Additional file 2 : Table S1.

Experiment with genotyping: analysis of scRNA-seq modality
This section describes the processing of the scRNA-seq modality for the smaller, bottlenecked experiment that was combined with genotyping.

Basic processing and gRNA calling
Basic processing and gRNA calling were performed in the same way as for the non-genotyped data. iBAR and puroR calling was performed as follows: first, a list of all possible iBARs was created, and a list of puroRs was obtained from the puroR calling at the scDNA level. These lists were used as input to the Cell Ranger pipeline to obtain UMI counts for iBARs and puroRs in the same way as for gRNAs. Finally, iBARs were called in cells using the same method as for gRNAs. Dimensionality reduction was also performed in the same way as for the non-genotyped data set. We obtained 26,779 cells with a confidently assigned unique gRNA and iBAR.
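Returning to the barcode-group genotype calling described above, the consensus rule can be sketched as follows. The data structures and names are illustrative stand-ins, not the published implementation.

```python
# Sketch of consensus genotype calling for one barcode group. `cells` maps
# cell IDs to {position: number of mutated alleles}; thresholds follow the
# rules in the text. Data structures and names are illustrative.
from collections import Counter

def call_barcode_group(cells):
    n = len(cells)
    positions = {p for geno in cells.values() for p in geno}
    consensus = {}
    for pos in positions:
        alleles = [g[pos] for g in cells.values() if g.get(pos, 0) > 0]
        # variant call: >= 2 cells and >= 50% of the group carry the variant
        if len(alleles) >= 2 and len(alleles) >= 0.5 * n:
            n_alleles, support = Counter(alleles).most_common(1)[0]
            if support > len(alleles) / 2:  # majority agree on allele count
                consensus[pos] = n_alleles
    # WT call: per position at most 1 mutated cell (0 if the group has < 10 cells)
    max_noise = 1 if n >= 10 else 0
    is_wt = all(
        sum(1 for g in cells.values() if g.get(p, 0) > 0) <= max_noise
        for p in positions
    )
    if consensus:
        return "mutant", consensus
    if is_wt:
        return "WT", {}
    return "not callable", {}  # some noise but no robust variant call
```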
Of the 26,779 cells with a confidently assigned unique gRNA and iBAR, a total of 18,978 had an iBAR + gRNA combination present among the barcode groups with confident genotype assignment from the DNA modality (200 barcodes; median number of cells per barcode group 14, mean 95, Fig. 1 d).

Mapping genotypes to the scRNA-seq modality

Integration with non-genotyped data set
To compare the genotyped data set to the larger non-genotyped data set at the level of UMAPs and clusters, we used mutual nearest neighbors [ 37 ] for data integration and, based on the integrated PCA representation, assigned to each cell in the genotyped data set the UMAP coordinates of its nearest neighbor in the non-genotyped data set (Fig. 1 e) and the most frequent cluster among its 10 nearest neighbors in the non-genotyped data set (Fig. 1 h) (see the code sketch below). For the clusters in Fig. 1 h, a cell was filtered out if it was the only cell with a specific barcode within a cluster, to denoise possible errors in barcode assignment for the scRNA-seq data.

Correlation of differential expression across barcodes
Differential expression analysis was performed for the barcode groups with confidently assigned genotypes and with at least 10 cells for the scRNA-seq modality (114 barcodes). Figure 2 a shows the correlations of differential gene expression of each barcode relative to cells with both WT genotypes and non-targeting gRNAs. The differential expression compared to the non-targeting cells with WT genotypes was computed for each gene and each barcode with at least 10 cells. Then, we computed correlations across the AUCs obtained from this differential expression analysis, including in the correlation computation only genes significantly differentially expressed for at least one barcode.

Diffusion and pathway scores
Diffusion maps [ 28 ] were used to identify trajectories in the data. The first diffusion component, which we identified as the trajectory towards full LoF of JAK1, was named the diffusion score. The pathway score for the JAK-STAT pathway (Additional file 1 : Fig. S3a) was computed using the PROGENy tool [ 29 ].

Estimation of false-negative and false-positive genotype calls
We estimated the accuracy of our computational approach to genotyping at the barcode level using stop codons (which we can assume to lead to LoF) and WT calls (which cannot be LoF). We estimated the number of false-positive genotype calls by examining the number of barcodes called as stop codons or splice variants but with a diffusion score indicative of no LoF. Similarly, false negatives were estimated by considering the number of barcodes called as WT but with a LoF phenotype (Fig. 2 d). False positives and negatives for predicted rather than actually called genotypes were estimated using the predicted genotypes, excluding those gRNAs targeting the JAK1 promoter or UTR region and not covered by an amplicon.

Characterization of SoF variants
We explored the heterogeneity of the LoF level of homozygous missense variants by means of density plots of the diffusion scores of all barcodes with missense variants, including variants with low impact (low diffusion scores, indicating benign variants), intermediate diffusion scores (indicating SoF), and high impact (high-score missense mutations) (Fig. 2 f). The plots (one density plot for each barcode) are ordered vertically by the mean diffusion score across the cells with the barcode. Barcodes with intermediate diffusion scores are highlighted by a purple box.
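The nearest-neighbor transfer used in the integration step above can be sketched as follows. Random placeholder arrays stand in for the real MNN-corrected PCA coordinates; this is an illustration under those assumptions, not the published code.

```python
# Sketch of transferring UMAP coordinates and cluster labels from the
# non-genotyped reference to genotyped cells via nearest neighbours in the
# integrated PCA space. Random placeholder arrays stand in for real data.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
pca_ref = rng.normal(size=(1000, 30))      # integrated PCA, non-genotyped cells
pca_query = rng.normal(size=(200, 30))     # integrated PCA, genotyped cells
umap_ref = rng.normal(size=(1000, 2))      # reference UMAP coordinates
clusters_ref = rng.integers(0, 5, 1000)    # reference cluster labels

nn = NearestNeighbors(n_neighbors=10).fit(pca_ref)
_, idx = nn.kneighbors(pca_query)          # idx: (n_query, 10) neighbour indices

umap_query = umap_ref[idx[:, 0]]           # UMAP coords of the single nearest cell
clusters_query = np.array([                # most frequent cluster among the 10 NN
    np.bincount(clusters_ref[row]).argmax() for row in idx
])
```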
In these plots, a second, smaller purple box highlights one additional barcode, to illustrate that this barcode has the same genotype as one of the barcodes in the first box. The variants highlighted by the boxes are characterized by lower FACS scores and higher proliferation scores (SoF). Genes specifically regulated differently between SoF and full-impact missense mutations were identified as those either significantly upregulated for SoF compared to full-impact variants and not downregulated for SoF compared to benign missense variants (AUC > 0.45), or significantly downregulated for SoF compared to full-impact variants and not upregulated for SoF compared to benign missense variants (AUC < 0.55; Additional file 1 : Fig. S3d). These cutoffs distinguish these genes from those that are upregulated compared to high-score missense mutations and downregulated compared to benign missense mutations, i.e., genes whose expression lies on a progressive trajectory between benign and full LoF (area highlighted in yellow in Additional file 1 : Fig. S3d).
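For clarity, the cutoff logic just described can be expressed as a short filter. This is an illustrative sketch; the column names are ours, assuming per-gene AUCs and FDR values from the two comparisons have been precomputed.

```python
# Sketch of the cutoff logic for SoF-specific genes. `df` holds per-gene AUCs
# and FDR values from the comparisons in the text; column names are ours.
import pandas as pd

def sof_specific_genes(df: pd.DataFrame, alpha: float = 0.1) -> pd.DataFrame:
    sig = df["fdr_sof_vs_full"] < alpha  # significant vs full-impact missense
    up = sig & (df["auc_sof_vs_full"] > 0.5) & (df["auc_sof_vs_benign"] > 0.45)
    down = sig & (df["auc_sof_vs_full"] < 0.5) & (df["auc_sof_vs_benign"] < 0.55)
    return df[up | down]
```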
Results and discussion
We used a previously described [ 23 ] cytosine base editor screen in HT-29 cells with gRNAs tiling across the JAK1 gene to establish our method. We have phenotypic data on the response of each variant to interferon gamma (IFN-γ), which triggers cell death and induction of PD-L1 and MHC-I expression, both of which are blocked by loss of JAK1 function [ 23 ]. Interrogating JAK1 variants can inform our understanding of the genetic basis of immunological disorders and of mechanisms of cancer resistance to anti-tumor immunity.

Single-cell transcriptomics of base edited cells after IFN-γ treatment showed that cells fell into two broad clusters (Fig. 1 a). To assign functions to each cluster, we assigned gRNAs to each cell (Additional file 1 : Fig. S1a) and predicted the resulting edits (Additional file 1 : Fig. S1d). We identified the two clusters as JAK1 loss of function (LoF) or not LoF by merging smaller clusters based on gene expression and on the prevalence of cells with non-targeting gRNAs (NT-gRNA) in each cluster (Additional file 1 : Fig. S1b, c). Stop codons and splice variants were predominantly contained in the LoF cluster, with WT, synonymous, and intronic variants in the not LoF cluster (Fig. 1 b, Additional file 1 : Fig. S1e). This classification was confirmed by comparison with the results of previous screens for growth (proliferation score, Additional file 1 : Fig. S1f) or induction of PD-L1 and MHC-I (FACS score) in the presence of IFN-γ (Additional file 1 : Fig. S1g) [ 23 ]. Analysis of differential gene expression between the two clusters showed a strong enrichment for components of the IFN-γ signaling pathway (Fig. 1 c), including JAK1 itself, IFNGR1, JAK2, IRF9, STAT1, STAT2, and STAT3, and downstream effectors such as IL15, IL15R1, CCND1, CCND3, and SOCS3. STAT1 was one of the most downregulated transcripts in JAK1 LoF cells, suggesting a positive feedback loop may maintain STAT1 mRNA expression in the presence of JAK1 signaling [ 24 ]. Also, the regulatory subunit of phosphoinositide-3-kinase (PIK3R1) was highly upregulated in the JAK1 LoF cells, consistent with extensive cross-talk between the IFN-γ and PI3K signaling pathways [ 25 ].

We next performed targeted single-cell genotyping to identify the precise mutations introduced in JAK1 within each cell. To couple the genotype to the transcriptome, the cells used for this screen had transcribed genetic barcodes introduced by lentivirus on the same vector as the gRNA library (Fig. 1 d). We introduced two independent barcodes to compare their effectiveness and to increase the sensitivity of barcode detection; the majority of cells had both barcodes detectable (Additional file 1 : Fig. S2a). One barcode was in the 5′ untranslated region of the puromycin resistance gene (puroR BC), and the second was within the first loop of the gRNA (iBAR BC) [ 26 ]. Barcodes were highly complex (the “Methods” section), and each transduced cell was thus marked with a unique barcode. Both barcodes can be read out in targeted single-cell genotyping simultaneously with amplicons tiling across the JAK1 gene, as well as in single-cell transcriptomics using targeted enrichment of the transcribed barcode sequences (the “Methods” section). Although our single-cell genotyping method has low allele dropout rates of around 10% [ 27 ], there is inherent noise in single-cell genotyping resulting from amplification from only a few copies of the genome.
In order to understand how to accurately genotype these triploid cells, we bottlenecked the population severely to obtain multiple daughter cells from each edited cell, all of which are marked by the same barcode. When analyzing genotypes from single cells, we frequently see multiple heterozygous edits per cell which are not present in the consensus genotype of barcode groups of 3 or more cells (Additional file 1 : Fig. S2b). We therefore believe these are errors in the single-cell genotyping, which can be overcome by considering multiple cells within a single barcode group, and that we can confidently call genotypes with a minimum of 3 cells per barcode. Based on our data, we would suggest genotyping each barcode across 10 cells, to ensure most barcodes have > 3 cells, and measuring the transcriptome with ~ 50 cells per barcode, depending on the strength of the phenotype; this means that on the order of a thousand variants can be assayed in a single experiment. These variants can be within a single gene or spread across hundreds of sites across the genome.

Using the above criteria, we were able to call 233 barcodes with confident genotypes that were represented by 18,978 cells in the transcriptomics analysis (average 81 cells/barcode) (the “Methods” section, Fig. 1 d, e), and these barcodes were used in all subsequent analyses. For 25 gRNAs, we saw different barcodes for the same gRNA, resulting from multiple independent editing events (Additional file 1 : Fig. S2c). When the actual genotypes were compared with those predicted from the gRNA sequence, only 50% of genotypes were exactly as predicted (Fig. 1 f), although this improved to 71% when analyzed at the protein level, due to degeneracy in codon usage (Fig. 1 f, Additional file 1 : Fig. S2d, e). Of the 29% with functional consequences different from the predicted ones, 48.4% had heterozygous edits, 45.2% were unedited, and 6.5% had a different functional consequence. The most frequent edits were homozygous (160 of 233 barcodes), followed by heterozygous edits on 1 (73 barcodes) or 2 alleles (30 barcodes) (Additional file 1 : Fig. S2e, f). Most homozygous edits were within the predicted base editing window (66%, Additional file 1 : Fig. S2g, h), with 8% of these also showing homozygous edits outside the window (Additional file 1 : Fig. S2h). These results are important for interpreting base editing screens where genotype is inferred from sgRNA identity, since a large proportion of edits are not as predicted.

Analysis of the transcriptome of these genotyped cells showed an improvement in the classification of stop codon or splice variant mutations into the correct (LoF) cluster and of WT cells into the not LoF cluster when considering actual genotypes (Fig. 1 g, h), compared to using the gRNA as a proxy for genotype. A small number of cells (56) with stop codon mutations were still assigned to the not LoF cluster. However, when considering barcode groups consisting of > 3 cells, all stop codon mutations are in the LoF cluster (Additional file 1 : Fig. S2i). This highlights the benefits of analyzing the data in terms of barcode groups and suggests the incorrectly classified single cells are likely due to misassignment of barcodes in the 10× experiment.
Notably, missense mutations present for a barcode group in the not LoF cluster can be unambiguously defined as mutations that do not result in a loss of JAK1 function, rather than as gRNAs that do not edit, and can therefore be used to assign these variants of unknown significance (VUS) as true benign mutations. Similarities between the transcriptomic changes resulting from the different mutations separated barcodes into two main groups (Fig. 2 a): those containing predominantly LoF mutations (stop codon, splice variant, some missense) or not LoF (WT, synonymous, some missense). We used diffusion maps [ 28 ] to identify trajectories in the data (Fig. 2 b), and the first diffusion component accurately reflected the trajectory between not LoF and LoF mutations (diffusion score, the “Methods” section). We confirmed this by comparison with JAK-STAT pathway activity (Additional file 1 : Fig. S3a) [ 29 ]. The transcriptomic changes caused by the mutations split into two main clusters when ordered by diffusion score (Additional file 1 : Fig. S3b) and correlated well with the differential expression of JAK-STAT pathway genes (Additional file 1 : Fig. S3c). WT and synonymous variants had very low diffusion scores, stop codon or splice variants had high diffusion scores, and missense mutations were bimodally distributed between the two (Fig. 2 c).

Barcodes with genotyped homozygous stop codon mutations were universally (100%, 12 out of 12) classified with high diffusion scores, and all 77 barcodes with WT genotypes except one (> 98%) were classified with low diffusion scores (Fig. 2 d). Of the 15 barcodes called as homozygous splice variants, 93% (14) had high diffusion scores. Therefore, out of the 104 barcodes that were called with either a WT or a definite LoF genotype (stop/splice), 26 were true positives (definite LoF genotype with a high diffusion score), 1 was a false positive (called as a splice variant, but with a low diffusion score), and 1 was a false negative (precision 96%, recall 96%). This shows that our genotyping pipeline using > 3 cells per barcode is highly effective and has a very low rate of incorrect genotype calls. This compares to 28 predicted stop/splice variants with 8 false positives and 4 false negatives (precision 78%, recall 88%) when using the predicted genotypes.

The benefit of genotyping is illustrated by two examples where the same gRNA was associated with two different barcodes and the genotypes of these barcodes differed (Fig. 2 e). In the first, both barcodes had a homozygous edit at chromosome 1 position 64834625, but only the barcode that was additionally edited at position 64834624 showed a LoF phenotype, indicating that this mutation, or the combination of the two together, was causing the loss of JAK1 function. In the second example, only the homozygous edit at position 64857751 showed a LoF phenotype, whereas the heterozygous edit did not. Taken together, these observations demonstrate the utility of genotyping editing events to unambiguously interpret variant function, even in a screen optimized for very high base editing activity.

Some of the missense mutations had a diffusion score between the WT and LoF values, suggesting an intermediate phenotype (Fig. 2 c, f). In our previous screen, these gRNAs had strong effects in the proliferation assay (prolif.) but weaker effects on PD-L1 and MHC-I protein expression (FACS, Fig. 2 f), suggesting they could be separation-of-function (SoF) variants [ 23 ].
Closer analysis revealed that cells carrying these barcodes (and thus derived from the same parent cell) were distributed across the diffusion score range. This shows that for these variants there is a stochastic response to IFN-γ, with some cells responding as normal, others not at all, and some with an intermediate effect. This may help to explain the difference between their long-term effects on cell growth (prolif., Fig. 2 f) and their immediate effects on protein expression (FACS, Fig. 2 f), since growth integrates across time, whereas protein expression is a snapshot of the immediate response. SoF variants showed differential expression of IRF9, a key regulator of IFN-γ signaling, which may control the threshold of the transcriptional response between WT, SoF, and LoF (Additional file 1 : Fig. S3d). These observations would not be possible without genotyping and single-cell analysis.
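As a worked check of the precision and recall figures reported above for barcode-level genotype calls, with the counts taken directly from the text:

```python
# Worked check of the reported precision and recall for barcode-level calls
# (positives = barcodes called stop/splice; LoF phenotype = high diffusion score).
tp, fp, fn = 26, 1, 1            # counts as given in the text
precision = tp / (tp + fp)       # 26/27
recall = tp / (tp + fn)          # 26/27
print(f"precision = {precision:.0%}, recall = {recall:.0%}")  # 96%, 96%
```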
Conclusion
In summary, we present scSNV-seq, a technique that allows the direct linkage of genotype to whole-transcriptome readout in high-throughput single-cell perturbation screens. We demonstrate its effectiveness in a base editor mutagenesis screen across JAK1 to classify LoF missense variants. Importantly, it allows us to identify benign variants or variants with an intermediate phenotype (Additional file 2 : Table S1), which would otherwise not be possible. The methodology is applicable to other methods of introducing variation, such as HDR, prime editing [ 30 ], or saturation genome editing [ 10 ], since it does not rely on gRNA identity to infer genotype. Our method relies on lentiviral barcoding of dividing cells and so cannot be applied to tissue samples or post-mitotic cell types. However, due to the single-cell readout, it can be applied in a cell-type- and state-specific manner and to primary cells such as T cells, B cells, hematopoietic stem cells, keratinocytes, and fibroblasts, which can be transduced and expanded but where the inability to clone cells normally prevents analysis of engineered SNVs. The rich phenotypic readout of the whole transcriptome for each perturbation classifies variants based on transcriptional signatures, enabling comparison to perturbations in disease. We believe scSNV-seq will be invaluable for screening the functional significance and downstream effects of the growing list of coding and non-coding variants identified from human genetics analyses such as GWAS and cancer genome sequencing.
CRISPR screens with single-cell transcriptomic readouts are a valuable tool to understand the effect of genetic perturbations including single nucleotide variants (SNVs) associated with diseases. Interpretation of these data is currently limited as genotypes cannot be accurately inferred from guide RNA identity alone. scSNV-seq overcomes this limitation by coupling single-cell genotyping and transcriptomics of the same cells enabling accurate and high-throughput screening of SNVs. Analysis of variants across the JAK1 gene with scSNV-seq demonstrates the importance of determining the precise genetic perturbation and accurately classifies clinically observed missense variants into three functional categories: benign, loss of function, and separation of function.

Supplementary Information
The online version contains supplementary material available at 10.1186/s13059-024-03169-y.
Acknowledgements
The authors would like to thank the DNA sequencing and flow cytometry facilities within Scientific Operations at the Wellcome Sanger Institute.

Peer review information
Veronique van den Berghe was the primary editor of this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.

Review history
The review history is available as Additional file 5 .

Authors’ contributions
S.E.C., A.R.B., and M.A.C. conceived the project. M.E.S. and Q.W. performed the computational analysis. S.E.C., M.A.C., and A.M.G. performed the wet lab experiments. A.R.B., M.J.G., and J.C.M. supervised the project. S.E.C., M.A.C., M.E.S., and A.R.B. drafted the manuscript with contributions from other authors.

Author’s Twitter handle
@mattcoelho3 (Matthew A. Coelho).

Funding
This research was funded by the Wellcome Trust Grant 206194 and Open Targets (OTAR2061). M.E.S. is supported by the Wellcome Trust (220442/Z/20/Z). Schematics were created with BioRender.com. For the purpose of open access, the author has applied a CC BY public copyright license to any author-accepted manuscript version arising from this submission.

Availability of data and materials
The sequencing data sets supporting the conclusions of this article are available in the European Nucleotide Archive (ENA) [ 38 ] repository with the accession ERP133355. Sample information and accession numbers are described in Additional file 4 : Table S3. Code is available on GitHub [ 39 ] ( https://github.com/MarioniLab/scSNV-seq ) under an open-source GPL-3.0 license; processed data files and the version of the source code used for the manuscript are on Zenodo [ 40 ] (10.5281/zenodo.10418435) under a CC-BY-4.0 license.

Declarations
Ethics approval and consent to participate: Not applicable. Consent for publication: Not applicable.

Competing interests
M.J.G. has received research grants from AstraZeneca, GlaxoSmithKline, and Astex Pharmaceuticals and is a founder and advisor for Mosaic Therapeutics. J.C.M. has been an employee of Genentech since September 2022. A.R.B. is a founder and consultant for EnsoCell since August 2023.
Introduction
Enteral nutrition (EN) is defined as the provision of nutrients via a tube directly to the gastrointestinal tract, to patients who are incapable of fulfilling their nutritional needs orally [ 1 ]. It is a standardised process that incorporates a multidisciplinary team (MDT) of physician, nurse, dietitian, and pharmacist [ 2 – 4 ]. This process involves comprehensive nutritional assessment, accurate prescription, proper administration, and frequent monitoring and re-evaluation according to the patient’s condition [ 5 , 6 ]. The management process of EN typically involves the interaction between a team of health care practitioners consisting of physician, nurse specialist, dietitian, and pharmacist [ 7 ]. In this MDT, each member has varying responsibilities according to their specialities and practice. Effective communication and collaborative efforts among the MDT are crucial for achieving optimum health outcomes [ 3 ]. The physician’s role relies on the overall understanding of the patient’s medical condition, diagnostics, prognostics, and medical treatment, as well as coordinating the medical team [ 8 , 9 ]. Dietitians play a central part in the provision of EN to patients, as their role starts from the development and implementation of EN hospital protocols [ 8 , 9 ]. In addition, dietitians are responsible for the assessment of the patient’s nutritional status and needs, as well as recommending and overseeing an EN feeding plan that meets the patient’s needs [ 8 , 9 ]. Nurses, being the closest to the patients, have crucial responsibilities and play a major role in feeding delivery along with other medical treatments [ 8 , 10 ]. Their responsibilities begin with nutritionally screening patients, inserting and assessing the placement of the feeding tube, ensuring proper handling of the feeding formula, and performing hygiene and care procedures such as water flushing and sputum suction [ 11 ]. During the process of oral care with sputum suction, EN should be withheld to prevent choking. The nurses are the key persons responsible for re-starting the feed promptly after finishing this process to avoid the risk of underfeeding [ 11 ]. Furthermore, nurses are also responsible for implementing the EN plan through feeding the patient, assuring adequate nutritional delivery, and frequently monitoring the patient’s tolerance [ 10 ]. Al-Sayaghi et al. indicated that critical care nurses demonstrated a low level of knowledge and responsibility regarding EN [ 12 ]. This highlights the urgent need for studies to understand nurses’ perceptions about EN and the barriers to achieving adequate feeding. In addition, nurses must be engaged as part of the multidisciplinary nutritional support team with clear roles and responsibilities. Previously published work has reported that around 60% of the prescribed caloric intake for patients in the ICU is not delivered via the enteral route due to avoidable barriers, which consequently result in either failure or delay in achieving optimal nutritional goals [ 13 – 16 ]. Since nurses are continuously involved with patients’ care plans, it is crucial to understand and investigate their perceptions regarding the barriers to EN delivery in the ICU [ 13 – 16 ].
Thus, this study was conducted to investigate the perceptions of nurses working in adult and paediatric intensive care settings regarding EN barriers, to compare the perceptions of nurses working in adult ICUs with those of nurses working in paediatric ICUs, and to identify the factors that influenced their perceptions of these barriers.
Methods

Study design, participants and data collection
The design of the current study was cross-sectional; all nurses currently practicing in adult or paediatric ICUs in governmental or private hospitals in Saudi Arabia were invited to participate in the study. All other healthcare professionals were excluded. The data were collected through an online survey between 15 October 2021 and January 2022. The survey was initially promoted on various social media platforms (e.g., WhatsApp and Twitter). Then, chain-referral sampling was performed, where nurses known to the investigators from all regions of the kingdom were contacted to achieve an adequate convenience sample of nurses working in intensive care settings.

Assessment of EN barriers as perceived by the nurses
The tool used to assess the perception of the nurses regarding the EN barriers in intensive care settings was adapted from Cahill et al. (2016) [ 17 ]. Two experts in the field of nutrition support modified and validated the tool. A detailed description of the tool used in this study can be found in Zaher et al. (2022) [ 18 ]. In brief, two questions concerning the demographic characteristics of the participants were added, and some questions were rewritten to improve clarity. The survey was finally reviewed and adjusted based on the feedback received from the nurses involved in the pilot testing of the survey. The data collected from the nurses who participated in the pilot testing were excluded from the current analysis. The survey consisted of two parts: the first part collected information about the demographics of the participants, and the second part included 24 potential EN barriers, which the nurses were asked to rate in importance as EN barriers on a scale from 1 (not at all important), 2 (slightly important), 3 (important), 4 (fairly important) to 5 (very important). The survey included five domains to categorise the EN barriers: the first domain included two questions about guidelines and recommendations, the second domain included seven questions about EN delivery to patients, the third domain included three questions about intensive care resources, the fourth domain included seven questions about the attitudes and behaviours of critical care providers, and the fifth domain included five questions about dietitian resources in ICUs. The result of the Cronbach’s alpha test indicated a good internal reliability of the tool (0.944) (illustrated in the code sketch below).

Statistical analysis
Data were analysed using the Statistical Package for the Social Sciences version 28 (SPSS 28, SPSS Inc., Chicago, IL, USA). To assess the normality of the continuous variables, we used the Shapiro–Wilk test. Categorical data were presented as frequencies and percentages. Continuous variables were presented as mean ± standard deviation (SD). The mean (± SD) was calculated to determine the most and least important EN barriers as perceived by the nurses included in the study. A total Likert rating score of the 24 EN barriers included in the survey was calculated for each participant to be used in further statistical tests. Mann–Whitney U tests and Kruskal–Wallis tests were performed to compare the perceptions of the nurses regarding the EN barriers based on the participants’ characteristics, such as their work settings, sex, region, and educational levels.
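Cronbach's alpha, referenced above as the reliability measure, follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The study used SPSS; the Python sketch below with simulated placeholder responses is purely illustrative of the computation.

```python
# Illustrative computation of Cronbach's alpha for k Likert items:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
# The study used SPSS; the simulated responses below are placeholders only
# (uncorrelated random answers give a low alpha, unlike the reported 0.944).
import numpy as np

def cronbach_alpha(responses):          # responses: (participants, items)
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)
    total_var = responses.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(136, 24))   # 136 nurses, 24 items, 1-5 scale
print(round(cronbach_alpha(responses), 3))
```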
A stepwise linear regression analysis was performed to identify factors that influenced the perceptions of the nurses who participated in the study regarding the EN barriers encountered in intensive care settings. The outcome variable in the regression model was the total Likert rating score of the 24 EN barriers. Multiple independent variables were added to the model, including gender, educational level, work setting, years of experience, type of health care facility, and region.
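The stepwise procedure itself was run in SPSS; as an illustration of the general idea, a simple forward-selection sketch in Python/statsmodels, with hypothetical variable names, could look like the following.

```python
# Illustrative forward stepwise selection with statsmodels; the study itself
# used SPSS, and variable names here are hypothetical. X should contain
# dummy-coded predictors (gender, work setting, experience, education,
# facility type, region); y is the total Likert rating score.
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(X: pd.DataFrame, y: pd.Series, alpha_in: float = 0.05):
    selected, remaining = [], list(X.columns)
    while remaining:
        # p-value of each candidate when added to the current model
        pvals = {
            c: sm.OLS(y, sm.add_constant(X[selected + [c]])).fit().pvalues[c]
            for c in remaining
        }
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha_in:
            break                       # no remaining candidate enters the model
        selected.append(best)
        remaining.remove(best)
    return sm.OLS(y, sm.add_constant(X[selected])).fit() if selected else None
```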
Results
A total of 136 nurses working in adult and paediatric ICUs across Saudi Arabia participated in this study. Most of the participants were females (n = 103, 75%). Most of the responses were received from the Western region, and the participants’ mean years of work experience in intensive care settings was 4.1 ± 3.06 years. The characteristics of the study participants are presented in Table 1.

We calculated the mean (± SD) and the median (IQR) to determine the most and least important barriers to EN as perceived by the nurses working in adult and paediatric ICUs. The results showed that the most important barrier was “Frequent displacement of feeding tube, requiring reinsertion” [3.29 ± 1.28, 3 (2–5)], followed by “Delays in initiating motility agents in patients not tolerating enteral nutrition” [3.27 ± 1.24, 3 (2–4.75)], both of which were included in the “delivery of EN” domain. The third most important barrier was “Enteral formula not available on the unit” [3.27 ± 1.24, 3 (2–4.75)], which was included in the “ICU/PICU resources” domain. On the other hand, the least important barrier as reported by the nurses was “Non-ICU physicians (i.e., surgeons, gastroenterologists) requesting patients not be fed enterally” [2.94 ± 1.24, 3 (2–4)], followed by “Nurses failing to progress feeds as per the feeding protocol” [3.00 ± 1.25, 3 (2–4)]; both barriers were included in the “critical care providers attitude and behaviour” domain. A Kruskal–Wallis test was then performed to compare the difference in Likert rating scores across the 5 domains, and no statistical difference was recorded between the scores of the 5 domains (P-value = 0.284) (Table 2).

A series of Mann–Whitney U tests was performed to compare the perceptions of the nurses who participated in the study regarding the importance of each item as a barrier to EN based on their work setting (adult or paediatric ICU). No significant differences were recorded between the responses of the nurses working in adult ICUs and those working in PICUs, except for one item, “Nutrition therapy not routinely discussed on patient care rounds” (P-value = 0.038) (Table 2). We then compared the responses of the study participants regarding the importance of each item as a barrier to EN based on their gender. Our results showed that the responses of the participants statistically varied based on their gender for the following items: “Lack of feeding protocol in place to guide the initiation and progression of enteral nutrition in your institution” (P-value = 0.029), “Delay in physician ordering initiation of enteral nutrition” (P-value = 0.022), “Delays in initiating motility agents in patients not tolerating enteral nutrition” (P-value = 0.044), “In resuscitated, hemodynamically stable patients, other aspects of patient care still take priority over nutrition” (P-value = 0.016), “Nutrition therapy not routinely discussed on patient care rounds” (P-value = 0.033), “Non-ICU physicians (i.e., surgeons, gastroenterologists) requesting patients not be fed enterally” (P-value = 0.008), and “Waiting for the dietitian to assess the patient” (P-value = 0.003) (Table 3). A Kruskal–Wallis test was performed to compare the perceptions of the nurses regarding the importance of each item as a barrier to EN according to the region in which they are based.
The results showed that the responses of the participants statistically varied between regions for the following items: “Not enough nursing staff to deliver adequate nutrition” (P-value = 0.034) and “Enteral formula not available on the unit” (P-value = 0.037), both items included in the ICU/PICU resources domain. A statistical difference was also recorded between the responses of the participants for the item “General belief among ICU team that provision of adequate nutrition does not impact on patient outcome” (P-value = 0.044). The results also showed that the responses of the participants statistically varied based on their educational level for the following items: “Current scientific evidence supporting some nutrition interventions is inadequate to inform practice” (P-value = 0.01), “Delay in physician ordering initiation of enteral nutrition” (P-value = 0.025), “Delays in initiating motility agents in patients not tolerating enteral nutrition” (P-value = 0.027), “Dietitian not routinely present on weekday patient rounds” (P-value = 0.031), and “There is not enough time dedicated to education and training on how to optimally feed patients” (P-value = 0.021).

The total Likert rating scores of the 24 items were calculated for each participant. The results showed that participants had a mean ± SD Likert rating score of 76.44 ± 20.10. A stepwise linear regression analysis was performed to identify factors influencing the nurses’ perceptions regarding EN barriers in intensive care settings. In the regression model, the total Likert rating score of the 24 items was used as the outcome variable, while the independent variables in the model were the characteristics of the participants, including gender, work settings, years of experience, educational level, the region of the kingdom where they practice, and the type of health care facility they worked in. The regression analysis indicated that gender was the only variable that statistically influenced the total Likert rating scores of the participants (r = -0.213, p-value = 0.013). The female participants appeared to have higher Likert rating scores compared to male participants (Table 4). In the sub-analysis of the cohort working in adult intensive care units, the region statistically influenced the total Likert rating scores of the participants (r = -0.275, p-value = 0.012) (Table 4).
Discussion
The present study aimed to investigate nurses’ perceptions toward EN barriers in adult and paediatric intensive care settings in Saudi Arabia and to explore the factors influencing their perceptions. Most of the nurses included in this study were females. The most important reported barriers were those associated with EN delivery and with resource availability in critical care settings, while the least important barriers were those related to critical care providers’ attitudes and behaviours. However, the absence of routine discussion of nutritional therapy during ward rounds was the only barrier that was significantly different between nurses working in adult ICUs and those working in PICUs. Moreover, the results of the univariate analysis showed that the nurses’ responses to some barriers statistically varied according to sociodemographic characteristics. Overall, findings from the multiple linear regression analysis showed that gender was the only variable that statistically influenced the overall rating scores of the nurses’ perception of EN barriers. Female nurses appeared to have higher rating scores of perceived EN barriers than males. The geographical region of the workplace also influenced the total rating scores of perceived barriers, particularly for nurses practicing in adult ICUs.

Identifying barriers related to EN delivery is essential to optimize nursing practice in critical care settings, which will help in achieving patients’ nutrient requirements and caloric targets. In this study, one of the main barriers indicated by the nurses is the issue of frequent tube displacement and reinsertion, which could lead to prolonged periods of feeding interruption. According to recent studies, the most frequently reported causes of EN interruptions in patients admitted to ICU settings are diagnostic tests (i.e., radiological procedures and gastric residual volume (GRV) evaluation) and problems with feeding tubes [ 19 , 20 ]. An increased number of EN disruption episodes was shown to be associated with a higher mortality rate [ 20 ]. The issue of delaying the initiation of motility medications in patients not tolerating EN was also identified as a main barrier in this study. This barrier was ranked as one of the top ten EN barriers by an earlier investigation of ICU nurses working in North American countries [ 21 ]. Gastrointestinal dysmotility is common among patients in the ICU [ 22 ], which can make EN feeding difficult to deliver. However, earlier administration of motility agents is recommended for effective EN therapy in the ICU [ 23 ]. Another identified barrier in the current study that can have a significant impact on patient care is the unavailability of appropriate EN formulations in the unit. Findings from similar studies reported that resource availability, in terms of formula availability, is a commonly perceived barrier to EN practice by nurses [ 24 ]. Such barriers can contribute to suboptimal delivery of EN and hinder patient recovery. Overall, critically ill patients in ICUs are at a significant risk of acquiring malnutrition, which is linked to a worsened clinical prognosis [ 25 ]. Regarding the critical care providers’ attitudes and behaviours toward EN practice, two barriers related to this domain were reported in this study as least important. Institutional-related factors can play a role in enhancing EN practices in the ICU.
A supportive ICU workplace that values and prioritizes nutritional care can positively influence nurses’ attitudes and behaviours toward EN [ 26 ]. This may include having established protocols and guidelines, promoting nursing education regarding EN’s effect on patient outcomes, and supporting interdisciplinary collaboration to facilitate consistent and evidence-based EN practices [ 26 ]. In contrast, inconsistent practices, such as non-ICU physicians requesting patients not to be fed via EN, can hinder nurses’ ability to provide optimal EN care. On the other hand, there is a degree of variation in nurses’ perspectives toward some EN barriers between adult ICU and PICU nurses. In this study, the absence of routine discussion of nutritional therapy during ward rounds was the only barrier that was significantly different between nurses working in adult ICUs and those working in PICUs. In general, the patient population in the PICU is considered heterogeneous; therefore, it is recommended to implement individualized nutrition support that is based on the patient’s baseline characteristics and requirements [ 27 ]. Indeed, the level of interprofessional collaboration and communication regarding EN practices could differ between both settings. Based on findings from an international nursing survey, only a few PICUs have an established multidisciplinary nutritional support team [ 28 ]. The availability of a nutritional support team may benefit nurses’ education in nutrition and help facilitate comprehensive nutritional guidance and decision-making [ 28 ]. Nurses’ perceived barriers to EN practice in critical care settings are considered multifactorial and could vary across different hospitals; thus, understanding the factors associated with these barriers is highly important. In this study, gender was found to influence nurses’ perceptions of EN barriers. Traditionally, nursing has been a female-dominated profession, which explains why most of the participants who completed the survey were females. However, the small number of male nurses (n = 32) included in the study is still considered statistically acceptable to measure the impact of gender differences on the perception score of EN barriers. Previous studies have shown no difference between genders regarding the overall score of perceived barriers [ 29 , 30 ]. On the contrary, the present study reported that female nurses perceived more EN barriers than male nurses. While both male and female nurses might acquire the same level of knowledge and skills, societal expectations may result in male nurses being perceived as more confident in managing practice-related barriers and consequently perceiving fewer barriers [ 31 ]. Nevertheless, the awareness of female nurses of the perceived EN barriers observed in the present study might be developed through their clinical experience, frequent application of evidence-based practice, or continuous involvement in lifelong learning. According to Silberman et al., the provision of an EN continuing education program led to a considerable improvement in the knowledge of EN practice among ICU nurses [ 32 ]. It might also be attributed to the fact that nutrition practice is usually female-dominated, and therefore female nurses might have more interest in and awareness of nutrition-related practice [ 33 ]. Another demographic factor influencing nurses’ perception of EN barriers was the geographical region of nurses’ workplaces.
Nurses working in smaller regions of Saudi Arabia (i.e., the Southern and Northern regions) perceived more EN barriers relating to the availability of staff for EN delivery and the availability of EN formula. This is consistent with the national trend of nursing shortage, which is considered one of the challenges that Saudi Arabia is experiencing [ 34 ]. Although specific regional challenges might contribute to the variation in perceived EN barriers, findings from regional studies are missing. Nursing practice in Saudi Arabia has been facing several challenges, including the shortage of nurses in smaller regions [ 35 ]. Additionally, the limited availability of well-established tertiary hospitals with sufficient medical resources, such as enteral formulas, in smaller regions of Saudi Arabia could result in more challenges faced by nurses working in critical care settings. Darawad et al. found that nurses in large educational hospitals indicated fewer barriers than nurses from private hospitals [ 29 ]. Thus, institutional-related factors could influence nurses’ perception of EN barriers. Currently, the Saudi Vision 2030 program is having an impact on advancing the nursing profession across all regions via the ongoing changes and transformation in the country’s healthcare system [ 36 ]. To our knowledge, this study is the first to investigate nurses’ perceptions regarding EN barriers in critical care settings in Saudi Arabia. It is also the first study to explore the difference between the perceptions of nurses working in adult ICUs and those working in PICUs. Because nursing in Saudi Arabia is considered a developing healthcare profession with a high shortage rate of local nurses [ 37 ], previous reports concerning nursing practice in critical care settings focused on investigating other, more apparent barriers than those concerning EN, including nurses’ perceptions regarding pain management [ 38 ], pressure injury prevention [ 39 ], shift handover and communication practice [ 40 ], and patient advocacy [ 41 ]. However, the most reported EN barriers in this study (i.e., barriers related to EN delivery and availability of formula) were relatively aligned with the international trend of nurses’ perceptions of EN barriers in the ICU [ 42 , 43 ]. Nevertheless, a few limitations were associated with the present study. Even though the study included nurses from several healthcare sectors in Saudi Arabia, the small sample size limits the generalizability of its findings; future research should use a larger sample size. Also, the high percentage of females and of nurses residing in the Western region who responded to the survey might further limit the generalizability of the results. Therefore, the study findings may not accurately represent the diverse perspectives of nurses working in critical care settings in Saudi Arabia. Additionally, the use of convenience sampling via social media platforms could be a potential source of selection bias; however, according to national statistics, it is estimated that more than 80% of the Saudi population have internet access and use social media [ 44 , 45 ]. Another limitation relates to the study design, which was only able to assess nurses’ perceptions of EN barriers without evaluating associated patient outcomes.
Future studies in this area should try to include patient outcomes (e.g., length of ICU stay, EN complications, and whether target nutritional requirements are met) and correlate them with nurses' perceived EN barriers. This will help identify the key EN barriers that nursing practice in critical care settings most needs to address.
Conclusion In conclusion, numerous barriers exist in the nursing practice of EN in adult and paediatric critical care settings. Such barriers can impede the effective implementation and delivery of EN, compromising patient outcomes. It is crucial for the healthcare workforce in Saudi Arabia to address these barriers by providing ongoing education and training to nurses, improving staffing levels of local nurses across all regions, improving gender distribution, and ensuring a supportive hospital environment (e.g., supporting interdisciplinary collaboration) for optimal nutritional care. This will enable nurses to overcome these barriers and deliver optimal EN to critically ill patients. While EN is a crucial aspect of nursing care in both adult ICU and PICU settings, there are distinct differences in the barriers encountered by nurses. Understanding such differences is important for targeting future strategies at the units that need the most support in prioritizing EN delivery. Moreover, sociodemographic factors can influence the nursing practice of EN. By recognizing and addressing these factors, healthcare organizations across Saudi Arabia can create an environment that facilitates the effective implementation of EN protocols in the ICU.
Background The management process of Enteral Nutrition (EN) typically involves interaction between a team of healthcare practitioners. Nurses, being the closest to patients, have crucial responsibilities and play a major role in feeding delivery alongside other medical treatments. This study was conducted to investigate the perceptions of nurses working in adult and paediatric Intensive Care Units (ICUs) regarding EN barriers and to identify the factors that influenced their perceptions. Methods The data in this cross-sectional study were collected via an online survey between 15 October 2021 and January 2022. All nurses working in adult or paediatric ICUs across Saudi Arabia were eligible to participate. The tool used for data collection was adapted from Cahill et al. (2016) and then reviewed and modified by the researchers. The survey collected information about the demographics of the nurses and included 24 potential EN barriers, whose importance the participants were asked to rate on a scale from 1 to 5. Descriptive statistics were performed to describe the variables, and univariate analyses were performed to compare the perceptions of the nurses regarding the EN barriers based on their characteristics, followed by stepwise linear regression analysis. Results A total of 136 nurses working in adult and paediatric ICUs were included in this study. The results showed that the most important barriers as perceived by the nurses were “Frequent displacement of feeding tube, requiring reinsertion” [3.29 ± 1.28], “Delays in initiating motility agents in patients not tolerating enteral nutrition” [3.27 ± 1.24] and “Enteral formula not available on the unit” [3.27 ± 1.24]. Our results showed that the responses of the participants varied statistically based on their work setting, gender, region, and educational level for some items in the survey (P-value ≤ 0.05). In the regression analysis, gender was the only variable that statistically influenced the total Likert rating scores of the participants ( r = -0.213, p -value = 0.013). Conclusion This study identified several barriers that exist in the nursing practice of EN in critical care settings. There are distinct differences in nurses' perceptions of these barriers based on their characteristics. Understanding such differences is important for targeting future strategies at the units that need the most support in prioritizing EN delivery. Keywords
Acknowledgements Grateful acknowledgment to the nurses who participated in the study for completing the survey. Authors’ contributions SZ contributed to the conception and design of the research, performed the statistical analysis and drafted the manuscript. FS contributed to data collection and drafting of the manuscript, and SA supervised the overall work and drafted the manuscript. All authors revised and approved the final draft of the manuscript. Funding None to declare. Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate The study was conducted in accordance with the Declaration of Helsinki and approved by the ethics committee at Taibah University (Certificate no. 2020/57/204/CLN). A participant information sheet was included on the first page of the online survey. Participants' informed consent was obtained by including a mandatory question confirming that they agreed to participate in the study. Informed consent for publication was also obtained from participants. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Nurs. 2024 Jan 15; 23:42
oa_package/3f/e6/PMC10789044.tar.gz
PMC10789045
0
Background From March 2020 onwards, many people infected with SARS-CoV-2 who were never hospitalised during the acute phase of the disease presented with persisting symptoms three or more months after symptom onset. At the beginning of the pandemic, little attention was paid to mild or moderate symptoms. There was only a single story about what COVID-19 was: a potentially deadly respiratory disease [ 1 ]. People with mild or moderate COVID-19 who developed persistent symptoms were invisible in the eyes of the health system and their immediate surroundings. They gathered through social media in a number of countries to raise awareness about their condition in the scientific community (which was sceptical about its existence) and began to produce knowledge about it [ 2 ] before the first scientific study was published [ 3 ]. Thus, the first studies were created based on self-reported data. This condition, referred to as Long COVID by patients [ 4 ] and renamed Post-COVID Condition by the World Health Organisation (WHO) [ 5 ], has been estimated to affect 10–50% of people infected with SARS-CoV-2, depending on the initial clinical spectrum of infection [ 6 – 8 ]. Long COVID has been described as a multisystemic condition [ 9 ] with many fluctuating symptoms at different levels of intensity over time, causing different levels of episodic (or long-term) impairment of a person's ability to do normal day-to-day activities [ 9 , 10 ]. Long COVID constitutes a long-term condition or evolution of COVID-19 independent of the severity of the acute disease [ 11 ]. However, the mechanisms underlying the persistence of symptoms are unknown; the main hypotheses under investigation are viral persistence, chronic inflammation with blood clotting, the existence of autoantibodies, microbiota dysbiosis, tissue damage and dysfunctional neurological signalling [ 12 – 18 ]. Other studies have found that low cortisol levels may be a biomarker for Long COVID [ 19 ]. Although many ongoing studies are trying to find a specific biomarker for Long COVID, as yet there is no consistent evidence available. Long COVID has been described as more prevalent in women than in men and at about middle age [ 2 , 9 , 20 , 21 ]. Specifically, some articles point out that Body Mass Index (BMI), female sex, increasing age and having comorbidities [ 22 ] are risk factors for Long COVID. Other studies report that the presence of five symptoms such as fatigue, headache, dyspnoea, hoarse voice and myalgia in the first week of the disease can also be a risk factor for Long COVID [ 20 , 22 , 23 ]. As some studies point out, however, gender differences may not only be related to differences in the prevalence and symptomatology of the condition but also to broader social and cultural factors that affect how individuals are perceived and treated by others [ 24 ]. Some studies have described symptoms, categorised them in domains, grouped them in clusters and then observed their evolution over time, suggesting the existence of different phenotypes which can help to identify the mechanisms involved and also different care needs [ 21 , 25 – 27 ]. Long COVID symptoms can be identified through symptoms recorded by health professionals in the electronic health record (EHR) or through symptoms self-reported by people affected by Long COVID via public participation, as this study does [ 28 ]. Some studies have identified different trajectories of the evolution of post-COVID-19 conditions.
For example, one study identified three trajectories: “high persistent symptoms,” “rapidly decreasing symptoms,” and “slowly decreasing symptoms” [ 29 ]. Another study found that COVID-19 symptoms persisted for 1 year after illness onset, even in some individuals with mild disease, and that female sex and obesity were associated with symptom persistence [ 30 ]. Several studies have thus identified the evolution of symptoms and trajectories over time; however, little is known about symptom evolution from the very onset of the disease, since access to this information is only possible in studies conducted since the beginning of the SARS-CoV-2 pandemic. Thus, much has been described about the symptoms of Long COVID, but there is still much to learn about the evolution of persistent COVID-19 symptoms, also known as post-COVID conditions (PCCs) or Long COVID. This study aims to add knowledge about Long COVID symptoms and their evolution over time and to highlight the co-participatory research work between patients and primary care professionals.
Methods Design This is a retrospective cohort study of adults. Study population The study was co-created with people belonging to the Long COVID group in Catalonia [ 31 ] and involved participants with Long COVID symptoms in Catalonia (Spain). Inclusion criteria were being ≥18 years old, living in Catalonia, having symptoms that lasted more than 3 months after suspected or confirmed (by a positive Polymerase Chain Reaction or Rapid Antigen Test) SARS-CoV-2 infection, and agreeing to participate and confirming availability to answer surveys. People who had been hospitalised in an ICU (Intensive Care Unit) were excluded. The 3-month inclusion criterion was based on the information provided by Greenhalgh et al. in August 2020 [ 32 ]. Recruitment was performed through members of the Catalan Long COVID group via social media (Twitter, blog, WhatsApp group) and by snowball sampling. The study was publicised through a webinar for primary care professionals (doctors, nurses, social workers) working for health providers in Catalonia to recruit more participants. Recruitment opened on 3rd December 2020 and closed on 30th June 2021; however, cases diagnosed during the first and second waves were also collected. People were asked to report their symptoms for the first 21 days from symptom onset (baseline), at 22–60 days and at ≥3 months from the initial diagnosis. These cut-off points were based on studies available in 2020 about the average time for recovery from mild COVID-19 and the cut-off point used by patient-led reports [ 33 ]. Data source This paper looks at the recruitment questionnaire of the study and the variables related to sociodemographic data, clinical data and symptoms, out of 40 variables included in the questionnaire that supply information about various domains (not included in this analysis) such as quality of life and use of the health system. The variables were collected by a self-reported questionnaire initially developed by affected people based on their own questions about their condition and finally worked out together with a primary care doctor and a research group from the Institut Universitari d’Investigació en Atenció Primària (IDIAPJGol). A group belonging to the Col·lectiu d’Afectades i Afectats persistents per COVID-19 a Catalunya [ 31 ] participated in the design of the study, and two of its members took part in the discussions of the results, sharing their experiences and points of view and enriching each part of the project. Data were hosted on the REDCap (Research Electronic Data Capture) platform, allowing participants to enter their data while retaining anonymity and protection. REDCap is a secure, web-based software platform designed to collect data for research studies, providing: 1) an intuitive interface for validated data capture; 2) audit trails for tracking data manipulation and export procedures; 3) automated export procedures for seamless data downloads to common statistical packages; and 4) procedures for data integration and interoperability with external sources [ 34 , 35 ]. Variables The main variable was symptoms. In total, 117 symptoms were collected; their attributes were YES/NO.
Symptoms were gathered by systems, creating a new variable for each system: dermatological, ophthalmological, urological, sexually related, menstruation related, general (including fatigue and fever), rheumatologic, neurological (including headache and insomnia), digestive, gynaecological, neurocognitive, cardiac, respiratory, upper airway, ear, nose, and throat (ENT), dysautonomic, olfactory, and altered taste and smell, based on clinical intuition. All of them were stratified by sex (women, men), age (18–34, 35–49, 50–64, ≥65 years) and wave. Information about which symptoms each system contains can be found in the Supplementary Data 1 (SD1). Co-variables were the date of self-reporting of the initial questionnaire and sociodemographic data such as sex, date of birth, weight, and height. Clinical data related to the date of symptom onset, type of symptoms, previous comorbidities, previous treatments and diagnostic tests were also included. The dates of the pandemic waves were gathered from Ministerio de Sanidad data published in the reports by the Red Nacional de Vigilancia Epidemiológica (RENAVE), establishing the following periods: first wave from 13th March 2020 to 21st June 2020, second wave from 22nd June 2020 to 6th December 2020 and third wave from 7th December 2020 to 14th March 2021 [ 36 ]. Symptom perception evolution was self-reported; its variable was created through six graphics and definitions constructed by the patients themselves, following the trends of the symptoms they had been experiencing and noting down in a diary since the beginning of these symptoms (Fig. S1). Data analysis An initial descriptive analysis of the included population was performed using mean (standard deviation) and median (interquartile range) for quantitative variables and percentages for categorical variables. To assess differences between sex and age groups, the t-test or the Mann-Whitney U test for quantitative variables and the Chi-squared test for qualitative variables were performed. A trend test was performed to assess differences between symptoms by system at the three cut-offs (Table S1 and Fig. S2). Stratified analysis for symptom length at < 21 days (baseline), 22–60 days and ≥ 3 months was performed. To identify clusters of symptoms by Long COVID system, a PCAmix [ 37 ] transformation of the data was performed to reduce dimensionality prior to applying fuzzy c-means. In this reduction, symptoms by system, age and sex of the individuals were considered, leaving a total of four dimensions after applying the Karlis-Saporta-Spinaki criterion [ 37 , 38 ]. Fuzzy c-means is a soft clustering technique that relates the symptoms by system of each individual at each time point (i.e., < 21 days, 22–60 days, and ≥ 3 months) to a different cluster through membership probability [ 39 ]. Having each participant’s time point assigned to a cluster made it possible to draw each individual’s course in terms of patterns of system affection due to Long COVID over time. The number of clusters (from 2 to 8) and the degree of fuzziness (from 1.1 to 1.8, in steps of 0.1) were chosen through validation indices calculated 100 times in order to account for the random nature of the clustering initialisation. Once the clusters had been identified, symptoms by system at each time point were assigned to the cluster for which they had the highest membership probability.
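To make the clustering step concrete, the following is a minimal R sketch of fuzzy c-means as described above, using the cmeans function from the e1071 package on simulated stand-in data; the matrix of PCAmix scores, the number of clusters and the fuzziness value are illustrative assumptions, since the study selected c (2–8) and m (1.1–1.8) through validation indices over 100 random initialisations.

library(e1071)
set.seed(42)
# Stand-in for the PCAmix scores: one row per participant-time point,
# four columns for the four retained dimensions (hypothetical data).
scores <- matrix(rnorm(300 * 4), ncol = 4)
# Fuzzy c-means with illustrative hyperparameters c = 5, m = 1.5.
fit <- cmeans(scores, centers = 5, m = 1.5, iter.max = 100)
# Soft membership probabilities per record, and the hard assignment used
# in the study (the cluster with the highest membership probability).
head(fit$membership)
assigned <- apply(fit$membership, 1, which.max)
table(assigned)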
The clusters were described through the calculation of observed/expected ratios (OE ratios), which compare the prevalence of each symptom system in a cluster with that in the study population. In addition, exclusivity was calculated as the percentage of records presenting each system in a cluster divided by the total number of records with that system in the study population. A system with an OE ratio > 2.5 or an exclusivity > 30% was considered characteristic of the cluster and used to name it. This approach has already been used in other studies [ 38 , 40 – 43 ]. R v4.0.2 was used to conduct the clustering analysis.
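As a rough illustration of these two descriptive measures, the sketch below computes the OE ratio and exclusivity for one symptom system across clusters in base R; the data frame, column names and values are hypothetical stand-ins for the study's records.

set.seed(1)
# One row per record (participant-time point), with a cluster label and a
# 0/1 flag for an example symptom system (hypothetical data).
records <- data.frame(
  cluster   = sample(paste0("C", 1:5), 500, replace = TRUE),
  olfactory = rbinom(500, 1, 0.3)
)
overall <- mean(records$olfactory)  # prevalence in the whole study population
by_cluster <- sapply(split(records, records$cluster), function(cl) {
  c(OE          = mean(cl$olfactory) / overall,                     # observed/expected
    exclusivity = 100 * sum(cl$olfactory) / sum(records$olfactory)) # % of all records with the system
})
round(by_cluster, 2)
# Systems with an OE ratio > 2.5 or exclusivity > 30% would be flagged as
# characteristic of a cluster and used to name it.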
Results From 1258 respondents, we excluded those who had less than 3 months from symptom onset to the enrolment date ( n = 298), those missing a symptoms variable ( n = 5) and those who reported an end date of symptoms less than 3 months from symptom onset ( n = 47) (Fig. 1 ). Finally, 905 respondents who had symptoms for 3 or more months from symptom onset (80.3% women, 19.0% men and 0.7% non-binary) were included. Median age was 46.0 years, 57.1% had comorbidities and 51.8% reported not taking any chronic treatment. Median Body Mass Index (BMI) was 24.2, a third of respondents were non-smokers (32.7%) and 37.1% did physical exercise 2–3 times a week before SARS-CoV-2 infection. 3.3% of participants (30 of the total 905; 4.6% of men and 3.02% of women) reported an end date of their symptoms, with a median of 184 days (p25–p75: 156.2–389.2 days) since symptom onset, slightly higher in men (184 days) than in women (183 days). Characteristics of the self-reported cohort are presented in Table 1 and characteristics of the “end date of symptoms” cohort are in Table S2. A total of 117 symptoms were collected, analysed by sex and period (Tables S3, S4, S5) and subsequently gathered into 18 groups of symptoms to facilitate the analysis. Analysing the symptoms individually by time period, we found that the median number of symptoms per participant was 24 at baseline, 20 at 22–60 days and 16 after 3 months, being higher in women than in men at all three cut-offs. Symptoms As shown in Fig. 2 , percentages of grouped symptoms are presented at baseline, 22–60 days and ≥ 3 months, showing that the frequency of most symptom groups decreased over time, some remained almost the same, such as dermatological, dysautonomic, urological and ENT, and others, such as menstrual, sexual, gynaecological, and neurocognitive, increased. General (including tiredness or fatigue, dysthermia, fever, general malaise, inappetence, weight loss, muscle pain, oral herpes) and neurologic symptoms were the most frequently reported by all respondents at all time cut-off points. By sex, at baseline the most frequent groups of symptoms in both sexes were the general ones (92.8% in women, 87.2% in men), followed by the neurologic ones in women (88%) and the respiratory ones in men (79%). The largest difference observed between sexes at all cut-offs was in dermatologic symptoms, followed by olfactory symptoms, both of which were more frequent in women than in men (Table S6). The evolution of symptoms by system is shown in Fig. 3 . By age, we found that olfactory symptoms were widely reported at baseline in the 18–34 years group (68.5%), more than at any other age, and that respondents aged 50–64 years reported a higher frequency of respiratory symptoms (83.5%) than respondents of other ages. The most frequent symptoms reported at ≥3 months at age 50–64 were neurological (81.5%), while the most common symptoms in other age groups were the general ones. A notable finding is that the frequency of general symptoms at the three cut-off points was lower in those aged ≥65 years than in any other age range (68.1%) (Table S7). By wave, general symptoms were the most reported for the three waves at the three time cut-off points (baseline, 22–60 days and ≥ 3 months), while the prevalence of neurocognitive symptoms increased between the first and second waves at all three time cut-off points.
Olfactory symptoms were more frequent in the second (58.9%) and third (63.4%) waves in the first 21 days from symptom onset, and their prevalence decreased by more than 10% over time in all waves at ≥3 months (Table S8). We analysed symptoms by microbiological diagnostic testing and found no significant differences in symptoms between participants who had a positive RAT or PCR and those who did not, except for olfactory alterations, which were more common during the first 21 days in those who had a positive test (63.6%) than in those who did not (50.8%), and taste and smell alterations (53.9% of those who had a positive test versus 39.3% of those who did not) (Table S9). The self-reported symptom evolution of participants was included in the questionnaire. Figure 4 shows the representation of participants' self-perceptions of symptom evolution over time. For both sexes and at all ages, the most frequent evolution was “Symptoms were of high intensity for the first 3-4 weeks and then persist, intensifying, in a cyclical way without disappearing completely” (36.8% in women and 31% in men) (Fig. 4D). The second most frequent evolution was the one with no identified pattern (20.7% in women and 22.0% in men) at any age (Fig. 4F), except for the 50–64 age group, where the second most frequent evolution was high symptom intensity followed by a progressive decrease in intensity until disappearance (Fig. 4E). Clusters of symptoms Five clusters were identified and named according to the systems most predominantly affected, based on the OE ratio and exclusivity of each cluster (Fig. 5 ): Multisystemic, Multisystemic – predominantly dysautonomic, Heterogeneous, Taste & smell, and Menstrual & sexual alterations. The explained variance and the loadings of the PCAmix transformation can be found in the supplementary data, Fig. S3. Multisystemic and Multisystemic – predominantly dysautonomic were the most common clusters, gathering 29.8 and 21.1% of the records during the follow-up period, respectively. Heterogeneous, a cluster in which no single system is predominantly affected, gathered 18.5% of the records. It was followed by Menstrual & sexual alterations (15.6%) and Taste & smell (15.0%). Taste & smell and Multisystemic were the most common clusters at the beginning of the condition, while Heterogeneous and Multisystemic were more common after 3 months (see Fig. 6 ). The prevalence of all clusters except Taste & smell and Multisystemic – predominantly dysautonomic increased over time (see Fig. 6 ). Some clusters were more stable over time than others. For example, 76.1% of participants who started with Menstrual & sexual alterations remained in this same cluster at > 60 days, while only 12% of participants in Taste & smell stayed in it, and 32 and 33.8% of them changed to Heterogeneous and Multisystemic, respectively. Participants gathered in Multisystemic mainly either remained in the same cluster (47.5%) or transitioned to Heterogeneous (29.2%). Similarly, participants with Multisystemic – predominantly dysautonomic affection mostly either transitioned to Multisystemic (33.8%) or remained in the same cluster (41.2%), while participants with a Heterogeneous affection either remained in it (43%) or transitioned to Multisystemic (35.1%) (see Fig. 7 ).
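The cluster-to-cluster transition percentages reported here can be tabulated with a simple row-normalised contingency table; the sketch below is an illustration on simulated assignments (the traj data frame and its columns are hypothetical), not the study's actual data.

set.seed(2)
clusters <- c("Multisystemic", "Dysautonomic", "Heterogeneous",
              "TasteSmell", "MenstrualSexual")
# One row per participant, with their cluster at two consecutive time points.
traj <- data.frame(early = sample(clusters, 400, replace = TRUE),
                   late  = sample(clusters, 400, replace = TRUE))
# Of participants starting in each cluster, the percentage remaining in it
# (diagonal) or transitioning to each other cluster (off-diagonal).
round(100 * prop.table(table(traj$early, traj$late), margin = 1), 1)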
Discussion This study presents the evolution of persistent COVID-19 symptoms at three time cut-off points in a cohort of 905 people in Catalonia. The key findings are as follows: 1) The pattern of symptom evolution observed at the three cut-off points (baseline, 22–60 days and ≥ 3 months) was a decrease in the frequency of many of the symptoms (digestive, upper respiratory tract, olfactory, ophthalmologic, respiratory, cardiac, rheumatologic, general, neurologic, dysautonomic and taste and smell). 2) Neurocognitive, dermatological, ENT, gynaecological, sexual and menstrual symptoms increased. 3) Urologic symptoms remained stable. 4) The most frequent clusters at baseline were Taste & smell and Multisystemic. 5) The most frequent cluster at ≥3 months was Multisystemic. Examining the progression of COVID-19 symptoms towards Long COVID enables pertinent clinical investigation for the management of individuals in care and provides insights into the clinical course of the condition. Our data are similar to those of other studies reviewed: they show a predominance of women, younger than the men, who had more comorbidities (the most frequent being allergy) and reported no previous treatments [ 9 , 21 , 44 – 47 ]. However, the women interviewed did not smoke and had an average “normal weight” BMI; these last two characteristics differ from those reported by other researchers [ 2 , 21 , 22 ]. Most of the women in our cohort caught the disease during the first wave (60.1%) and had a positive diagnostic test (PCR or RAT) at some point in its course (52.4%). Beyond 3 months of symptom onset, respondents reported a median of 16 symptoms, with a higher number in women (17 symptoms) than in men (12 symptoms). This is similar to data from other studies, which reported means of 13.76 and 55.9 symptoms per patient [ 2 , 21 , 48 ]. Some studies suggest that greater involvement in women may be related to a different expression of angiotensin converting enzyme 2 (ACE-2) or transmembrane protease serine 2 (TMPRSS2) receptors or to lower production of proinflammatory cytokines such as interleukin-6 (IL-6) in women after a viral infection [ 49 ]. However, the sex difference in our cohort might also reflect reporting behaviour: it is known that women may be more able to express symptoms, or allow themselves to express them, more than men, whereas men are more restricted in expressing symptoms in order to conform to hegemonic masculinity patterns [ 50 – 54 ]. We also consider that the higher frequency of women's participation in this study may have to do with the fact that women tend to look after their health more, as has been described in a number of studies [ 55 ]. The higher frequency of symptoms that are more difficult to bring to consultation, such as fatigue or brain fog, may mean that they are underestimated (gender bias) when treating women with persistent symptoms, in a way that would not occur when treating a man reporting the same symptoms. General symptoms predominated in our cohort in both sexes in the first 21 days and at the 22–60 day cut-off. Neurocognitive symptoms were more common in women. These results are similar to those reported in studies conducted in other countries [ 46 , 56 ]. After 3 months, general symptoms were the most frequent in women and neurological symptoms in men, but neurological symptoms were the second most frequent reported by women, most likely related to continued headache.
These results are close to those found by Ballering et al., who describe as core Long COVID symptoms those that in our cluster analysis correspond to the Multisystemic and Multisystemic – predominantly dysautonomic clusters [ 57 ]. Neurocognitive symptoms were predominant, especially in the 35–49 and 50–64 year age groups, along with general and neurologic symptoms, which is consistent with the studies reviewed [ 2 , 21 , 26 , 44 , 56 , 58 ]. Furthermore, differences between men and women in the frequency of dermatological symptoms are striking across all time cut-off points in the study, being more frequent in women. Some researchers point to a potential relationship between dermatological symptoms and systemic inflammation and between systemic inflammation and neurocognitive symptoms [ 59 ]. Olfactory symptoms were also more present in women than in men and persisted longer over time in this group, as reported in published meta-analyses [ 60 , 61 ]. Most of our cohort was infected in the first and second waves. It is noticeable that the frequency of olfactory symptoms during the first 21 days increased in the second and third waves compared to the first. A study following a cohort of individuals who experienced COVID-19 in Norway indicates that 16.6% of those infected during the first wave still had olfactory- and taste-related symptoms 1 year later [ 62 ]. Another study, including anosmia and dysosmia as part of the central neurological cluster, indicated that this neurological cluster was the largest in both the alpha and delta variants [ 27 ]. From a clinical point of view, it is important to know which clusters may be found in the acute phase of SARS-CoV-2 infection and which patterns those initial symptoms and clusters follow over a number of time cut-off points while they persist. This can enable health professionals to better suspect and identify a Long COVID condition in clinical appointments from symptom and cluster evolution at different moments in time. Learning about cluster trends might also help health systems to improve their delivery of care to Long COVID patients [ 63 ]. The clusters defined in our study are justified for two different reasons. Firstly, a mathematical validation was performed to choose the clustering hyperparameters: the number of clusters (from 2 to 8) and the degree of fuzziness (from 1.1 to 1.8, in steps of 0.1) were chosen through validation indices calculated 100 times in order to account for the random nature of the clustering initialisation. In addition, the most determinant conditions in each cluster were selected through the OE ratio and the exclusivity. Secondly, the mechanisms by which Long COVID manifests are multiple, complex, and often overlap. The clusters obtained, such as the multisystemic one, are conditioned by various pathophysiological mechanisms, including Mast Cell Activation Syndrome, Myalgic Encephalomyelitis/Chronic Fatigue Syndrome, and Postural Orthostatic Tachycardia Syndrome, which are reflected in the different clusters observed in this paper [ 63 ]. In our data, the most prevalent clusters observed were Multisystemic and Multisystemic – predominantly dysautonomic. We noted that these clusters stabilised over time, with either the second becoming part of the former or the former becoming part of the Heterogeneous group.
Furthermore, the transitions of clusters over time might suggest a tendency towards unspecificity or heterogeneity of symptoms, which could point to an improvement in symptoms or greater adaptation of people to the symptoms after a long period of experiencing them. Kenny et al. report that the most heterogeneous of the three clusters they found is the one that includes the most people and suggest that this heterogeneity may be a sign of recovery [ 26 ]. Contrary to our results, Whitaker et al. [ 64 ] identify two stable clusters over time, one of which includes fatigue, shortness of breath and chest pain or tightness, and the other with a high prevalence of smell and taste disturbances [ 64 ]. Cluster changes over time underscore Long COVID's multisystemic nature. Data analysed using cluster methodology indicate that there is no specific timeline for recovery from Long COVID, as it appears to depend on individual risk factors, including psychological factors, and the severity and spectrum of symptoms experienced. Some studies indicate that the total time to complete symptom resolution reported in the literature for patients with Long COVID is highly variable, with the average time to symptom resolution being 4 months in non-hospitalized patients and 9 months in those with more serious cases [ 29 , 65 , 66 ]. The menstrual cluster and menstrual symptoms increased across the three cut-off points, probably because over time there are more cycles in which to assess the disturbance. Most of the reviewed studies on persistent COVID that feature clusters do not include symptoms relating to the menstrual cycle [ 21 , 26 , 44 , 61 , 67 – 69 ]. Those that did consider them found changes in the volume and duration of the cycle; some saw them as part of a heterogeneous group of genitourinary symptoms, where 62.5% of respondents reported disorders, while others included them in a group of gynaecological disorders which remained stable over time [ 2 , 70 , 71 ]. We included menstrual symptoms in our study at the request of the group of people affected and because the rest of the research team was concerned that this information was often downplayed in the medical setting. It also speaks to the need to make menstrual health visible and relevant to women's health research as a public health issue and also as a matter of human rights [ 72 ]. Several studies examine the evolution and transitions over time of clusters, yet there are no common clusters across studies [ 2 , 21 , 26 , 44 , 64 , 68 , 73 , 74 ]. Between-study differences are due to the varying symptom classifications, the analysis techniques used, and the number of people included in each study, all of which shape the symptom clusters identified. These differences are also a result of the time at which symptoms are identified in relation to the initial disease [ 2 , 18 , 23 , 37 , 51 – 54 ]. This heterogeneity hampers comparison between studies. Thus, the evolution and transitions of Long COVID symptom clusters over time are complex and variable, with different trajectories and phenotypes being identified. Further research is needed to better understand the long-term implications of these symptoms and to guide monitoring and treatment strategies for individuals with Long COVID. Strengths and limitations The study's strengths include the fact that it is co-created and stems from a commitment made to the people in the Long COVID group in Catalonia.
The analyses have been differentiated by sex, whereas few studies have stratified persistent COVID results by sex [ 75 ]. Moreover, this is a longitudinal study that involves cluster analysis. The inclusion of menstrual symptoms is not described in many publications on persistent COVID and is one of this study's strengths. Compared with hierarchical clustering, fuzzy c-means cluster analysis is less susceptible to outliers in the data, the choice of distance measure and the inclusion of inappropriate or irrelevant variables [ 76 ]. Nevertheless, some disadvantages of the method are that there may be different solutions for each set of seed points and there is no guarantee of optimal clustering [ 77 ]. To minimise this shortcoming, we carried out 100 cluster realisations with different seed points and used the average result of all of them. In addition, although the method is not efficient when a large number of potential cluster solutions are to be considered, this was not the case in our study [ 76 ]. However, this study is not without limitations. Not least of them is the likelihood of recall bias, since recruitment began in December 2020 and we also included individuals already infected in the first wave, and therefore with retrospective data, in this subgroup. The fact that this is a self-reported survey may be a limitation for some, although we think it values the experience of the affected person as a source of knowledge, in addition to how a professional might subjectively assess an affected person's narrative. The individuals included in the study were part of the social networks of activists, close people or contacts of contacts. We are aware that we have not been able to access all people with Long COVID, which can introduce a selection bias; at the time of data collection, however, this was the feasible approach, for two reasons: 1) the limited number of face-to-face meetings due to outbreak restrictions, and 2) the limitations imposed by the physical conditions of the participants. Our sampling was performed by convenience and snowball sampling, with the advantages and disadvantages of this sampling strategy. The inclusion of people with an end date of symptoms in the main analysis could lead to a bias, but two things should be considered. On the one hand, these people had more than 3 months of symptom evolution, so they were labelled as Long COVID. On the other hand, as there is no definition of “recovery” (relapses being a common evolution of the condition), we considered it better to include them and follow them up in the second phase of the study to see whether they relapsed. At the beginning of the pandemic, the lack of tests for non-hospitalised patients made it hard to confirm a SARS-CoV-2 infection. Although the inclusion of people who never tested positive for SARS-CoV-2 could be seen as a limitation, we see it as a matter of justice to affected people who had no access to testing. The gender imbalance can introduce biases and limit the generalizability of the study findings, as the experience of men with Long COVID may not be accurately reflected due to the lower number of men. We are also aware that selecting sex, age and systems as variables, rather than others such as comorbidities or disease severity, provides only one of the multiple possible perspectives for understanding Long COVID, as the quality and relevance of the results depend heavily on the input variables chosen for the analysis.
The respondents were probably not representative of people with persistent COVID, as most of them were members of the Long COVID group in Catalonia, albeit the description of the characteristics of this group is also one of our study's strengths. There may thus be a selection bias in that many of the participants were recruited by the Long COVID group in Catalonia and were more willing to participate in a study about their condition. Replication of the study using different datasets and populations would therefore be necessary to assess the generalizability of the results. Not having a control group of non-infected participants could limit the validation of the findings. Vaccination status and reinfection were not considered in our questionnaire; recruitment started before the announcement of the vaccination programme (which started on 27th December 2020) in Spain. Vaccination status and reinfection might be confounding factors when assessing the frequency of symptoms in those who reported symptom onset in 2021 [ 78 , 79 ].
Conclusions People with persistent COVID in our cohort reported general and neurological symptoms as the most frequent initial symptoms, followed by respiratory symptoms, in both women and men. Over time, neurocognitive symptoms displaced respiratory symptoms in women, while respiratory symptoms remained the third most frequent symptom group in men. The greatest differences between the sexes were found in dermatological and olfactory symptoms, which were more frequent in women at all time cut-off points. In the cluster analysis, evolution towards a more heterogeneous cluster over time might suggest stabilisation of the disease or adaptation to the symptoms. Heterogeneity of symptoms may render the clinical picture vague and indeterminate. This, coupled with potential gender bias, restricted access to diagnostic testing during the first wave, and the change in current Spanish protocols for screening for SARS-CoV-2 infection, may interfere with and hinder recognition of and care for people with persistent symptoms.
Background Around 10% of people infected by SARS-CoV-2 report symptoms that persist longer than 3 months. Little has been reported about sex differences in symptoms and clustering over time in non-hospitalised patients in primary care settings. Methods This is a descriptive study of a cohort of mainly non-hospitalised patients with symptoms persisting longer than 3 months from clinical onset, co-created with the Catalan Long COVID affected group and based on an online survey. Recruitment ran from March 2020 to June 2021. Exclusion criteria were being admitted to an ICU, being < 18 years of age and not living in Catalonia. We focused on 117 symptoms gathered in 18 groups and performed cluster analysis over the first 21 days of infection, at 22–60 days, and at ≥ 3 months. Results We analysed the responses of 905 participants (80.3% women). Median time between symptom onset and the questionnaire response date was 8.7 months. General symptoms (such as fatigue) were the most prevalent, with no differences by sex, age, or wave, although their frequency decreased over time (from 91.8 to 78.3%). Dermatological (52.1% in women, 28.5% in men), olfactory (34.9% women, 20.9% men) and neurocognitive symptoms (70.1% women, 55.8% men) showed the greatest differences by sex. Cluster analysis showed five clusters, with a predominance of the Taste & smell (24.9%) and Multisystemic (26.5%) clusters at baseline and the Multisystemic (34.59%) and Heterogeneous (24.0%) clusters at ≥3 months. The Multisystemic cluster was more prevalent in men. The Menstrual cluster was the most stable over time, while most transitions occurred from the Heterogeneous cluster to the Multisystemic cluster and from Taste & smell to Heterogeneous. Conclusions General symptoms were the most prevalent in both sexes at all three time cut-off points. Major sex differences were observed in dermatological, olfactory and neurocognitive symptoms. The increase of the Heterogeneous cluster might suggest an adaptation to symptoms or a non-specific evolution of the condition, which can hinder its detection at medical appointments. Careful symptom collection and patients' participation in research may generate useful knowledge about Long COVID presentation in primary care settings. Supplementary Information The online version contains supplementary material available at 10.1186/s12879-023-08954-x. Keywords
Supplementary Information
Abbreviations SARS-CoV-2: Severe Acute Respiratory Syndrome Coronavirus 2; COVID-19: coronavirus disease; ICU: Intensive Care Unit; PCR: Polymerase Chain Reaction; RAT: Rapid Antigen Test; WHO: World Health Organisation; Emergency Health Room; Fundació de Recerca en Atenció Primària de Salut Jordi Gol; REDCap: Research Electronic Data Capture; RENAVE: Red Nacional de Vigilancia Epidemiológica; PCAmix: Principal Component Analysis of Mixed Data; BMI: Body Mass Index; ENT: ear, nose, throat; ACE-2: Angiotensin Converting Enzyme 2; TMPRSS2: transmembrane protease serine 2; IL-6: Interleukin 6. Acknowledgements We would like to thank people affected with Long COVID in Catalonia for their participation in this project and their tenacity, and also all the health professionals who collaborated in recruiting patients. Special acknowledgement goes to the Health Department in Catalonia for the initial funding of this study. Authors’ contributions GT, DP, CJA, VR, CV, AB and LMP participated in the design of the study. TL contributed to the data analysis. LC performed the cluster analysis and its interpretation and prepared Figs. 5, 6 and 7. GT performed the main analysis, wrote the draft of the main manuscript text and prepared Figs. 1, 2, 3 and 4, Table 1 and the supplementary data. All the authors participated in the critical review of the manuscript and approved the final version. Funding Funding was obtained from the Health Department in Catalonia, and the project also received a research grant from the Carlos III Institute of Health, Ministry of Economy and Competitiveness (Spain), awarded on the call for the creation of Health Outcomes-Oriented Cooperative Research Networks (RICORS), with reference RD21/0016/0029, co-funded with European Union – NextGenerationEU funds. The study’s funders had no role in study design, data collection, data analysis, data interpretation or writing of the report. Availability of data and materials In accordance with current European and national law, the data used in this study are only available to the researchers participating in this project. Thus, we are not allowed to distribute the data or make them publicly available to other parties. The original REDCap questionnaire will be available on request. For further information, contact the corresponding author. Declarations Ethics approval and consent to participate This study follows all national and international regulations in the Declaration of Helsinki and Principles of Good Research Practice and was approved by the Clinical Research Ethics Committee of IDIAPJGol (20/165-PCV) on 1st October 2020. Anonymity and confidentiality of data were always ensured by the REDCap platform pursuant to Spain’s Data Protection and Digital Rights Safeguards Act 3/2018. The ethics committee of the Institut Universitari d’Investigació en Atenció Primària Jordi Gol i Gurina (IDIAPJGol) (code 20/165-PCV) approved the study protocol. All participants recruited in the study were fully informed about the study protocol and signed informed consent forms to participate. They consented to the use of their personal data for research and agreed to the applicable regulations, privacy policies and terms of use. Participant data were anonymised using a numerical order-based coding system and securely stored in a database. The study’s participants were directly involved in the design and analysis of the reported data. The corresponding author (DP) had full access to all data, while TL and LCRB had access to the raw data. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Infect Dis. 2024 Jan 15; 24:82
oa_package/9f/f4/PMC10789045.tar.gz
PMC10789046
0
Introduction An adverse drug event (ADE) is an injury caused by drug-related medical interventions, including non-preventable adverse drug reactions (ADRs) and preventable medication errors [ 1 , 2 ]. Because ADEs are among the leading causes of death, hospitalization, and increased treatment costs [ 3 – 7 ], their effective identification and supervision has become a major concern of scientific research and health management departments. The fact that most ADEs are preventable makes the monitoring and reporting of ADEs particularly important [ 8 , 9 ]. The Global Trigger Tool (GTT), an active monitoring tool launched by the Institute for Healthcare Improvement (IHI) in 2003 and revised in 2009, is able to detect medically related adverse events with six modules: “nursing”, “medication”, “surgical”, “intensive care”, “perinatal”, and “emergency” [ 10 ]. Compared with the spontaneous reporting system (SRS) and adverse event medical record review, the GTT purposefully locates ADE-related content, thereby improving the efficiency and accuracy of case review [ 11 ]. Numerous studies have validated the accuracy and effectiveness of the GTT in ADE monitoring. Translations and revisions of the GTT White Paper have been established to adapt it to national circumstances, study populations and healthcare facilities. However, studies in special populations have mainly focused on elderly, pediatric, cancer, and intensive care unit (ICU) inpatients, and only one study has explored the applicability of the GTT in obstetric populations [ 12 – 21 ]. Chinese society is currently dealing with the challenges of declining and delayed fertility intentions among women of childbearing age, increasing infertility rates and advancing maternal age [ 22 ]. The risk of maternal and infant exposure to medication during pregnancy has increased alongside the incidence of pregnancy complications [ 23 , 24 ]. In addition, there are significant changes in the pharmacokinetic profile of the pregnant population [ 25 , 26 ]. A study in France [ 27 ] showed that ADRs were more common in pregnant patients than in non-pregnant patients. Among 53,426 ADRs documented in Sichuan Province between November 2016 and November 2017, only 1309 pertained to pregnant patients, constituting a mere 2.45% of the total [ 28 ]. In 2016, the International Network for Rational Use of Drugs (INRUD) / China Center Clinical Safety Medication Group recorded a total of 84 medication errors involving pregnant and lactating patients, accounting for 1.27% of the 6624 reported nationwide medication errors [ 29 ]. The limited efficiency of prevailing reporting methodologies, coupled with the few information-reporting members within INRUD China, particularly within women’s and children’s specialty hospitals, contributed to the relatively small number of reported medication errors. As a result, ADEs in obstetric patients may have been underestimated. In this study, we devised a novel, high-efficiency trigger tool based on the GTT that can be implemented to retrospectively identify ADEs in obstetric inpatients. Based on the detection results, the trigger tool could then be modified to align with the characteristics of Chinese obstetric inpatients.
Methods Literature search We systematically reviewed the literature spanning January 1997 to October 2023, utilizing the PubMed and CNKI databases and employing the keywords “gestational trigger tool”, “trigger tool”, “gestational”, “obstetric”, “obstetrics”, and “pregnancy”. Our inclusion criteria comprised (1) specific trigger entries; (2) application of triggers in obstetric patients experiencing ADEs; and (3) incorporation of detection results. Upon scrutinizing the abstracts and results of the literature obtained through the search, we found that as of October 2023, only one GTT-based study had been conducted in a maternal population, using the 44 triggers of the Swedish adaptation and translation of the GTT [ 21 ]. Therefore, triggers applied to general adult inpatients were also included. Trigger extraction and revision Preliminary triggers were extracted from the included literature. Subsequently, guided by obstetric guidelines, ADEs among obstetric patients documented in the Chinese National Adverse Drug Reaction Monitoring System (NADRMS), prevalent ADEs associated with pharmaceutical interventions for special obstetric conditions, and insights from the Williams Handbook of Obstetrics, our results underwent comprehensive evaluation by a review panel composed of pharmacists and physicians. Delphi expert investigation The Delphi method [ 30 ] was employed to administer an expert survey within the scope of this study. A cohort of 16 experts comprising obstetricians, neonatologists and pharmacists was randomly selected from healthcare facilities nationwide on an informed and voluntary basis. The initial set of triggers underwent modifications based on expert recommendations, encompassing the rationale and interpretation of entry parameters. Following two rounds of revisions, triggers exhibiting high consistency among experts were retained. Retrospective records review The study was undertaken following the approval of the ethical review committee at Sichuan Academy of Medical Sciences & Sichuan Provincial People’s Hospital. A total of 300 discharged medical records from the aforementioned hospital, pertaining to the third quarter of 2018, were selected through random sampling. The inclusion criteria encompassed (1) medical records discharged between July 1, 2018, and September 30, 2018; (2) obstetric inpatients with a gestational age of ≥ 28 weeks; (3) individuals aged between 16 and 65 years; and (4) patients with a hospitalization duration exceeding 48 h. Exclusion criteria were applied to (1) cases lacking treatment-related medication records; and (2) instances where essential primary data from the inpatient medical records were absent. Our review panel was instituted in accordance with the guidelines outlined in the IHI white paper. Initial scrutiny of the foundational obstetric triggers was conducted by two junior pharmacists, followed by a comprehensive review by a senior pharmacist and a physician. Based upon the records, the panel appraised the presence of a positive trigger and ascertained the occurrence of an ADE, arriving at a consensus on these matters. The causality of each ADE was assessed according to the World Health Organization-Uppsala Monitoring Centre (WHO-UMC) standards, including certain, probable/likely, possible, unlikely, conditional/unclassified, and unassessable/unclassifiable, as shown in Table 1 .
Obstetricians and pharmacists conducted a comprehensive review of triggered items, judged symptoms according to the WHO-UMC causality categories, and classified the records judged as certain or probable/likely as ADEs [ 31 ]. In accordance with the Common Terminology Criteria for Adverse Events (CTCAE) 5.0, the severity of ADE injuries was stratified into five levels [ 32 ], as presented in Table 2 . A flowchart of the study sample process and the medical record review sheet for the application of the obstetric trigger tool are provided in the supplementary documents. The causality determination and severity classification were compared with the GTT and the SRS to scrutinize and substantiate the effectiveness of the formulated obstetric triggers. Subsequently, the triggers underwent revision based on the outcomes of the review. Sensitivity of the triggers was assessed through adverse events per 100 admissions, adverse events per 1000 patient days, and the ADE detection rate, while specificity was appraised using the trigger positive predictive value (PPV; ADE detection frequency divided by positive-trigger frequency). Statistical method Excel 2010 and SPSS 22.0 were used to analyze the data, with descriptive statistics displaying frequencies, percentages, means and standard deviations. Regression analysis was performed to determine the correlation of the variables. Statistical significance was established when the P-value fell below 0.05.
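For clarity, the following R lines restate these metrics using counts reported in the Results below; the total number of patient days is a stand-in derived from the mean length of stay, since the study summed the actual lengths of stay from the reviewed charts.

admissions        <- 300
positive_triggers <- 154          # total positive-trigger frequency
ade_frequency     <- 56           # ADE detection frequency across triggers
ade_cases         <- 49           # distinct ADE cases identified
patient_days      <- 300 * 4.34   # approximation from the mean stay of 4.34 days
ppv               <- 100 * ade_frequency / positive_triggers  # specificity measure
ades_per_100      <- 100 * ade_cases / admissions             # sensitivity measure
ades_per_1000_pd  <- 1000 * ade_cases / patient_days          # sensitivity measure
round(c(PPV = ppv, per_100_admissions = ades_per_100,
        per_1000_patient_days = ades_per_1000_pd), 2)
# Reproduces the reported PPV of 36.36% and 16.33 ADEs per 100 admissions;
# the per-1000-patient-day figure depends on the exact total of patient days.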
Results Trigger extraction and revision We included 43 articles [ 4 , 8 – 11 , 13 , 16 , 20 , 21 , 33 – 66 ] based on our inclusion criteria, almost half of which addressed the triggers recommended in the white paper. A total of 41 triggers were identified from various sources, including the articles, the white paper, physiologic changes during pregnancy, the common ADEs of drugs administered to obstetric patients for specific conditions, the Williams Handbook of Obstetrics [ 67 ], the obstetric guidelines, and a study of ADEs in obstetric patients [ 28 ]. 39 triggers (Table 3 ) were ultimately defined through two rounds of expert surveys involving modifications to the initially identified 41 triggers, organized into four distinct modules: 12 triggers related to laboratory examinations, 9 related to medications, 14 related to symptoms, and 4 related to outcomes. Patient characteristics According to the inclusion and exclusion criteria, 300 eligible cases were chosen through random selection, with an average age of 27.45 years (range 18 to 43 years). The duration of hospitalization varied between 2 and 10 days, with an average length of stay of 4.34 days. Within the cohort of 300 subjects, there were 115 instances of cesarean section, 162 cases of natural delivery, and 23 occurrences of fetal preservation. Among the latter, 8 cases were related to threatened premature delivery, 11 cases were associated with gestational cholestasis, 3 cases involved fetal growth restriction, and one case manifested abnormal liver enzymes during gestation. Triggers We conducted a comprehensive examination of the 300 medical records utilizing the aforesaid 39 triggers. Among these, 22 triggers (56.41%) yielded positive results, and 11 of them successfully identified ADEs. In total, 49 ADEs were reviewed, with only one case (0.33%) not triggering any of the designated entries during the evaluation process. Within the cohort of 300 obstetric inpatients, 120 exhibited positive triggers, a positive rate of 40.00%. Notably, a total of 154 triggers were identified as positive, an average of 1.28 triggers per trigger-positive patient. The frequency of ADE detection amounted to 56, yielding a PPV of 36.36%. The ratio of ADEs per 100 patients was 16.33 (95% CI, 4.19–17.81), while the number of ADEs per 1000 patient days was 36.89 (95% CI, 32.72–41.07). The detailed results of trigger monitoring can be found in Table 3 . 48 of the 49 ADE cases were identified through the implementation of the obstetric triggers, a detection rate of 97.96%. Additionally, 9 ADEs were identified using the 13 triggers recommended in the white paper, a detection rate of 18.37%. Concurrently, 7 ADE cases were reported in the SRS, a corresponding ADE detection rate of 14.29%. The comparison of these three methods is presented in Table 4 . As depicted in Table 5 , five triggers of the GTT were positive, a positivity rate of 38.46%. 31 of the 300 obstetric inpatients had positive GTT triggers, a positive rate of 10.33%; 31 triggers were detected as positive, an average of one trigger per trigger-positive patient. 10 ADEs were detected, and the PPV was 32.26%. In comparison with the SRS and the GTT, the obstetric triggers exhibited a notably higher ADE detection rate, positive trigger rate, and PPV, supporting the specificity and sensitivity of the obstetric triggers. Characteristics of ADEs 49 cases of ADE were detected, an incidence of 16.33%.
The detected ADEs spanned 10 categories, primarily affecting the cardiovascular system (17 cases, 34.69%), gastrointestinal system (12 cases, 24.49%), female reproductive system (eight cases, 16.33%), and the fetus (seven cases, 14.29%). 15 distinct drug types were implicated, with the foremost three being medications for the reproductive system (31 cases, 52.54%), electrolyte drugs (primarily magnesium sulfate injection) (10 cases, 16.95%), and central nervous system drugs (10 cases, 16.95%). In accordance with the CTCAE 5.0, 17 cases of ADEs were categorized as grade 1 (17/49, 34.69%), 27 cases as grade 2 (27/49, 55.10%), and 5 cases as grade 3 (5/49, 10.20%). No instances of grade 4 or grade 5 severity were identified. Risk factors In our logistic regression analysis (Table 6 ), the variables of age, hospitalization duration (in days), the quantity of drugs administered, and whether a cesarean section or vaginal delivery was performed did not demonstrate statistical significance ( P > 0.05). Conversely, the number of administered antimicrobials was statistically significant, aligning with previous literature suggesting that antimicrobial medication is a risk factor for ADEs [ 6 ]. Nevertheless, the regression coefficient β was negative, signifying a negative correlation with the incidence of ADEs, which may be attributed to the particular risk factors included in our study and a potentially inadequate sample size. Inadequate sample sizes may result in insufficient representation of ADE occurrences, consequently yielding less representative results. Although Mevik's study showed that enlarging the sample size did not markedly increase the type and severity of ADEs detected, it did enhance the detection rate of ADEs [ 68 ]. Boxun Chen's incorporation of triggers into the information system resulted in a more than fourfold increase in the ADE cases detected after one year compared to the period before the intervention [ 69 ]. Thus, for future analyses, it is imperative to expand the sample size to achieve a closer approximation to the actual incidence of ADEs. Additionally, consideration of other potential risk factors is warranted to comprehensively understand the complex dynamics influencing ADE occurrence.
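As a hedged sketch of the risk-factor analysis described above, the snippet below fits a logistic regression of ADE occurrence on candidate predictors; the column names are hypothetical stand-ins for the study variables, not taken from the authors' dataset.

```python
import pandas as pd
import statsmodels.api as sm

def ade_risk_model(df: pd.DataFrame):
    """Logistic regression of ADE occurrence (0/1) on candidate risk factors."""
    predictors = ["age", "los_days", "n_drugs", "cesarean", "n_antimicrobials"]
    X = sm.add_constant(df[predictors])   # add intercept term
    return sm.Logit(df["ade"], X).fit(disp=False)

# result = ade_risk_model(records)
# result.params holds the beta coefficients; a negative beta for
# n_antimicrobials, as reported above, implies lower odds of ADE.
```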
Discussion Sample size We randomly selected 300 medical records for our analysis, averaging 50 copies biweekly, surpassing the sample size outlined in the white paper. Mevik's study [ 68 ] indicated that augmenting the sample size produced no significant effect on the type and severity of ADEs detected; however, it did contribute to an elevation in the detection rate of ADEs. In our current study, the ADE detection rate stood at 36.89 ADEs per 1000 patient days, a figure closely aligning with the rate of 39.3 ADEs per 1000 patient days in Mevik's study [ 68 ], where 70 samples were drawn every two weeks (totaling 1680). Nevertheless, the inclusion of the medical records was concentrated in the third quarter, potentially introducing bias into our findings. Consequently, to conduct a comprehensive analysis of risk factors for ADEs in obstetric inpatients and to enhance the ADE detection rate, further expansion of our study's sample size is warranted. ADE detection A total of 48 ADEs were detected through the established obstetric triggers, and the incidence of ADEs was comparable to the findings reported in existing literature, ranging between 10% and 20%. The majority of the detected ADEs, constituting 89.8%, were characterized as mild-to-moderate injuries. Notably, there were no instances of ADEs resulting in permanent injuries or fatalities, which is potentially attributable to the limited sample size. The predominant categories of identified ADEs were associated with the cardiovascular system, gastrointestinal system, and female reproductive system, accounting for 34.69%, 24.49%, and 16.33%, respectively. The cardiovascular system injuries were specifically manifested as elevated blood pressure and hypotension. Blood pressures that exceeded 140/90 mmHg were primarily due to the use of oxytocin and ergonovine, while hypotension was caused by magnesium sulfate. The application of prostaglandin drugs led to nausea, vomiting, diarrhea, or other gastrointestinal disturbances and excessive uterine contraction. This ADE-detection result was consistent with the clinical medication characteristics of obstetric inpatients and a study on the occurrence of ADRs in local obstetric patients [ 28 ]. However, in patients with high-risk pregnancies, the manifestations of complications are very similar to the symptoms of ADEs caused by the therapeutic agents, posing challenges in discerning the presence of an ADE. For example, patients experiencing eclampsia and hemolysis, elevated liver enzymes and low platelets syndrome (HELLP syndrome) commonly exhibit symptoms such as headache, nausea and vomiting, which are also common adverse reactions attributed to uterotonics. New-onset hypertension in the postpartum period may be attributed to postpartum pre-eclampsia, the administration of ergot derivatives for the prevention or treatment of postpartum hemorrhage (PPH), and/or the prolonged administration of high doses of non-steroidal anti-inflammatory drugs (NSAIDs) for postpartum analgesia [ 70 , 71 ]. The excessive and prolonged use of oxytocin, magnesium sulfate and anaesthetics increases the risk of weak uterine contractions leading to PPH, while pre-eclampsia and HELLP syndrome are significantly associated with PPH [ 72 , 73 ].
Conditions such as placenta praevia, placental abruption and pre-eclampsia can lead to fetal distress due to diminished utero-placental blood flow, while inappropriate use of uterotonics and intrathecal administration of opioids for labour analgesia can induce tonic uterine contractions, which can likewise lead to fetal distress [ 74 ]. Another notable side effect of misoprostol is hyperthermia, the degree of which escalates with the administered dose. A randomized trial including patients with PPH showed that patients receiving 600 μg sublingual misoprostol and a standard contraction agent (contraction in 98% of patients) exhibited a threefold higher incidence of a temperature ≥ 38 °C (58% vs. 19%) compared to patients solely administered a standard contraction agent; the occurrence of a temperature ≥ 40 °C was 7% in the former group and < 1% in the latter [ 75 ]. Such cases necessitate a comprehensive evaluation encompassing medical history, physical examination, and laboratory assessments to discern potential pathological effects. The causal relationship between symptoms and medication was adjudicated employing the WHO-UMC causality categories, and cases posing challenges in identification were deliberated by a review panel until a consensus was reached [ 39 ]. Through the observation of doctors and nurses in the surgical setting, we ascertained that certain ADEs went unrecorded when the clinical manifestations were mild and did not necessitate specialized treatment. We therefore posit that it is imperative to strengthen the proficiency of medical personnel in recognizing and documenting prevalent ADEs in the field of obstetrics. Validity of triggers 48 cases of adverse reactions were detected based on the established obstetric triggers, whereas only 7 cases were reported by the SRS within the same timeframe, underscoring the heightened monitoring performance of the obstetric triggers compared to the SRS. This result aligns with the conclusions reached by Classen et al. [ 4 ], which revealed that “the detection rate of ADE by trigger tool was about 10 times higher than the previous two methods (SRS and patient safety indicator monitoring).” Through two rounds of the Delphi method in our current study, the established obstetric triggers manifested a PPV of 36.36%, surpassing the sensitivity and specificity of both the SRS and GTT, thereby substantiating the effectiveness of the obstetric triggers. However, among the 39 triggers examined, 17 failed to activate, yielding a negative activation rate of 43.59%. The PPV for 5 triggers was 100%, while the trigger frequency fluctuated between a minimum of one occurrence and a maximum of 11. Thus, given the low positive trigger rates associated with certain triggers, further revision is required. Revision of the triggers A new round of trigger revisions was executed based on the results of the medical records review. Revision of untriggered trigger entries With respect to the untriggered items, M1 “protamine given”, S4 “bleeding”, S8 “thromboembolic events”, and S14 “polyhydramnios or oligohydramnios” were not triggered. These four entries (M1, S4, S8, and S14) were omitted in light of the drugs actually used during the perinatal period and each entry's probability of being triggered. Revision of triggers reflecting a low PPV To improve trigger accuracy, we conducted revisions for triggers exhibiting a low PPV.
In the course of reviewing medical records, we observed that the trigger “intravenous injection of calcium gluconate” was activated 15 times, principally in patients with eclampsia or pre-eclampsia, without any corresponding ADE noted. Studies have shown that Ca 2+ stimulates neuromuscular excitability and promotes blood coagulation, and that, when administered intravenously before cesarean section, calcium gluconate diminishes oxytocin requirements and intraoperative and postoperative bleeding, thereby effectively preventing postpartum hemorrhage [ 76 , 77 ]. The condition of this trigger entry should therefore be defined as “intravenous injection of calcium gluconate and Mg > 5 mmol/L.” Among the enrolled patients, 115 underwent cesarean section; those who did not experience flatulence in the initial two days post-surgery were administered keratin and/or lactulose to ameliorate constipation arising from the surgical procedure. As a result, only 2 ADEs were identified among the 36 activations of the “laxative or stool softener given” trigger. This entry was subsequently revised to “laxative or stool softener given in non-cesarean section patients”. After a comprehensive validation process, the modified triggers comprised a total of 35 items: 12 laboratory tests, 8 medications, 11 symptoms, and 4 outcomes.
Conclusions The obstetric triggers established in this study proved more sensitive and specific for the active monitoring of ADEs among obstetric inpatients than the SRS and GTT, and provide a benchmark for ADE monitoring of obstetric inpatients within medical institutions.
Background Pregnant women constitute a special population for drug therapy, as their physiological state, pharmacokinetics and pharmacodynamics differ significantly from the general population. Drug safety during pregnancy affects two generations and is a matter of broad societal concern. The Global Trigger Tool (GTT) of the Institute for Healthcare Improvement (IHI) has been widely used as a patient safety measurement strategy by several institutions and national programs, and its effectiveness has been demonstrated. To date, however, only one study has reported the use of the GTT in obstetric delivery. The aim of this study was to establish triggers for detecting adverse drug events (ADEs) suitable for obstetric inpatients on the basis of the GTT, to examine the performance of the obstetric triggers in detecting ADEs experienced by obstetric units compared with the spontaneous reporting system and GTT, and to assess the utility and value of the obstetric trigger tool in identifying ADEs in obstetric inpatients. Methods The initial obstetric triggers were established based on a literature review in PubMed and CNKI (January 1997 to October 2023), retrospective local investigations of obstetric ADEs, relevant obstetric guidelines and the common adverse reactions of obstetric therapeutic drugs. Using the Delphi method, two rounds of expert questionnaires were administered to 16 obstetric and neonatal physicians and pharmacists until agreement was reached. A retrospective study was conducted to identify ADEs in 300 obstetric inpatient records at the Sichuan Academy of Medical Sciences & Sichuan Provincial People’s Hospital from June 1 to September 30, 2018. Two trained junior pharmacists independently screened the eligible records, and the included records were then reviewed by a trained pharmacist and physician to identify ADEs. Sensitivity and specificity of the established obstetric triggers were assessed by the number of ADEs/100 patients and the positive predictive value, in comparison with the spontaneous reporting system (SRS) and GTT. Excel 2010 and SPSS 22.0 were used for data analysis. Results Through two rounds of expert investigation, 39 preliminary triggers were established that comprised four modules (12 laboratory tests, 9 medications, 14 symptoms, and 4 outcomes). A total of 300 medical records were reviewed through the obstetric triggers, in which 48 cases of ADEs were detected, an ADE incidence of 16%. Among the 39 obstetric triggers, 22 (56.41%) were positive and 11 of them detected ADEs. The positive predictive value (PPV) was 36.36%, and the number of ADEs/100 patients was 16.33 (95% CI, 4.19–17.81). The ADE detection rate, positive trigger rate, and PPV for the obstetric triggers were significantly augmented, confirming that the obstetric triggers were more specific and sensitive than the SRS and GTT. Conclusion The obstetric triggers were proven to be sensitive and specific in the active monitoring of ADEs for obstetric inpatients, which might serve as a reference for ADE detection in obstetric inpatients at medical institutions. Supplementary Information The online version contains supplementary material available at 10.1186/s12913-023-10449-z. Keywords
Research limitations There are still some limitations to the present study. First, one ADE eluded detection through the obstetric triggers due to the inherent limitations of the GTT in identifying particular categories of ADEs [ 78 ]. The GTT proves ineffectual in detecting medication errors that are rarely documented in patient records. Second, the quality of the medical records greatly biased the results, as healthcare professionals may overlook ADEs deemed subjectively inconsequential. Third, the demographic composition of patients and medication habits within the specific hospital under study may have constrained the positive activation of triggers [ 33 ]. Consequently, the generalizability of the findings to other healthcare institutions may be limited, necessitating tailored modifications for individualized application in varied contexts. Fourth, although the study team received identical training through a medical record examination based on the IHI white paper, there exists variability among members in the identification and assessment of ADEs and their respective severity ratings, which may be attributed to limitations inherent in each investigator’s clinical experience and knowledge. Haukland EC hypothesized that awareness of the outcome and its severity, a phenomenon known as hindsight bias, may have led to an overestimation of both the quantity and severity of adverse events within the inpatient death sample [ 41 ]. Last, the IHI white paper recommended a review time of 20 min for each medical record. However, in cases where medical records were complicated due to comorbidities or prolonged hospital stays, the review duration needed to be extended, potentially uncovering additional ADEs in these specific medical records. This study also relied on the manual perusal of medical records, which is inefficient and retrospective. Automated triggers would facilitate the comprehensive screening of all electronic medical records, as opposed to relying on limited data samples, and hold the potential to enhance drug safety and expedite the timely improvement of clinical outcomes for inpatients. With the updating of guidelines and the burgeoning body of research, periodic review and updating of the triggers featured in this study become imperative. It is noteworthy that our study only included literature collected in PubMed and CNKI, which introduces potential bias, thereby impacting the overall representativeness of the findings. Contribution to the field statement In this study, the GTT was employed for the inaugural monitoring of ADEs in obstetric inpatients, marking a pioneering initiative both nationwide and worldwide. By investigating the occurrence of ADRs in pregnant patients in Sichuan Province, the characteristics of ADRs during pregnancy were comprehensively summarized, laying the foundation for an improvement in the local suitability of triggers. According to the physiologic characteristics of pregnant patients and the specific obstetric drugs administered, the triggers were then revised appropriately. We considered ADEs affecting the female reproductive system, fetus, and newborn, and established unique obstetric triggers. We postulate that the obstetric triggers are more suitable for practical application to inpatients in a local department, and that the triggers can predict the occurrence of ADEs more efficiently and accurately than other methods. Electronic supplementary material Below is the link to the electronic supplementary material.
Abbreviations ADE: adverse drug event; ALP: alkaline phosphatase; ALT: alanine aminotransferase; APTT: activated partial thromboplastin time; BG: blood glucose; CTCAE: Common Terminology Criteria for Adverse Events; DVT: deep venous thrombosis; FBG: fasting blood glucose; FT4: free thyroxine; GTT: Global Trigger Tool; HELLP: hemolysis, elevated liver enzymes and low platelets syndrome; IHI: Institute for Healthcare Improvement; INR: international normalized ratio; INRUD: International Network for Rational Use of Drugs; NADRMS: National Adverse Drug Reaction Monitoring System; NEUT: neutrophil count; NSAIDs: non-steroidal anti-inflammatory drugs; PBG: postprandial blood glucose; PTE: pulmonary thromboembolism; PPH: postpartum hemorrhage; PPV: positive predictive value; PT: prothrombin time; Scr: serum creatinine; SRS: spontaneous reporting system; TPOAb: thyroid peroxidase antibody; TSH: thyroid stimulating hormone; TT4: total thyroxine; WBC: white blood cells; WHO-UMC: World Health Organization-Uppsala Monitoring Center Acknowledgements The authors are thankful to their medical and pharmaceutical colleagues for providing meaningful advice on the establishment of the GTT and assisting in the collection of surveys. Author contributions All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by Shan Wu, Qinan Yin, and Yuan Bian; the first draft of the manuscript was prepared by Shan Wu, Qinan Yin, and Liuyun Wu; the verification and updating of the manuscript were conducted by Nan Yu and Yue Wu; funding was secured by Yuan Bian and Junfeng Yan. All authors commented on previous versions of the manuscript, and read and approved the final manuscript. Funding This study was funded by the National Key Research and Development Program of China (2020YFC2005500), the Sichuan Science and Technology Plan Project (2022NSFSC0818), and the Clinical Research and Transformation Project of Sichuan Provincial People’s Hospital (No.2018LY09). Data availability All data generated or analyzed during this study are included in this published article. Declarations Ethics approval and consent to participate The experimental protocol was established according to the ethical guidelines of the Helsinki Declaration and was approved by the Human Ethics Committee of Sichuan Academy of Medical Sciences & Sichuan Provincial People’s Hospital. Written informed consent was obtained from individual or guardian participants. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Health Serv Res. 2024 Jan 15; 24:72
oa_package/fe/18/PMC10789046.tar.gz
PMC10789047
0
Background Variation in human birth weight is associated with adverse perinatal health outcomes as well as long-term health outcomes [ 1 ]. In particular, lower than average birth weight is associated with higher neonatal mortality and a higher risk of cardiovascular disease [ 2 ], type 2 diabetes [ 3 ] and hypertension [ 4 ] in adulthood. Understanding mechanisms that influence variation in birth weight could help identify targets for intervention to ensure healthy birth weight. Experimental studies in animal models and observational studies in humans have demonstrated links between higher fetal glucocorticoid exposure and lower birth weight [ 5 ]. Higher maternal cortisol levels are one potential source of increased fetal glucocorticoid exposure, with evidence of higher levels of both maternal plasma [ 6 ] and salivary [ 7 ] cortisol being associated with lower birth weight infants. In a secondary analysis of a randomized controlled trial (RCT) of women at risk of preterm birth, infants exposed to antenatal corticosteroids also had lower birth weight than those randomised to placebo, although this was in part related to also having a shorter gestation [ 8 ]. There are challenges to assessing the effect of maternal cortisol levels on offspring birth weight. There are several maternal characteristics that can confound the relationship between maternal cortisol and offspring birth weight, such as maternal smoking and body mass index (BMI) [ 9 ], which can be difficult or even impossible to fully account for in conventional observational studies. Also, whilst the RCT evidence was from a large and well conducted study and therefore unlikely to be biased by confounding, it was limited to women at risk of preterm birth only. Furthermore, it was not a direct test of the effect of maternal cortisol on birth weight, and the lower birth weight in those randomized to corticosteroids was driven in large part by reduced gestational duration [ 8 ]. Mendelian Randomization (MR) uses genetic variants to probe the effect of modifiable exposures (e.g. maternal cortisol levels) on health outcomes (e.g. offspring birth weight) [ 10 ]. Given that genetic variation is randomised at conception, MR is less susceptible to bias from variables that are observationally correlated with the exposure but influence the outcome through pathways other than the exposure being tested. We hypothesized that higher maternal plasma cortisol causes lower offspring birth weight and used MR to test this hypothesis. We used the most recent Genome Wide Association Study (GWAS) of fasting plasma cortisol levels [ 11 ] as the source of genetic variant associations with the exposure, and we used the GWAS of offspring birth weight in the Early Growth Genetics (EGG) Consortium and UK Biobank [ 12 ] to obtain estimates of maternal genetic effects on birth weight conditional on the fetal genotype. To investigate the plausibility of the instrumental variable assumptions, we also tested the genetic variant’s association with cortisol in pregnancy in a European-ancestry birth cohort [ 13 ] and searched for potential sources of horizontal pleiotropy using an online database [ 14 , 15 ].
Methods We used two-sample MR to estimate the causal effect of maternal plasma cortisol on offspring birth weight [ 16 ]. This method involves using estimates of the single nucleotide polymorphism (SNP)-exposure associations (using SNPs that are robustly associated with the exposure, in this case plasma cortisol) as well as SNP-outcome associations extracted from a pre-existing data set (in this case offspring birth weight). For each SNP, the SNP-outcome association is divided by the SNP-exposure association. Normally, these ratios would be pooled to give an estimate of the causative effect of the exposure on an outcome. For this study we were limited by the fact that only one genome-wide significant locus for plasma cortisol has been identified. The study design and different sources used are summarised in Fig. 1 . Data sources A summary of all the cohorts contributing to the GWAS summary statistics used in this study can be found in Table 1 . Genetic associations with plasma cortisol SNPs associated with circulating cortisol were identified from the most recent GWAS ( N = 25,314), in which four SNPs within one locus (i.e. the SERPINA6/SERPINA1 locus) were associated with fasting plasma cortisol at genome-wide significance ( p -value ≤ 5e -8 ) [ 11 ]. In total, 17 cohorts contributed to the GWAS, which usually measured circulating cortisol levels before 12pm (range 7am to 1pm) [ 11 ]. These four SNPs are in partial linkage disequilibrium (LD) with one another and we selected the SNP most strongly associated with circulating cortisol, rs9989237, as the genetic instrument for our main MR analysis [ 11 ]. Details of the identified SNPs are found in Additional file 1 (Additional Table 1). Genetic associations with birth weight For our second sample we used the latest maternal GWAS of offspring birth weight from the Early Growth Genetics (EGG) meta-analysis. A total of 406,063 participants contributed to the weighted linear model analyses (WLM, see below) to estimate maternal effects conditional on offspring genotype, and offspring effects conditional on maternal genotype (see Additional file 1 (Methods)). Of these participants, 101,541 were UK Biobank participants who reported their own birth weight and the birth weight of their first child, 195,815 were UK Biobank and EGG participants with own birth weight data, and 108,707 were UK Biobank and EGG participants with offspring birth weight data [ 12 ]. In the UK Biobank and EGG meta-analysis, birth weight was standardized within each of the cohorts, so birth weight in our analyses is measured in SD units and our results were initially the difference in mean birth weight in SD units. We converted these to a difference in mean birth weight in grams by using the SD of birth weight from an earlier EGG paper (1 SD of birth weight = 484 g) [ 17 ]. Genetic associations with maternal pregnancy cortisol Cortisol levels in 892 mothers in the EFSOCH cohort [ 13 ] were assayed at 28 weeks’ gestation (Additional file 1 (Methods)). EFSOCH mothers were genotyped in three batches (one in Exeter, two in Bristol) using the Illumina Infinium HumanCoreExome-24 array; when multiple genotyping batches are used for the same sample, bias can occur due to random differences between those participants assigned to one batch versus another (i.e., a batch effect) [ 18 ]. The association between the GWAS-identified SNP and pregnancy cortisol in EFSOCH was therefore adjusted for the genotyping chip to guard against batch effects.
Data analyses Our main analysis was to estimate the effect of maternal plasma cortisol on offspring birth weight in the UK Biobank and EGG meta-analysis. In addition to this, we undertook analyses to assess the instrumental variable assumptions, specifically to determine the strength of the cortisol instruments and to explore the possibility of horizontal pleiotropy in the cortisol instrument. Adjusting for the fetal genotype To avoid violating the third assumption of MR (i.e. that a genetic instrument affects the outcome only via the associated exposure) due to fetal genetic effects [ 10 ], we adjusted for the fetal genotype. For the main analysis, to ensure our analyses considered only the effect of the maternal genotype, and not the correlated fetal genotype, we used SNP-birth weight associations that had been adjusted for fetal genotype using a weighted linear model (WLM) [ 12 ]. The WLM is a method that was developed to combine data from disparate study designs to estimate conditional maternal and fetal genetic effects, similar to conditional genetic association analysis in genotyped mother-child pairs (see Additional file 1 (Methods) and references [ 12 , 19 ]). To verify the WLM-adjusted summary statistics, we also applied a structural equation model (SEM) to obtain the SNP maternal effect on offspring birth weight, adjusted for the fetal genotype, using UK Biobank participants (own birth weight N = 186,810; offspring birth weight N = 162,827) and repeated the main MR analysis to check we obtained similar results. Main MR analyses We performed two-sample MR using the Wald ratio estimator [ 20 ], which was calculated by dividing the SNP’s effect on birth weight by the same SNP’s effect on circulating cortisol. Standard errors were calculated by dividing the standard error of the SNP’s effect on birth weight by the SNP’s effect on cortisol. This was done using SNP-outcome estimates from both the main WLM analysis and from our own SEM analysis. The resulting effect estimates from our MR analyses are reported per 1 SD of log-transformed plasma cortisol levels [ 11 ]. IVW analysis adjusting for between SNP correlations To maximise power, we performed an additional MR analysis incorporating the four SNPs in partial LD at the SERPINA6/SERPINA1 locus, as reported by Crawford et al. [ 11 ]. Given those SNPs were partially correlated, we used a modified inverse variance weighted (IVW) analysis that accounts for the correlation across genetic instruments, using the TwoSampleMR [ 21 ] and MendelianRandomisation [ 22 ] R packages and a correlation matrix of variants obtained from the 1000 genomes EUR reference panel via TwoSampleMR [ 21 ]. The matrix of correlation values used for this analysis is presented in Additional file 1 (Additional Table 2). Testing cortisol instrument strength An MR assumption is that the genetic instruments are robustly associated with the exposure. In two-sample MR, as undertaken here, weak instrument bias is expected to bias estimates towards the null in the absence of sample overlap. To test the strength of the genetic instruments for cortisol, we calculated the R 2 and F-statistic for all four SNP-cortisol associations reported in the GWAS (see Additional file 1 (Methods) for further details).
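The Wald ratio arithmetic and instrument-strength check described above are simple enough to sketch directly. The Python snippet below is illustrative only: the exposure beta (0.09) is the SNP-cortisol estimate quoted in the Results, while the outcome beta and its standard error are hypothetical values chosen to reproduce the headline -50 g (95% CI, -109 to 10) estimate.

```python
def wald_ratio(beta_outcome, se_outcome, beta_exposure):
    """Wald ratio: SNP-outcome effect / SNP-exposure effect, with the SE
    approximated as SE_outcome / |beta_exposure|, as described above."""
    return beta_outcome / beta_exposure, se_outcome / abs(beta_exposure)

def f_statistic(r2, n):
    """Approximate first-stage F-statistic for a single instrument."""
    return r2 * (n - 2) / (1 - r2)

# beta_exposure = 0.09 is the reported SNP-cortisol effect; the outcome
# beta/SE are hypothetical, chosen to match the reported result.
est_sd, se_sd = wald_ratio(beta_outcome=-0.0093, se_outcome=0.0056,
                           beta_exposure=0.09)

SD_BW_GRAMS = 484  # 1 SD of birth weight ~= 484 g (see Methods)
print(round(est_sd * SD_BW_GRAMS))                    # ~ -50 g
print(round((est_sd - 1.96 * se_sd) * SD_BW_GRAMS),   # ~ -109 g
      round((est_sd + 1.96 * se_sd) * SD_BW_GRAMS))   # ~ +9 g

# With the rounded R^2 of 0.2% and N = 25,314 this gives ~51; the paper
# reports F = 62 from unrounded inputs.
print(round(f_statistic(r2=0.002, n=25_314)))
```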
Testing the cortisol instrument’s relevance to pregnancy The cortisol GWAS was performed in a non-pregnant, mixed-sex population; it is therefore possible that the instruments identified do not predict variation in circulating cortisol during pregnancy, or do so with a different magnitude than we assume when using the GWAS result. We therefore compared the association between SNP rs9989237 and fasting plasma cortisol levels measured in pregnancy in the EFSOCH cohort with the same results from the original GWAS (see Additional file 1 (Methods) for further details). Exploring the possibility of horizontal pleiotropy in the cortisol instrument Another core MR assumption is that any effect of the genetic instrument on the outcome is fully mediated by the exposure. If this assumption is violated, the genetic instrument is considered invalid and MR estimates could be biased. Numerous MR methods have been developed that are robust to the presence of invalid instruments, e.g. MR-Egger [ 23 ], weighted median [ 24 ] and Radial MR [ 25 ]. However, these methods typically require that multiple genetic instruments from different loci are available for a particular exposure. Given that only one independent SNP was available for our analyses, we explored the plausibility of the assumption of no invalid instruments by assessing the specificity of our genetic instrument in a phenome-wide association (PheWAS) scan using data from the MR-Base platform [ 14 , 15 ], which has data from a wide range of GWAS that can be easily downloaded via R. To perform the scan, we downloaded every tested association between rs9989237 and an available GWAS variable using the “ieu-gwas-r” package [ 14 ], by specifying the p -value threshold at 1. This gave us 19,269 different variables in total. Though any of the variables associated with rs9989237 could reflect pleiotropy, we focused our attention on those variables whose p -value passed a Bonferroni threshold of 2.6e -06 .
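For clarity, the Bonferroni threshold quoted above follows directly from the number of variables scanned; a minimal sketch (with a hypothetical `phewas` result list) is:

```python
n_tests = 19_269
threshold = 0.05 / n_tests
print(f"{threshold:.1e}")  # 2.6e-06, the cut-off used above

# assuming `phewas` is a list of (trait, p_value) pairs from the scan:
# hits = [trait for trait, p in phewas if p <= threshold]
```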
Results Main results and sensitivity analyses The estimated effect of maternal circulating cortisol was a 50 gram (95% CI, -109 to 10) lower offspring birth weight per 1 SD higher log-transformed maternal circulating cortisol. When using all four SNPs in the IVW analysis adjusted for correlation between SNPs, the result was similar (-33 grams (95% CI, -77 to 11)). Using the SEM to adjust for the fetal genotype gave similar results (-75 grams (95% CI, -141 to -9)). All effect estimates are shown in Fig. 2 . SNP validation Instrument strength and relevance in pregnancy Using the data from the largest available GWAS, we estimated that the SNP used in the main analyses (rs9989237) explained ~0.2% of the variation in cortisol and had an F-statistic of 62. The R 2 values and F-statistics for the other SNPs are shown in Table 2 . In the EFSOCH study [ 13 ], the mean value of women’s fasting plasma cortisol was 1,010 nmol/l (SD, 233 nmol/l) or 3 log-transformed nmol/l (SD, 0.1 log-transformed nmol/l). The SNP used in our main analyses had a considerably (2-fold) weaker association with women’s fasting plasma cortisol levels in pregnancy than seen in the main GWAS of non-pregnant women and men (0.04 (95% CI, -0.07 to 0.16) vs 0.09 (95% CI, 0.07 to 0.10)), though given the small sample size the estimate was imprecise, with very wide confidence intervals that included the GWAS point estimate and the null (see Fig. 3 ). Possibility of the instrument influencing birth weight through horizontal pleiotropy In total, 11 variables were associated at Bonferroni significance with rs9989237, and a further 1,516 variables were nominally associated with rs9989237. The associations with the cortisol-increasing variant included higher levels of SERPINA1 (beta = 0.123, p = 4.09e -18 ), 39S ribosomal protein L33 (beta = 0.252, p = 2.82e -17 ), PH and SEC7 domain-containing protein 1 (beta = 0.200, p = 2.24e -11 ) and histidine (beta = 0.026, p = 3e -07 ), as well as lower levels of albumin (beta = -0.034, p = 1.18e -28 ), synaptosomal-associated protein 25 (beta = -0.18, p = 1.78e -09 ) and sex-hormone binding globulin (SHBG), both with (beta = -0.005, p = 3.4e -07 ) and without (beta = -0.005, p = 1.7e -06 ) adjustment for body mass index (BMI), and in male-only GWAS of SHBG (with BMI adjustment, beta = -0.007, p = 2.3e -06 ; without BMI adjustment, beta = -0.008, p = 6.6e -07 ). See Table 3 for details of the Bonferroni-significant associations and Additional file 1 (Additional Table 3) for details of all nominally significant results.
Discussion We used two-sample MR with a single genetic variant to investigate the effect of maternal plasma cortisol on offspring birth weight. The results of the main analysis, the IVW analysis adjusted for between-variant correlation and the SEM analysis were all directionally consistent with the observational association of higher maternal cortisol with lower offspring birth weight. However, all three methods of analysis provided imprecise estimates, which included values that are potentially of importance as well as small or zero mean differences. For example, the 50 and 75 gram reductions in birth weight in the main and SEM analyses, respectively, together with the extremes of their 95% confidence intervals (both exceeding a 100 g reduction), are likely to be of clinical importance, whereas the opposite confidence limits (an increase of 10 grams in the main analysis and a decrease of 9 grams in the SEM analysis) are unlikely to be so. Therefore, the evidence of an effect of maternal cortisol on birth weight is uncertain, and larger studies are required to identify whether maternal cortisol levels are a modifiable target for supporting healthy fetal growth and hence birth weight. That said, the point estimate for the association between the main genetic variant and cortisol measured in pregnancy may be considerably smaller than that seen in the original GWAS, which could mean our results are biased towards the null. In addition, with just one independent genetic variant we were unable to explore horizontal pleiotropy using conventional two-sample MR methods, and our MR PheWAS suggested that the cortisol-increasing variant also relates to lower mean levels of SHBG, which could result in biased estimates. A systematic review of the associations of maternal pregnancy cortisol with a range of offspring outcomes identified three studies that explored the association with offspring birth weight [ 26 ]. Two of the studies examined associations of maternal salivary cortisol with birth weight in small samples (70 and 55 participants). One study, which included 2810 participants, explored the association of maternal serum cortisol with birth weight [ 6 ]. Several estimates from that study suggested an inverse association with mean birth weight (ranging from a mean difference of -0.94 (95% CI, -1.75 to -0.12) to -0.07 (95% CI, -0.23 to 0.08) grams per nmol/l), which is directionally consistent with our findings. That study used different data and units of analysis, so we cannot directly compare its findings with our MR estimates. Further evidence of an inverse effect of maternal plasma cortisol on offspring birth weight came from a large ( N = 1,858), well conducted RCT of antenatal corticosteroids in mothers at risk of preterm birth, which found that randomization to antenatal corticosteroids was associated with lower offspring birth weight (mean difference -113.1 (95% CI, -187 to -41.17) grams) compared to placebo [ 27 ]. A secondary analysis of that RCT found that at least two thirds of the association could be explained by shorter gestational duration, though an effect was still detected (mean difference -33.5 (95% CI, -66.3 to -0.7) grams) [ 8 ]. Neither study reported the change in circulating corticosteroids in the mothers randomised to antenatal corticosteroid treatment compared to placebo, hence these findings cannot be compared with our MR results in the way we have previously compared MR and RCT results [ 28 ].
Lower birth weight has been associated with higher circulating cortisol in later life [ 29 ]. It is therefore possible that pregnant women with higher cortisol levels may have been smaller at birth and that an association between maternal cortisol and offspring birth weight could arise via the correlation between maternal and offspring size at birth. The birth weight effects of maternal genetic variants considered in our analyses were adjusted for the correlation with fetal genetics [ 12 ], so while this possibility remains to be investigated, it would not have influenced our results. A recent MR study on the effect of cortisol on birth weight, which has been published as part of a PhD thesis only (thus not peer-reviewed), found evidence of higher maternal cortisol leading to lower birth weight (-19 (95% CI, -34 to -7) grams per 1 log-transformed SD of cortisol). This was directionally consistent with, but considerably weaker and more precisely estimated than, the effect we found. That study used an older, smaller GWAS for selecting genetic instruments than the one we used [ 30 , 31 ], which identified different genetic instruments, and used different methods to prepare the variables to adjust for between-SNP correlations [ 32 ]. Strengths and limitations This study used a large genome-wide data set of offspring birth weight, the UK Biobank and EGG meta-analyses [ 12 ]. However, the UK Biobank and EGG meta-analyses did not adjust for gestational duration, and as maternal cortisol has been associated with gestational duration in observational studies [ 33 ], this could be an alternative mechanism by which cortisol affects birth outcomes. We used a number of novel MR techniques to measure the effect of an exposure on an outcome when only a single locus is available. Additionally, we were able to partially validate the effect of the genetic instrument on maternal pregnancy cortisol using data from the EFSOCH cohort [ 13 ]. There are two important limitations to our study which relate to the genetic instruments for cortisol. First, despite using results from the largest GWAS of cortisol to date in our main analyses, we had only one genetic instrument. Nonetheless, we chose the SNP with the strongest association with cortisol ( R 2 = 0.2%, F-statistic = 62) for the main analysis. Furthermore, we had near identical results when combining all four genome-wide associated SNPs and controlling for their correlation. However, we cannot rule out weak instrument bias resulting in an underestimate of the causative effect [ 16 ]. We were not able to undertake conventional sensitivity analyses that are more robust to potential bias due to unbalanced horizontal pleiotropy [ 10 ]. The association of the genetic instrument with SHBG, albumin and histidine in MR-Base (at a p -value ≤ 2.6e -6 ) might indicate pleiotropic effects that may have biased our results. SHBG is produced in the liver and binds to steroid hormones, as does corticosteroid-binding globulin [ 34 ], which the SERPINA1/A6 locus encodes [ 11 ]. SHBG has been observed to be negatively associated with insulin resistance, type 2 diabetes and gestational diabetes (a cause of higher mean birth weight [ 35 ]), even after adjusting for BMI [ 36 ]. As the cortisol-raising allele was associated with lower circulating levels of SHBG, this could result in masking pleiotropy, meaning our results are an underestimate of a true, stronger inverse effect.
Circulating albumin levels are widely seen as a marker of protein sufficiency (lower levels, less sufficient), and low maternal albumin levels have been associated with lower offspring birth weight [ 37 ]. Histidine is a precursor to the inflammatory compound histamine [ 38 ], and higher maternal circulating levels of histidine have been shown to be associated with lower offspring birth weight in previous MR studies [ 39 ]. As the cortisol-raising allele was associated with lower albumin levels and higher histidine levels, it could be that the suggestive evidence of a negative effect of the cortisol-raising allele on birth weight is due, at least in part, to pleiotropy, meaning our results could be biased. Additionally, our genetic instrument was associated with the expression of three proteins, none of which (to the best of our knowledge) has been found to be directly associated with birth weight in humans. In our PheWAS, we used a Bonferroni-corrected p -value threshold, as is common in PheWAS exploring multiple potential causal effects of an exposure (here, across 19,269 tests). However, one could argue that when exploring bias this is less appropriate and we should not make this correction, or should at least take a less stringent approach, as here the aim is to be as rigorous as possible in exploring potential biases [ 40 ]. Larger GWAS of circulating cortisol levels are needed to identify additional independent genetic instruments. Our results assume that the effect of the genetic instrument on cortisol observed in the GWAS is the same as that during pregnancy. If the true effect in pregnancy is closer to what we observed in the EFSOCH pregnancy sample, then our MR analyses may be biased towards the null. Further evidence that the genetic instrument may not be valid in pregnancy comes from our PheWAS analysis, which shows that the effect of rs9989237 on SHBG is stronger in men than women. However, the EFSOCH population sample is limited ( N = 892; all in relative health) and the confidence intervals of the estimate captured the GWAS-reported cortisol association. Despite this potential mitigation, the 2-fold difference between the GWAS-reported cortisol association and the EFSOCH pregnancy cortisol association means there is legitimate concern that the SERPINA1/A6 locus is a weak instrument for pregnancy cortisol, potentially leading to bias.
Conclusions In conclusion, we found some evidence that higher maternal plasma cortisol may cause lower birth weight. However, despite using the largest GWAS of cortisol to date, we had only one independent genetic locus; considering this and the potential sources of bias discussed above, further investigation is needed before robust conclusions can be drawn about the effect of maternal pregnancy cortisol on offspring birth weight.
Background Observational studies and randomized controlled trials have found evidence that higher maternal circulating cortisol levels in pregnancy are associated with lower offspring birth weight. However, it is possible that the observational associations are due to residual confounding. Methods We performed two-sample Mendelian Randomisation (MR) using a single genetic variant (rs9989237) associated with morning plasma cortisol (GWAS; sample 1; N = 25,314). The association between this maternal genetic variant and offspring birth weight, adjusted for fetal genotype, was obtained from the published EGG Consortium and UK Biobank meta-analysis (GWAS; sample 2; N = up to 406,063), and a Wald ratio was used to estimate the causal effect. We also performed an alternative analysis using all GWAS-reported cortisol variants that takes account of linkage disequilibrium. We additionally tested the genetic variant’s effect on pregnancy cortisol and performed a PheWAS to search for potential pleiotropic effects. Results The estimated effect of maternal circulating cortisol on birth weight was a 50 gram (95% CI, -109 to 10) lower birth weight per 1 SD higher log-transformed maternal circulating cortisol, using a single variant. The alternative analysis gave similar results (-33 grams (95% CI, -77 to 11)). The effect of the cortisol variant on pregnancy cortisol was 2-fold weaker than in the original GWAS, and evidence of pleiotropy was found. Conclusions Our findings provide some evidence that higher maternal morning plasma cortisol causes lower birth weight. Identification of more independent genetic instruments for morning plasma cortisol is necessary to explore the potential bias identified. Supplementary Information The online version contains supplementary material available at 10.1186/s12884-024-06250-3. Keywords
Supplementary Information
Abbreviations RCT: Randomized Control Trial; MR: Mendelian Randomization; GWAS: Genome Wide Association Study; EGG: Early Growth Genetics; SNP: Single nucleotide polymorphism Acknowledgements This research has been conducted using the UK Biobank Resource under application number 7036. We would like to thank the participants and researchers from the UK Biobank who contributed or collected data and the families that took part in EFSOCH. We are grateful to the Genetics of Complex Traits team at the University of Exeter for their assistance in learning the methods and navigating the study data. The authors would like to acknowledge the use of the University of Exeter high-performance computing (HPC) facility in carrying out this work. This research was funded in part by the Wellcome Trust [Grant number WT220390]. For the purpose of open access, the author has applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission. Authors’ contributions M-CB, DAL, RMF and RMR designed this study, with WDT further developing the design. NMW and DME supervised the development and running of the Structural Equation Model of birth weight to estimate conditional maternal and fetal genetic effects. ATH contributed to the collection and management of EFSOCH data, and TJM oversaw the collection, extraction, preparation and measurement of the EFSOCH pregnancy circulating cortisol data. WDT, RMF, DAL wrote the analysis plan, and WDT undertook most of the analyses with support from JT, M-CB, RB, ARW, NMW, DME, RMF and DAL. WDT wrote the first draft of the paper with support from RMR, M-CB, RMF and DAL; all authors read and made critical revisions to the paper. WDT, RMF, M-CB and DAL act as guarantors for the paper’s integrity. Funding This study was supported by the US National Institute of Health (R01 DK10324), the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no 669545, the British Heart Foundation (CS/16/4/32482 and AA/18/7/34219) and the NIHR Biomedical Centre at the University Hospitals Bristol NHS Foundation Trust and the University of Bristol. The Exeter Family Study of Childhood Health (EFSOCH) was supported by South West NHS Research and Development, Exeter NHS Research and Development, the Darlington Trust and the Peninsula National Institute of Health Research (NIHR) Clinical Research Facility at the University of Exeter. Genotyping of the EFSOCH study samples was funded by Wellcome Trust and Royal Society grant 104150/Z/14/Z. WDT is supported by the GW4 BIOMED DTP awarded to the Universities of Bath, Bristol, Cardiff and Exeter from the UK Medical Research Council (MRC). M-CB was supported by a UK MRC Skills Development Fellowship (MR/P014054/1). RMF and RNB were funded by a Wellcome Trust and Royal Society Sir Henry Dale Fellowship (WT104150). RMF is supported by a Wellcome Senior Research Fellowship (WT220390). M-CB, RMF and DAL work in / are affiliated with a unit that is supported by the University of Bristol and UK Medical Research Council (MC_UU_00011/6). DAL is a NIHR Senior Investigator (NF-0616-10102). ATH is supported by a NIHR Senior Investigator award and also a Wellcome Trust Senior Investigator award (098395/Z/12/Z). TJM is supported by a National Institute of Health Research Senior Clinical Lectureship (ICA-SCL-2016-02-003). RMR acknowledges the support of the British Heart Foundation (RE/18/5/34216).
NMW is funded by an Australian National Health and Medical Research Council Investigator Grant (APP2008723). DME is funded by an Australian National Health and Medical Research Council Senior Research Fellowship (APP1137714). The funders had no role in the design of the study; the collection, analysis, or interpretation of the data; the writing of the manuscript; or the decision to submit the manuscript for publication. The views expressed in this paper are those of the authors and not necessarily those of any funder. Availability of data and materials Our study uses two-sample Mendelian randomization (MR). We used both published summary results (i.e. taking results from published research papers and websites) and individual participant cohort data as follows: For the two-sample MR, we used genetic variants associated with circulating plasma cortisol. We extracted the exposure associations for these genetic variants from a dataset available to download at the University of Edinburgh DataShare site. https://datashare.ed.ac.uk/handle/10283/3836#:~:text=The%20CORNET%20consortium%20extended%20its,genetic%20association%20with%20SERPINA6%2FSERPINA1 We extracted the outcome associations for these genetic instruments from genome-wide datasets of offspring birth weight adjusted for maternal genotype, available for download from the EGG Consortium. http://egg-consortium.org/birth-weight-2019.html The references to the journals that reported data sources are cited in the main paper. We used individual participant data for the second MR sample and for undertaking sensitivity analyses from the UK Biobank and EFSOCH cohorts. The data in UK Biobank are fully available, via managed systems, to any researchers. The managed system for both studies is a requirement of the study funders, but access is not restricted on the basis of overlap with other applications to use the data or on the basis of peer review of the proposed science. UK Biobank: full information on how to access these data can be found here - https://www.ukbiobank.ac.uk/using-the-resource/ EFSOCH: requests for access to the original EFSOCH dataset should be made in writing in the first instance to the EFSOCH data team via the Exeter Clinical Research Facility [email protected]. Declarations Ethics approval and consent to participate For UK Biobank, all participants provided written informed consent, including for their collected data to be used by international scientists. UK Biobank has approval from the North West Multi-centre Research Ethics Committee (MREC), which covers the UK. UK Biobank’s research ethics committee and Human Tissue Authority research tissue bank approvals mean that researchers wishing to use the resource do not need separate ethics approval. For EFSOCH, all mothers and fathers gave informed consent and ethical approval was obtained from the North and East Devon Local Research Ethics Committee. Consent for publication This study does not use data that could be used as a means of identification. Competing interests DAL has received support from Medtronic LTD and Roche Diagnostics for biomarker research that is not related to the study presented in this paper. The other authors report no conflicts.
CC BY
no
2024-01-16 23:45:34
BMC Pregnancy Childbirth. 2024 Jan 15; 24:65
oa_package/94/95/PMC10789047.tar.gz
PMC10789048
38221629
Background Despite infection prevention and control (IPC) improvement efforts in the last decade, Sub-Saharan African countries continue to face a range of infectious disease threats affecting their populations. In June 2021, the Democratic Republic of Congo (DRC) experienced a third wave of severe acute respiratory syndrome coronavirus type 2 (SARS-CoV-2) infections, in which the Delta variant (B.1.617.2) was found to be dominant [ 1 , 2 ]. The Omicron variant was later documented in the country in November, and subsequently, a fourth wave of infections emerged in December 2021 [ 2 ]. In the same year, the health system in DRC faced its 12th Ebola virus disease (EVD) outbreak, which began as a resurgence from a survivor of a previous outbreak and had a 50% mortality rate [ 3 ]. The 12th outbreak was officially declared over in May, but only five months later, the 13th Ebola outbreak occurred in October 2021 [ 4 ]. Similarly, Burkina Faso (BF) was affected by the COVID-19 pandemic, with its two biggest initial waves occurring in December 2020 and 2021 and resulting in a total of 21,128 cases [ 5 ]. Furthermore, its central location in west Africa with six border countries makes Burkina Faso a concentrated area of human movement at high risk of transborder disease transmission. An additional image file shows a map of this movement in more detail (see Additional file 1 ) [ 6 ]. Such challenges demonstrate the need for robust IPC measures that can not only combat infections in emergency outbreak situations, but are also established as routine practices and procedures embedded in effective and sustainable IPC programmes at the national and healthcare facility level. Evidence-based IPC interventions have been shown to prevent more than 50% of health care-associated infections (HAIs), increasing patient and healthcare worker (HCW) safety [ 7 – 9 ]. In 2016, the World Health Organization (WHO) published recommendations for the core components (CC) of IPC programmes [ 10 ]. However, in resource-limited settings, where HAI prevalence has been estimated to be 2–3 times higher than in settings in Europe and the United States, the implementation of IPC CCs can be challenging for healthcare facilities due to lack of personnel, infrastructure and financial resources [ 11 ]. It is essential to determine how IPC guidelines can be effectively implemented in these areas [ 12 ]. A recent appraisal from African experts in the Pan African Medical Journal emphasized the contribution of nosocomial COVID-19 infection in the region and IPC programmatic challenges related to weak healthcare systems and infrastructure [ 13 ]. Robust evidence on IPC implementation strategies in low-resource settings remains limited, although selected studies have been published in recent years. In 2021, Tomczyk et al. qualitatively assessed IPC implementation themes from a series of interviews conducted with IPC experts from low-resource settings. A range of critical actions were identified that could be taken to achieve the WHO IPC CCs, such as continuous leadership advocacy, initial external technical assistance followed by local guideline adoption, establishment of local IPC career paths, and pilots for HAI surveillance and monitoring, audit and feedback, among other themes [ 7 ]. Our study aimed to add to the evidence base by describing the initial WHO IPC CC implementation experience at two reference hospitals in low-resource settings in the DRC and BF.
A training was carried out on the WHO CCs of an IPC programme, and a mixed methods study was conducted to assess healthcare worker (HCW) knowledge, attitudes and practice (KAP), identify context-specific challenges to IPC programme implementation and evaluate the facility level of IPC implementation using the WHO Infection Prevention and Control Assessment Framework (IPCAF) [ 14 ].
Methods Study setting This study was conducted in two reference acute health care facilities in Sub-Saharan Africa. Saint Luc Hospital of Kisantu (referred to as ‘Facility A’) is a general reference hospital with 340 beds, serving a population of 190,800 in the Kisantu Health Zone in DRC’s Kongo Central Province in Central Africa. The hospital has eight departments (internal medicine, surgery, pediatrics, gynecology, obstetrics, orthopedics, dentistry and ophthalmology) and employs approximately 108 HCWs and 60 administrative personnel [ 15 ]. Centre University Hospital of Souro Sanou (referred to as ‘Facility B’) is a national referral hospital in Bobo-Dioulasso, BF, with 650 beds, serving several regions with a combined population of over six million. The hospital has six departments (surgery, obstetrics and reproductive medicine, medicine, pediatrics, pharmacy and laboratory) and employs 927 HCWs and 124 administrative staff. Both facilities are partner hospitals in the African Network for improved Diagnostics, Epidemiology and Management of Common Infectious Agents (ANDEMIA), and the study was conducted as part of this partnership [ 16 ]. Following discussions with the leadership of all ANDEMIA network facilities during the COVID-19 pandemic response, these two health care facilities were identified as those that expressed the most urgent need for IPC improvement. Study design The purpose of this study was to describe the initial WHO IPC CC implementation experience at the selected facilities. Interest in developing an IPC programme was expressed by the facilities, and a five-day interactive training programme on the WHO IPC CCs was conducted. Multidisciplinary participants were nominated by hospital leadership as representatives responsible for IPC (e.g. part of the acting hygiene committees or facility leadership teams) across the professional hierarchy. Participation in the training and study was voluntary. The training material was developed based on available WHO guidance by national IPC experts, including input from a global IPC expert [ 17 , 18 ]. The training programme was delivered by the respective national IPC experts with the engagement of local environmental hygienists. The training was conducted in Facility A in September 2021 and in Facility B in March 2022. These training times were identified by the facilities according to the timing of their COVID-19 pandemic response activities and the availability of participants and trainers. In addition, a basic provision of IPC supplies was procured for the facilities to support the initial built environment for IPC. Alongside the training and basic provision of IPC supplies, a three-part mixed methods study was conducted, consisting of: (1) a baseline and follow-up participant KAP survey, (2) a qualitative assessment of plenary discussion transcripts to identify context-specific barriers and facilitators to IPC programme implementation and (3) the guided use of the WHO IPCAF to evaluate the facility level of IPC implementation. Part one: baseline and follow-up participant KAP survey A tailored KAP survey on IPC programmes was developed based on the WHO IPC CCs and consisted of four sections: participant background characteristics (10 questions), attitudes (13 Likert-scale statements), practices (two yes/no questions, six Likert-scale questions) and knowledge (17 true/false questions, 14 multiple-choice questions and five open-ended questions).
A 7-point Likert scale was used to assess attitudes: completely disagree (1 point), disagree (2 points), slightly disagree (3 points), neutral (4 points), slightly agree (5 points), agree (6 points) and completely agree (7 points). A different Likert scale was used to assess practices, with the options never, sometimes, often, always and ‘I don’t know’. The knowledge true/false and multiple-choice questions were scored according to the pre-determined correct responses. Using this KAP instrument, a baseline survey was conducted among all training participants on the first day prior to the commencement of the training. Likewise, a follow-up survey with the same instrument and among the same participants was conducted immediately following the conclusion of the training. Part two: qualitative assessment of plenary discussions Interactive plenary discussions were held throughout the training, and key points expressed were transcribed for a qualitative assessment of context-specific barriers and facilitators to IPC programme implementation. Daily small group discussions (e.g. consisting of six people) were held for approximately 10–15 min on an assigned topic (e.g. each individual WHO CC). Each small group then nominated a spokesperson to present key conclusions to all training participants in the full plenary for broader discussion. Part three: guided use of the IPCAF The IPCAF is a systematic tool to support the implementation of the WHO CCs of IPC programmes at the acute health care facility level. It is a structured closed-formatted questionnaire with an associated scoring system to measure the level of IPC implementation and can act as a progress indicator to facilitate improvement over time [ 14 ]. The IPCAF instrument allocates points to each question, and a maximum score of 100 points can be achieved for each CC section. An overall score is calculated by adding the total scores of all sections. On the final day of the training, the IPCAF was conducted in the facility. Training participants were divided into four groups and asked to assess two assigned CCs of the IPCAF during a targeted walk-through of the hospital. The completion of the IPCAF was done under the guidance of the IPC expert trainers. Following its completion, the groups were asked to synthesize their findings in a plenary presentation, and results were further discussed in the full group. Statistical analysis For the participant KAP survey, frequencies and proportions of categorical responses were summarized, and baseline and follow-up results were compared with a paired analysis using the Stuart–Maxwell marginal homogeneity test. Median and interquartile range (IQR) estimates were summarized for the Likert-scale responses to attitude statements, and baseline and follow-up responses were compared with a paired analysis using the Wilcoxon signed-rank test. Baseline practices were described as proportions and histograms; follow-up practice responses were not analyzed because insufficient time had passed for changes in practices to occur. Key feedback points from plenary discussions and written responses to the open-ended knowledge questions were analyzed using a qualitative, inductive thematic analysis in which responses were coded first according to WHO IPC CC, and then emerging themes for each CC were identified. Themes that emerged more than once were considered to be ‘reoccurring’. Responses to selected open-ended questions were also analyzed for word frequency using word cloud queries.
The IPCAF scoring results were analyzed using descriptive statistics. Stata version 17.0, NVivo 1.5.2 and Excel were used for analyses. Ethics approval and consent to participate The ANDEMIA Project is currently operating in the Democratic Republic of Congo under the ethical approval granted by the Ethics Committee of the University of Kinshasa (Deliberation No. ESP/CE/042/2017), in Burkina Faso under the ethical approval granted by the Ethics Committee of the Burkina Faso Ministry of Health (Deliberation No. 2017-5-057), and in Germany under approval by Charité Medical University (EA2/230/17).
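As a brief illustration of the paired comparisons described above, the sketch below reproduces the two main tests in Python on invented data (the study itself used Stata; the example scores and the 3×3 contingency table are hypothetical, not study data).

```python
# Illustrative re-implementation of the paired baseline-vs-follow-up analyses.
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.contingency_tables import SquareTable

# Paired 7-point Likert attitude scores for one statement (hypothetical).
baseline = np.array([4, 5, 3, 6, 4, 5, 2, 4, 5, 6])
followup = np.array([5, 6, 4, 6, 5, 6, 4, 5, 6, 7])
stat, p = wilcoxon(baseline, followup)  # paired signed-rank test
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.3f}")

# Stuart-Maxwell marginal homogeneity test for a paired categorical
# knowledge question (e.g. true / false / don't know). Rows = baseline
# category, columns = follow-up category, cells = participant counts.
table = np.array([[10, 4, 1],
                  [2, 12, 3],
                  [0, 5, 9]])
res = SquareTable(table).homogeneity(method="stuart_maxwell")
print(f"Stuart-Maxwell: chi2={res.statistic:.2f}, df={res.df}, p={res.pvalue:.3f}")
```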
Results Participant characteristics A total of 22 and 24 individuals participated in separate five-day WHO IPC CC training programmes in Facility A (September 2021) and Facility B (March 2022), respectively. The participants were predominantly HCWs, with the largest professional groups being medical doctors and nurses (see Table 1 below). Approximately half of each training participant group were members of the respective facility’s hygiene committee. In Facility A, it was also considered necessary to include external participants from the affiliated Health Zone Departments and the Central Health Bureau. Alongside the training, the facilities prioritized basic IPC supplies, which were procured for the hospital, including personal protective equipment (PPE) as well as consumables for hand hygiene and waste management. Knowledge, attitudes and practices (KAP) survey Participant responses to selected knowledge questions in the KAP survey are shown in Table 2 . Overall, participants demonstrated a high understanding of questions related to standard precautions, the importance of HAI surveillance, practical IPC training, monitoring the implementation of IPC guidelines and standards for staffing and bed occupancy at both time points. From baseline to follow-up, participants in both facilities showed a significant increase in understanding of questions related to the necessity of a dedicated IPC focal person, at least annual evaluations of IPC training and healthcare waste segregation standards ( p < 0.01), as well as a modest increase in the understanding of toilet facility standards. However, gaps at both the baseline and follow-up timepoints included a lack of recognition of the importance of including senior hospital leadership in IPC training and of the necessity to monitor hand hygiene compliance. Participant responses to attitude statements are shown in Table 3 below. High agreement with the perception that one can dedicate time to an IPC programme was seen at both timepoints. There was a significant increase in agreement with the feeling of responsibility for IPC and understanding of the IPC core components from baseline to follow-up ( p < 0.04). At Facility A, significantly more participants from baseline to follow-up agreed with the attitude that sufficient funds for IPC were available ( p < 0.04). However, participants from Facility B reported a stronger feeling of barriers to IPC programme implementation from baseline to follow-up ( p < 0.001). Participant responses to practice questions at baseline are reported in Fig. 1 . A majority of participants at both facilities reported never or only sometimes attending regular IPC meetings, and few reported ever being part of a process to draft an action plan to address identified IPC needs (9.1% in Facility A, 37.5% in Facility B; not shown in the figure). However, a majority reported often or always adhering to practices such as teaching patients about IPC and using masks when caring for patients with acute respiratory infections. In addition, responses to the open-ended KAP question “What are the most important steps to organizing an IPC program?” were analyzed using a word cloud to show the frequency of responses (see Fig. 2 below). From baseline to follow-up, facility responses appeared to show a shift from stressing individual training to emphasizing the concept of an IPC team as well as evaluation, monitoring and implementation.
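The word-frequency analyses reported here were generated with NVivo word cloud queries; purely as an illustration, an equivalent analysis can be sketched in Python using the third-party wordcloud package. The example responses below are invented for demonstration, not actual survey data.

```python
# Word-frequency sketch for open-ended KAP responses (illustrative only).
from collections import Counter
from wordcloud import WordCloud, STOPWORDS

responses = [
    "train the IPC team and monitor implementation",
    "training of staff, evaluation and monitoring",
    "set up an IPC team with regular evaluation",
]  # hypothetical responses

# Simple frequency count, ignoring common English stopwords.
words = " ".join(responses).lower().split()
freq = Counter(w.strip(".,") for w in words if w not in STOPWORDS)
print(freq.most_common(5))

# Render the same frequencies as a word cloud image.
cloud = WordCloud(width=800, height=400, background_color="white",
                  stopwords=STOPWORDS).generate(" ".join(responses))
cloud.to_file("kap_word_cloud.png")
```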
A word cloud analysis was also conducted for the question ‘Once IPC guidelines have been developed, what steps should be taken to ensure their implementation at the facility?’ and can be viewed as an additional file (see Additional file 2 ). Reoccurring themes identified in responses to the three remaining open-ended KAP questions were analyzed using a thematic analysis (see Table 4 below). The most frequent reoccurring themes included statements related to the decision-making role of the IPC committee compared to the operational role of the IPC team, as well as the need for effective IPC trainings to consist of both practical and theoretical components. There were also reoccurring themes related to the use of HAI data for improving quality of care, evaluating IPC programmes, or providing feedback to inspire behavioural change. All qualitative themes can be viewed as an additional file (see Additional file 3 ). Plenary interactive discussions The reoccurring themes of IPC programme challenges from the interactive plenary discussion sessions were identified according to CC in Table 5 . Limited resources emerged as a key barrier across all CCs. The resources mentioned ranged from material and financial to human resources, and related misconceptions were noted, such as handwashing with ash when there was a shortage of water or soap, decontamination or sterilization with inappropriate substances, or the multiuse of single-use items. Others expressed concerns about having a person dedicated 100% to IPC, such as how to employ a new person in general and how to take on hospital staff and exempt them from clinical duties despite other needs in the hospital. Another dominant theme was that personnel attitudes were a major barrier to IPC programmes, including misperceptions and a lack of awareness and commitment. Some participants expressed that “IPC is still considered a new concept that resulted from various epidemics, so it is not needed in non-epidemic times.” Others expressed that there is insufficient commitment from health care facility management and a lack of responsibility among staff and users regarding compliance with IPC measures. The dominant theme of ‘Water is essential’ also emerged in the context of CC 8, with statements such as “water is life” and detailed discussions on available water sources and uses. In Facility B, it was estimated that 143 L of water are needed per hospitalized patient (per 24-hour day). Participants also suggested potential solutions and facilitators. One proposed plenary solution was to align Ministry of Health hygiene committee guidelines (CC1 theme ‘Ministry of Health alignment’) with the respective facility IPC committees. Furthermore, it was discussed that conveying the HCW and patient benefits of IPC might combat misperceptions of IPC importance. Facility IPCAF evaluations The overall IPCAF score at Facility A (392.5/800 points) corresponded to a ‘Basic’ IPC level: “Some aspects of the IPC core components are in place, but not sufficiently implemented. Further improvement is required” (Fig. 3 ). The lowest ranked component was CC1 IPC programmes (10/100), and the highest ranked component was CC4 Healthcare-associated infection (HAI) surveillance (97.5/100). The IPCAF score at Facility B (415/800 points) corresponded to an ‘Intermediate’ IPC level: “Most aspects of the IPC core components are appropriately implemented.
The facility should continue to improve the scope and quality of implementation and focus on the development of long-term plans to sustain and further promote the existing IPC programme activities.” [ 14 ]. The lowest ranked component was CC6 Monitoring, audits of IPC practices and feedback (22.5/100) and the highest ranked component was CC2 IPC guidelines (77.5/100).
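For readers unfamiliar with the IPCAF arithmetic, the following minimal Python sketch shows how the eight section scores aggregate into the overall score and how that total maps onto the WHO assignment bands (0–200 inadequate, 201–400 basic, 401–600 intermediate, 601–800 advanced). Apart from the CC1, CC4 and overall totals reported above, the section values below are hypothetical.

```python
# Minimal sketch of IPCAF aggregation and level assignment.
def ipcaf_level(total: float) -> str:
    """Map an overall IPCAF score (0-800) to the WHO assignment band."""
    if total <= 200:
        return "Inadequate"
    elif total <= 400:
        return "Basic"
    elif total <= 600:
        return "Intermediate"
    return "Advanced"

# Each of the 8 core components is scored out of 100; the overall score
# is the sum of the section scores. Only CC1 (10) and CC4 (97.5) below
# are the reported Facility A values; the rest are an assumed split.
facility_a_sections = [10, 60, 45, 97.5, 50, 35, 55, 40]
total_a = sum(facility_a_sections)
print(total_a, ipcaf_level(total_a))   # 392.5 -> Basic
print(415, ipcaf_level(415))           # Facility B total -> Intermediate
```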
Discussion We evaluated the initial WHO IPC CC implementation experience at two reference hospitals in the DRC and BF. Overall, these facilities demonstrated a basic to intermediate baseline IPC level, using the WHO IPCAF tool. This level of IPC implementation is comparable to findings from other low-income settings and countries within the African region, according to a 2022 WHO global IPC survey in acute healthcare facilities [ 20 – 22 ]. Using mixed evaluation methods during and following a training on the WHO IPC CCs at the two reference facilities, a range of IPC implementation experiences and challenges were identified that could be used to inform future IPC improvement strategies. Some elements of an IPC programme (i.e. WHO IPC CC1) were reported to be in place at the facilities according to the WHO IPCAF tool. However, the KAP survey and assessment of plenary discussions revealed perceptions and practices affecting the effectiveness of IPC programme implementation at the facilities. Most training participants reported rarely attending regular IPC meetings, and only a few participants reported involvement in a process to draft an IPC programme action plan. Following the training, participant responses shifted from stressing the need for more individual training to emphasizing the concept of an IPC team, responsibility for ensuring IPC and implementation elements such as evaluation and monitoring. Although training participants also demonstrated an increased recognition that healthcare facilities should have a dedicated IPC focal point, concerns were expressed regarding the practicalities of hiring a dedicated IPC focal person when additional staff are needed throughout the facilities to meet ongoing gaps in clinical services and patient management. Participants also highlighted a lack of commitment from hospital leadership as a potential barrier to IPC programme implementation. Interestingly, however, participants did not believe that senior staff needed to be included in IPC training. This could be related to local hierarchical structures and practices, but the inclusion of leadership in IPC training can be important to increase IPC awareness and buy-in. Similar thematic issues were also discussed in a qualitative study on IPC implementation in low-resource settings by Tomczyk et al., and suggestions were made to begin with a stepwise approach, i.e. “start with a small group of committed staff” and “maintain continuous advocacy... with the inclusion of IPC in routine meetings” [ 7 ]. Such IPC champions and awareness-raising could support a paradigm shift from IPC as a “concept to only be used during epidemics” to a mindset that a robust IPC programme should be functioning at all times within a healthcare facility to ensure quality of care and patient safety. However, limited resources were raised as a key barrier throughout the training and evaluation, and global, regional and national health system initiatives are needed in parallel to ensure sufficient human resources and infrastructure for universal health coverage [ 23 , 24 ]. One proposed plenary solution to IPC programme barriers was to align Ministry of Health hygiene committee guidelines with the respective facility IPC committees. This alignment would make it easier to access national support and manage limited human resources. Furthermore, it was discussed that conveying the HCW and patient benefits of IPC might combat misperceptions of IPC importance.
Evidence on benefits might elevate the perceived importance of IPC measures and therefore improve HCW ownership and compliance. Participants reported strong agreement with the importance of IPC guidelines (i.e. WHO IPC CC2) and training (i.e. WHO IPC CC3), including monitoring their implementation. However, low IPCAF facility scores were particularly seen for IPC education and training, and reoccurring themes in discussions emphasized the need for improved communication mechanisms and the involvement of all actors throughout the implementation process, as well as greater recognition of practical or bedside training approaches to operationalize the implementation of protocols and procedures. In another study at a tertiary care facility in Canada, HCWs also reported that they needed more effective IPC communication and recommended a monthly emailed report of less than two pages covering outbreaks, infection rate comparisons (to other hospitals) and general IPC facts [ 25 ]. The US Centers for Disease Control and Prevention has also issued IPC communication and collaboration recommendations, such as fostering collaboration by engaging IPC actors (such as health service leadership and staff) in the development of IPC decisions and actions [ 26 ]. Greater recognition of active training approaches aligns with WHO recommendations on participatory and bedside simulation strategies [ 10 ]. Participants from both facilities also showed a significant increase in knowledge that training and education can include patients and family members. HCWs have been shown to be hesitant to include this group in IPC measures despite WHO recommendations [ 27 , 28 ]. A high IPCAF score was seen for HAI surveillance (i.e. WHO IPC CC4), substantially higher than comparable facilities in the WHO IPC global survey [ 20 ]. This scoring may be biased by limited participant understanding of what constitutes HAI surveillance, owing to the lack of training on HAI surveillance standards and requirements. Qualitative participant responses showed that participants understood the value of data as indicators for quality of care and behavioral change, but limited resources and insufficient data collection and reporting systems were cited as ongoing barriers. Studies on HAI surveillance initiatives in lower-middle-income hospitals recommend initially focusing step-wise implementation on select units, such as intensive care, developing protocols that can be used consistently in the local context and using the resulting data to emphasize the importance of IPC programmes for continued stakeholder motivation [ 29 – 31 ]. A modest proportion of participants showed an understanding of multimodal IPC strategies (i.e. WHO IPC CC5) throughout the training. However, the term “multimodal strategies” still appears to be a new concept in settings with a basic level of IPC implementation. Although some educational materials have been developed, such as infographics by WHO, ongoing and improved communication approaches are needed to introduce and operationalize the concept of multimodal strategies [ 14 ]. Participants reported monitoring (i.e. WHO IPC CC6) as an important step in organizing an IPC programme, and the use of feedback (i.e. from monitoring or observation) to facilitate behaviour change was a reoccurring theme in plenary discussions. This reflects the WHO recommendations that monitoring and feedback are essential ways to support behaviour and system change [ 32 ].
However, fewer participants demonstrated an understanding of the specific recommendation to routinely monitor hand hygiene compliance. This could be an effective starting point to operationalize the key IPC indicators for monitoring, audit and feedback, as suggested by Tomczyk et al. [ 7 ]. Participants also demonstrated an understanding of the importance of staffing, workload and bed occupancy (i.e. WHO IPC CC7) and sanitation and waste management (i.e. WHO IPC CC8) standards. Adherence to selected precautions, such as the use of masks when caring for patients with acute respiratory infections, was noted. However, limited resources were again a reoccurring theme for this CC. IPC training in low-resource settings should discuss appropriate low-cost alternatives that still meet minimum standards to avoid potentially harmful reported practices such as hand washing with ash, decontaminating or sterilizing with inappropriate substances or the multiuse of single-use items [ 33 , 34 ]. Water availability was also heavily discussed, with multiple participants emphasizing “Water is Life”. Practical stepwise implementation tools such as the WHO practical manual for improving IPC at the health care facility level [ 19 ] and WASH FIT could offer guidance on finding stepwise, low-cost alternatives that still meet IPC standards. The WASH FIT guideline acknowledges that certain actions, such as installing a water supply, may not be feasible and recommends small actions that can instigate change, such as appealing to district authorities for improvement [ 35 ]. Limitations The mixed methods evaluation utilized to describe and assess the initial WHO IPC CC implementation experience at the reference hospitals in the DRC and BF had limitations that should be considered. Study participation was voluntary, and facility stakeholders were included based on their expressed interest in IPC. Thus, it is possible that the results of this study reflect settings with a greater-than-average interest in IPC. The KAP survey was self-administered, and responses may have been affected by social-desirability bias or misinterpreted despite initial instructions and guidance upon dissemination. Furthermore, the follow-up survey was administered directly after the training, and additional follow-up will be needed to understand long-term effects. Open-ended questions and plenary discussions were inductively coded and thematically compared, but the coding process may have been biased by the researchers’ subjectivity. Despite guidance provided during the IPCAF administration, social-desirability bias may have also affected the type of responses given.
Conclusion The mixed methods employed to evaluate the initial WHO IPC CC implementation experience at the reference hospitals in the DRC and BF revealed a range of implementation experiences, barriers and facilitators that could be used to inform stepwise approaches to the implementation of the WHO IPC CCs in low-resource settings. Implementation strategies should consider both IPC standards, such as the WHO IPC minimum requirements [ 10 ], and the specific local context affecting implementation. The early involvement of all relevant stakeholders, including health care facility leadership and decision-makers and health care personnel contributing to current or future IPC teams and committees, is critical to ensure sufficient support and an effective and sustainable process. Interactive training approaches with mixed evaluation methods and practical tools such as the WHO IPCAF can contribute to improved outcomes and action planning. Communication of the benefits for patients and HCWs may improve IPC programme perceptions and compliance. In parallel, ongoing advocacy for health system changes will also be needed to enable sufficient human and material resources for IPC and quality of care.
Background The coronavirus pandemic again highlighted the need for robust health care facility infection prevention and control (IPC) programmes. The WHO guidelines on the core components (CCs) of IPC programmes provide guidance for facilities, but their implementation can be difficult to achieve in resource-limited settings. We aimed to gather evidence on an initial WHO IPC implementation experience using a mixed methods approach. Methods A five-day training on the WHO IPC CCs was conducted at two reference acute health care facilities in the Democratic Republic of Congo and Burkina Faso. This was accompanied by a three-part mixed-methods evaluation consisting of: (1) a baseline and follow-up survey of participants’ knowledge, attitudes and practices (KAP), (2) a qualitative assessment of plenary discussion transcripts and (3) deployment of the WHO IPC assessment framework (IPCAF) tool. Results were analysed descriptively and with a qualitative inductive thematic approach. Results Twenty-two and twenty-four participants were trained at the two facilities, respectively. Baseline and follow-up KAP results suggested increases in knowledge related to the necessity of a dedicated IPC focal person and annual evaluations of IPC training, although a lack of recognition of the importance of including hospital leadership in IPC training and of hand hygiene monitoring recommendations remained. Most participants reported rarely attending IPC meetings or participating in IPC action planning, although attitudes shifted towards stronger agreement with the feeling of IPC responsibility and the importance of an IPC team. A reoccurring theme in plenary discussions was limited resources as a barrier to IPC implementation, namely a lack of reliable water access. However, participants recognised the importance of IPC improvement efforts such as practical IPC training methods or the use of data to improve quality of care. The facilities’ IPCAF scores reflected a ‘basic/intermediate’ IPC implementation level. Conclusions The training and mixed methods evaluation revealed initial IPC implementation experiences that could be used to inform stepwise approaches to facility IPC improvement in resource-limited settings. Implementation strategies should consider both global standards, such as the WHO IPC CCs, and specific local contexts. The early involvement of all relevant stakeholders and parallel efforts to advocate for sufficient resources and health system infrastructure are critical. Supplementary Information The online version contains supplementary material available at 10.1186/s13756-023-01358-1.
Acknowledgements We are thankful to all participants of the study for sharing their valuable time. Special thanks also to Sophie Müller, Megan Evans and Moussa Douno, who helped review the KAP questionnaire. Special thanks as well to René Umlauf and Carlos Rocha for qualitative and theoretical insights. We are also very appreciative of the contribution of Patrick Mirindi in planning the training content. Author contributions All authors made substantial contributions towards the conduct of the study, revised earlier versions of the manuscript and approved the final version for submission. S.T., S.M. and T.K. contributed as shared senior supervisors who guided study conceptualization and implementation. S.T. contributed to the pre-conception of the study framework, facilitated communications, co-drafted the KAP survey, supervised data analysis and provided in-depth editing of the manuscript. R.W. drafted and coordinated the study framework, drafted the protocol and KAP survey, facilitated implementation, performed data analysis and drafted the manuscript. W.T. and E.L. edited and coordinated the study framework and implementation, co-drafted the protocol and edited the KAP survey and manuscript. A.S. and A.H. coordinated the study framework and implementation, contributed to data collection and co-drafted the protocol and manuscript. C.B. adapted the training material, led the training, contributed to data collection and edited the manuscript. R.L., N.A., G.M. and A.Z. coordinated study implementation, facilitated data collection and edited the manuscript. S.A., J.M., F.L., T.E. and G.S. supervised the study design, coordinated implementation and edited the manuscript. Funding Open Access funding enabled and organized by Projekt DEAL. The study is funded by two grants from the German Federal Ministry of Education and Research (BMBF; grant number 01KA1606; grant number 01KI2047). Data availability All data and materials are accessible in the supplementary information. Declarations Ethics approval and consent to participate The ANDEMIA Project is currently operating in the Democratic Republic of Congo under the ethical approval granted by the Ethics Committee of the University of Kinshasa (Deliberation No. ESP/CE/042/2017), in Burkina Faso under the ethical approval granted by the Ethics Committee of the Burkina Faso Ministry of Health (Deliberation No. 2017-5-057), and in Germany under approval by Charité Medical University (EA2/230/17). All participants gave written consent to be included in the study. Consent for publication Not applicable. Competing interests The authors declare no competing interests. Abbreviations BF: Burkina Faso; CC: Core components; DRC: Democratic Republic of Congo; EVD: Ebola virus disease; HAI: Health care-associated infections; HCW: Healthcare worker; IPC: Infection Prevention and Control; IPCAF: Infection Prevention and Control Assessment Framework; IQR: Interquartile Range; KAP: Knowledge, Attitudes and Practice; INRB: National Institute of Biomedical Research; SARS-CoV-2: Severe Acute Respiratory Syndrome Coronavirus Type 2; WHO: World Health Organization
Background Histone methylation is a common epigenetic modification that plays a critical role in the regulation of gene transcription. This modification predominantly occurs on lysine residues at the N-terminus of histones and is catalyzed by a family of enzymes known as histone-lysine N-methyltransferases (KMT) [ 1 ]. The histone-lysine N-methyltransferase 2 (KMT2) family comprises several members, including KMT2A, KMT2B, KMT2C, and KMT2D, which exert significant effects on cellular processes such as proliferation, growth, development, and differentiation at various stages. Emerging research has demonstrated that mutations or aberrant expression of KMT2 genes are frequently observed in various tumors. These alterations disrupt histone methylation, leading to dysregulation of DNA damage repair, gene expression, and chromosomal structure, ultimately affecting normal cellular functions [ 2 ]. Consequently, these abnormalities directly contribute to the accumulation of genomic instability, increasing the risk of genetic mutations and chromosomal aberrations, thereby promoting tumor initiation and progression. Furthermore, genomic instability in tumor cells enhances tumor immunogenicity, potentially improving sensitivity to immune checkpoint inhibitor (ICI) therapy [ 3 ]. As a result, there is growing interest in investigating the role of KMT2 genes in tumor immunotherapy. In colorectal cancer, KMT2 family mutations are associated with a higher tumor mutation burden (TMB) and microsatellite instability, which are correlated with improved prognosis in patients with colorectal cancer [ 1 , 4 ]. In non-small cell lung cancer (NSCLC), different genetic alterations are associated with varying levels of programmed death-ligand 1 expression and TMB [ 5 ]. The co-occurrence of TP53/KMT2C mutations can effectively predict the response to ICI treatment. Collectively, these findings highlight the importance of understanding the impact of KMT2 genes on tumor immunotherapy, offering potential avenues for targeted therapies and personalized treatment strategies. However, current research has primarily focused on individual genes within the KMT2 family or specific cancer types, lacking comprehensive investigations into the systemic effects of the entire KMT2 family and their impact on the tumor immune microenvironment [ 6 ]. Therefore, it is crucial to conduct a systematic analysis using multiple immunotherapy cohorts and pan-cancer databases that can provide extensive genetic profiling characteristics [ 7 – 10 ]. This study aimed to explore the response to ICI therapy and the intrinsic biological connections in KMT2-mutated tumors across various dimensions, including immunotherapeutic efficacy and the tumor immune microenvironment. Through examining multiple characteristics, this study aimed to provide robust evidence regarding the relationship between KMT2 alterations and tumor immunotherapy, potentially contributing to advancements in this field.
Methods This study integrated mutational and clinical information from ICI-treated patients across four studies. MSK-IMPACT and whole-exome sequencing (WES) were utilized to sequence the cohort samples, and tumors were classified as KMT2-MUT or KMT2-WT based on KMT2 non-synonymous somatic mutations. Data from The Cancer Genome Atlas (TCGA) pan-cancer cohort across 33 cancer types were obtained to study the prognostic impact of KMT2 family mutations and the differences in tumor microenvironment between KMT2-WT and KMT2-MUT tumors. Outcome measures, including the objective response rate (ORR), overall survival (OS), progression-free survival (PFS), and durable clinical benefit (DCB), were obtained from the four studies. The TMB was calculated differently for the MSK-IMPACT and WES-sequenced samples. The proportion of infiltrating immune cells was determined using the CIBERSORT algorithm. Immune cell scores from a pan-cancer study were obtained, and the geometric mean of granzyme A (GZMA) and perforin 1 (PRF1) expression was calculated as the cytolytic activity score (CYT). Immunogenomic indicators, 29 immune signatures and enrichment scores for 10 oncogenic pathways were obtained from various sources. Statistical analyses involved Fisher’s exact test, the log-rank test, Cox regression analysis, and the Wilcoxon test using R software (version 4.0.2; R Foundation for Statistical Computing, Vienna, Austria). Statistical significance was set at P < 0.05. For detailed content and references, please refer to the Supplementary Methods.
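To make two of the derived measures concrete, the sketch below shows one common way TMB and the CYT can be computed. The paper's exact pipeline is described in its Supplementary Methods, so the panel sizes, the 0.01 offset and the example values here are assumptions based on common practice (the geometric-mean CYT definition follows Rooney et al.).

```python
# Sketch of two derived measures named in the Methods (illustrative values).
import numpy as np

def tmb(n_nonsynonymous: int, panel_megabases: float) -> float:
    """Tumor mutational burden as nonsynonymous mutations per megabase.
    WES is often normalized to ~30-38 Mb of coding sequence and the
    MSK-IMPACT panel to ~1 Mb; the exact denominator depends on the assay."""
    return n_nonsynonymous / panel_megabases

def cytolytic_activity(gzma_expr: np.ndarray, prf1_expr: np.ndarray) -> np.ndarray:
    """CYT as the geometric mean of GZMA and PRF1 expression (e.g. TPM);
    a small offset avoids zeros, as in Rooney et al.'s formulation."""
    return np.sqrt((gzma_expr + 0.01) * (prf1_expr + 0.01))

print(tmb(n_nonsynonymous=250, panel_megabases=30.0))            # WES example
print(cytolytic_activity(np.array([12.0, 3.5]), np.array([8.0, 1.2])))
```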
Results Mutation status of the KMT2 family in the TCGA pan-cancer cohort This study explored the somatic alteration frequency of four KMT2 genes (KMT2A, KMT2B, KMT2C, and KMT2D) in the TCGA pan-cancer cohort. The analysis revealed a high mutation rate within the KMT2 family across various cancers, with melanoma exhibiting the highest rate (exceeding 50%, Fig. 1 A). In the TCGA pan-cancer cohort, the mutational landscape of the KMT2 gene family revealed that KMT2D and KMT2C exhibited the highest mutation frequencies at 10%, followed by KMT2A and KMT2B at 6% (Fig S1 A). However, no significant hotspot mutations were identified in the KMT2 family (Fig S1 B). Survival analysis demonstrated that patients with KMT2 family mutations had decreased OS (Fig. 1 B), whereas no significant survival difference was observed in patients with or without mutations of the individual members of the KMT2 family (Fig S2 ). Considering the influence of clinical and pathological factors, we conducted subgroup analyses based on tumor type, clinical stage, and histological grade. Univariate Cox regression analyses revealed that KMT2 family mutations significantly affected the prognosis of patients in the malignant pleural mesothelioma and uterine corpus endometrial carcinoma subgroups (Table S1 ). No significant differences were observed among the other subgroups (Table S1-3 ). KMT2 family mutations predicted improved clinical outcomes in the ICI-treated cohort We constructed an ICI-treated cohort comprising response data and mutational data from four studies ( n = 2069) across 10 cancer types to explore the impact of KMT2 family mutations on clinical outcomes in patients receiving ICI therapy. Patients were divided into the KMT2 (A, B, C, D)-MUT and KMT2-WT groups according to the family or individual KMT2 mutation status. First, we explored the difference in TMB between the KMT2-MUT and KMT2-WT groups in the ICI-treated cohort and found that the KMT2-MUT group had a higher TMB (Fig. 1 G); similar results were observed in the KMT2 (A, B, C, D)-MUT groups (Fig S1 C). In the TCGA pan-cancer cohort, we found that non-silent and silent mutation rates were higher in the KMT2-MUT (Fig. 1 G) and KMT2 (A, B, C, D)-MUT groups compared with the KMT2-WT group (Fig S1 D, E). These results suggested that KMT2-MUT tumors had enhanced immunogenicity. The mRNA expression levels of three immune checkpoints (PDCD1, CTLA-4, CD274) were compared between the KMT2-MUT and KMT2-WT groups, and all were significantly elevated in the KMT2-MUT group (Fig. 1 H). Similar results were observed in the KMT2 (A, B, C, D)-MUT groups (Fig S3 A), suggesting that KMT2-MUT tumors might be sensitive to ICI therapy. Moreover, we analyzed clinical outcomes (PFS, OS, DCB, and ORR) in the KMT2-MUT and KMT2-WT groups. The results indicated that the KMT2-MUT group had significantly longer OS (median OS: 34.0 months vs. 16.4 months, P < 0.001, hazard ratio [HR] = 0.733 [95% confidence interval (CI): 0.632–0.850]) and PFS (median PFS: 9.1 months vs. 3.5 months, P = 0.002, HR = 0.669 [95% CI: 0.518–0.864]) (Fig. 1 C, E), as well as a significantly higher ORR (40.6% vs. 22.0%, P < 0.001) and DCB (54.1% vs. 32.6%, P < 0.001) (Fig. 1 D, F). Similar results were observed in the KMT2 (A, B, C, D)-MUT groups (Fig S3 B-E). OS- and PFS-related univariate and multivariate Cox regression analyses were further conducted, and we found that mutations in the KMT2 family are a potential independent predictor of the prognosis of patients receiving ICI therapy (Fig. 1 I, J); similar results were observed in the KMT2 (A, B, C, D)-MUT groups (Fig S4 ).
Using random sampling for 1000 iterations, we analyzed the clinical outcomes (PFS, OS, DCB, and ORR) within each generated internal clinical cohort. Patients in the KMT2-MUT group exhibited a longer average median OS (34.00 vs. 16.58 months, HR = 0.574 [95% CI: 0.617–0.877]) and PFS (9.07 vs. 3.55 months, HR = 0.667 [95% CI: 0.498–0.894]), along with a higher average ORR (40.6% vs. 22.0%) and DCB (54.4% vs. 33.3%) (Supplementary file 3 ). Exploration of the tumor immune microenvironment in KMT2-MUT and KMT2-WT tumors Tumor immunogenicity plays a critical role in antitumor immunity, and boosted tumor immunogenicity stimulates improved antitumor immunity. We compared the scores of 10 immunogenomic indicators between KMT2-MUT and KMT2-WT tumors and found that the scores of all 10 immunogenomic indicators were significantly higher in KMT2-MUT tumors (Fig. 2 A). We also observed that the expression levels of most MHC molecules were elevated in KMT2-MUT tumors (Fig. 2 B). These results suggest that KMT2-MUT tumors have significantly boosted immunogenicity. Immune cell infiltration into tumors is crucial for the immune system to execute its immune functions. We compared the immune cell infiltration levels between KMT2-MUT and KMT2-WT tumors from four aspects: (1) leukocyte fractions measured using DNA methylation arrays; (2) infiltration levels of lymphocytes measured using the CIBERSORT algorithm; (3) genomic measurements of the tumor-infiltrating lymphocyte (TIL) fraction; and (4) the TIL fraction estimated by deep learning methods based on hematoxylin and eosin-stained (H&E-stained) slides. The results demonstrated that the scores of all four indicators were higher in KMT2-MUT tumors than in KMT2-WT tumors (Fig. 2 E), suggesting that KMT2-MUT tumors exhibited higher levels of immune cell infiltration. Similar results were observed in KMT2 (A, B, C, D)-MUT tumors (Fig S5 ). Twenty-nine immune signature scores for each sample in the TCGA pan-cancer cohort were estimated using the single-sample gene set enrichment analysis (ssGSEA) method. Based on these immune signature scores, two stable immune subtypes were identified using unsupervised clustering. The immune subtype with higher immune signature scores was defined as “hot tumor”, while the immune subtype with lower immune signature scores was defined as “cold tumor” (Fig. 2 C). After conducting Fisher’s exact test, we found that a significantly higher proportion of “hot tumors” existed among KMT2-MUT tumors (Fig. 2 D); similar results were observed in KMT2 (A, B, C, D)-MUT tumors (Fig S6A ). We then used Danaher’s method to estimate the enrichment scores of particular cell types in each sample of the TCGA pan-cancer cohort. Most enrichment scores, including those of CD8 T cells, were higher in KMT2-MUT tumors than in KMT2-WT tumors (Fig. S6B ). Considering that CD8 T cells are critical for antitumor immunity, we also estimated the CD8 T cell proportion of each sample in the TCGA pan-cancer cohort using the CIBERSORT algorithm and determined that KMT2-MUT tumors had a larger proportion of CD8 T cells than KMT2-WT tumors (Fig. 2 G); similar results were observed in KMT2 (A, B, C, D)-MUT tumors (Fig S7 ). The volcano plot provides a more in-depth presentation of the higher cell enrichment scores in KMT2-MUT tumors (Fig S8A ). The expression levels of some chemokines, such as CXCL9 and CXCL10, which have been shown to recruit CD8 T cells, were significantly higher in KMT2-MUT tumors (Fig. 2 B).
This association might explain the higher immune cell infiltration levels in KMT2-MUT tumors. Twenty-nine immune signature scores, possibly representing the immune activity profile of the tumor, were estimated for each sample of the TCGA pan-cancer cohort using the ssGSEA method. The results indicated that most immune signature scores were higher in KMT2-MUT tumors. The volcano plot provides a more in-depth presentation (Fig. 2 F, I). Additionally, the scores for CD8 T cells, which play a key role in tumor immunity, were significantly higher in KMT2-MUT tumors. Moreover, the correlation among immune activities was higher in KMT2A-MUT tumors (Fig S8 B, C), while no significant difference was observed between the other two groups (Fig S9A ). We also calculated the CYT to evaluate the differences in immune cell cytotoxicity between KMT2-MUT and KMT2-WT tumors and found that KMT2-MUT tumors had stronger immune cell cytotoxicity (Fig. 2 H; S8 E, F; S9 B-D). In addition, the expression levels of most interleukins and their receptors were higher in KMT2-MUT tumors (Fig S8G ). These results indicated that boosted tumor immunogenicity, higher levels of immune infiltration, and improved immune activity exist in KMT2-MUT tumors, suggesting that KMT2-MUT tumors might have an improved response to ICI therapy. In addition, we investigated some classical carcinogenic pathways enriched in KMT2-MUT and KMT2-WT tumors, the results of which are shown in Fig S8H . To explore the consistency of our research findings across various immune infiltration assessment methods, we included a comprehensive table summarizing the results obtained from various algorithms, such as XCELL, EPIC, MCPCounter, and ESTIMATE (Supplementary File 1 ). We employed the ESTIMATE algorithm to assess all samples in the TCGA pan-cancer cohort and discovered that KMT2-MUT tumors exhibit a significantly higher “ImmuneScore,” indicating a greater degree of overall immune infiltration in KMT2-MUT tumors.
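As a hedged sketch of the "hot"/"cold" dichotomization described above: samples can be clustered on their 29 ssGSEA immune-signature scores, and the cluster with the higher mean score labelled "hot". The study states only that unsupervised clustering was used; k-means below is an illustrative choice, and the score matrix is random placeholder data, not TCGA values.

```python
# Illustrative hot/cold immune subtyping from ssGSEA-style signature scores.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
scores = rng.normal(size=(200, 29))   # samples x 29 immune-signature scores
scores[:80] += 1.0                    # simulate an immune-high subgroup

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
# Label the cluster with the higher mean signature score as "hot tumor".
hot_cluster = int(scores[labels == 1].mean() > scores[labels == 0].mean())
immune_subtype = np.where(labels == hot_cluster, "hot tumor", "cold tumor")
print(dict(zip(*np.unique(immune_subtype, return_counts=True))))
```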
Discussion Our study results demonstrated that KMT2 family mutations could predict improved clinical outcomes in patients undergoing ICI therapy. This could be attributed to the enhanced tumor immunogenicity, increased immune infiltration, and improved immune activity observed in KMT2-MUT tumors. A tumor microenvironment characterized by these factors may play a crucial role in facilitating better clinical outcomes in patients with KMT2 family mutations receiving ICI therapy. As an important epigenetic regulator, KMT2 frequently undergoes frameshift, truncation, and missense mutations in various tumors, mainly affecting the expression of the carboxyl-terminal SET domain. KMT2C is the most commonly mutated gene in gastric adenocarcinoma, while KMT2D is one of the most frequently mutated genes in epithelial cancers. Epithelial tissues rely on a highly coordinated balance among self-renewal, proliferation, and differentiation. KMT2D mutations can disrupt this balance and drive the transformation of normal epithelium into tumors. This is because KMT2D mutations can lead to decreased expression of p63 target genes and key genes involved in epithelial development and adhesion, as well as widespread loss of the histone enhancer modifications H3K4 monomethylation and H3K27 acetylation [ 11 ]. In our study, four KMT2 family genes (KMT2A, KMT2B, KMT2C, and KMT2D) were identified as among the most commonly altered genes in various tumors. To ensure that the OS difference between ICI-treated patients with different KMT2 family statuses was not solely attributable to the general prognostic effects of KMT2 family mutations, we conducted a survival analysis. This analysis compared patients with or without KMT2 family mutations in the TCGA pan-cancer cohort, aiming to investigate the general prognostic impact of KMT2 family mutations in various cancer types. Our results indicated a trend towards poorer prognosis among untreated patients with KMT2 mutations; however, statistical significance was not achieved. Subsequently, within the specific ICI therapy cohorts, we found that, after initiating ICI therapy, patients with mutations demonstrated significantly better survival than those without mutations. This suggests that mutations in KMT2 may lead to biological changes in the tumor, consequently rendering these cancers more responsive to ICI treatment. Zhang et al. found that patients with KMT2A/C mutations had improved prognosis in terms of PFS, ORR, DCB, and OS [ 12 ]. These findings align with those of our research; however, their study did not systematically categorize the KMT2 family genes or investigate the tumor microenvironment to explore potential mechanisms. Research exploring the relationship between KMT2 and immunotherapy is lacking. To verify the reliability of our results, we randomly selected a number of patients from the KMT2-WT group similar to the number in the KMT2-MUT group, forming an internal clinical cohort along with the patients from the KMT2-MUT group. These results confirmed the conclusions obtained from the original ICI-treated cohort. Studies on the structure of the KMT2 genes and their corresponding proteins are gradually increasing, clarifying their potential as drug targets [ 2 ]. However, research on changes in the tumor microenvironment caused by KMT2 gene alterations and their specific role in tumor immune regulation is scarce [ 13 ]. In this study, we explored the tumor microenvironment of KMT2-MUT and KMT2-WT tumors in pan-cancer and ICI cohorts.
We explored immune infiltration from four aspects: the leukocyte fraction measured by DNA methylation arrays, the infiltration levels of lymphocytes measured by the CIBERSORT algorithm, genomic measurement of the TIL fraction, and the TIL fraction estimated by deep learning methods based on H&E-stained slides [ 14 , 15 ]. In addition, we included a comprehensive table summarizing the results obtained from various algorithms, such as XCELL, EPIC, MCPCounter, and ESTIMATE. These results indicate a greater degree of immune infiltration in KMT2-MUT tumors compared with KMT2-WT tumors. However, the function, molecular mechanism, and biological significance of KMT2 mutations in tumors require further study. This will potentially aid in discovering improved targeted treatment methods and positively impact personalized cancer treatment. Our current research has some limitations and lacks experimental validation. Therefore, further clinical and fundamental experiments are needed to explore the mechanisms underlying KMT2-MUT tumors. In addition, compared with combined prognostic signatures, a single-gene-family analysis provides a less comprehensive and detailed picture of the biological and/or clinical significance of the findings.
Conclusion This study revealed that patients with KMT2 mutations benefited significantly from ICI therapy in terms of OS, PFS, DCB, and ORR. These tumors are considered “hot tumors,” harboring a tumor microenvironment potentially more responsive to ICI therapy. Consequently, KMT2 family mutation status could serve as an effective predictor of ICI therapy outcomes, indicating associated tumor microenvironment variations and guiding personalized therapeutic strategies.
Mounting evidence suggests a strong association between tumor immunity and epigenetic regulation. The histone-lysine N-methyltransferase 2 (KMT2) family plays a crucial role in the methylation of histone H3 at lysine 4. By influencing chromatin structure and DNA accessibility, this modification serves as a key regulator of tumor progression and immune tolerance across various tumors. These findings highlight the potential significance of the KMT2 family in determining the response to immune checkpoint inhibitor (ICI) therapy, which warrants further exploration. In this study, we integrated four ICI-treated cohorts ( n = 2069) across 10 cancer types and The Cancer Genome Atlas pan-cancer cohort and conducted a comprehensive clinical and bioinformatic analysis. Our study indicated that patients with KMT2 family gene mutations benefited more from ICI therapy in terms of overall survival ( P < 0.001, hazard ratio [HR] = 0.733 [95% confidence interval (CI): 0.632–0.850]), progression-free survival ( P = 0.002, HR = 0.669 [95% CI: 0.518–0.864]), durable clinical benefit ( P < 0.001, 54.1% vs. 32.6%), and objective response rate ( P < 0.001, 40.6% vs. 22.0%). Through a comprehensive analysis of the tumor microenvironment across different KMT2 mutation statuses, we observed that tumors harboring KMT2 mutations exhibited enhanced immunogenicity, increased infiltration of immune cells, and higher levels of immune cell cytotoxicity, suggesting a propensity towards a “hot tumor” phenotype. Therefore, our study indicates a potential association between KMT2 mutations and a more favorable response to ICI therapy and implicates different tumor microenvironments associated with ICI therapy response. Supplementary Information The online version contains supplementary material available at 10.1186/s12943-023-01930-8.
Acknowledgements We thank Yu Lin and Shenzhen Withsum Technology Limited for their technical support in statistical analysis. Author contributions Dongxu Wang, Ruizhe Li and Junyu Long conceived this study. Dongxu Wang, Ruizhe Li, Junyu Long, Jie Liu, Haitao Zhao and Tao Li collaborated on the design of this project. Hui Liu, Jingru Liu, Jincheng Tian, Han Li and Zhaoru Dong obtained and analyzed the expression data. Daolin Zhang was of great help in the preparation of the manuscript. Dongxu Wang, Jie Liu and Ruizhe Li wrote the paper. All authors read and approved the final manuscript. Funding This work was supported by the Beijing Natural Science Foundation (Grant No. 7234381), the Fundamental Research Funds for the Central Universities (3332023011), National High Level Hospital Clinical Research Funding (2022-PUMCH-B-128), the CAMS Innovation Fund for Medical Sciences (CIFMS) (2022-I2M-C&T-A-003, 2021-I2M-1-061 and 2021-I2M-1-003), the National Natural Science Foundation of China (Grant Nos. 82073200, 81874178, 82203000, 82203014 and 82303720), the China Postdoctoral Science Foundation (2023M742149), Major Basic Research of the Shandong Provincial Natural Science Foundation (Grant No. ZR2021ZD26), Funds for Independent Cultivation of Innovative Teams from Universities in Jinan (Grant No. 2020GXRC023), the Taishan Scholars Program of Shandong Province (tstp20221158, tsqnz20221164, tsqn202306386), and the Shandong Provincial Natural Science Foundation (ZR2022QH300, ZR2022QH017). Data availability All of the data we used in this study were publicly available in cBioPortal ( https://www.cbioportal.org ) and the PanCancer Atlas consortium ( https://gdc.cancer.gov/about-data/publications/pancanatlas ). The code utilized for data processing to support the findings of this study is available on request from the corresponding author. Declarations Ethics approval and consent to participate Ethical approval was waived since we used only publicly available data and materials in this study. Consent for publication Informed consent was obtained from all individual participants included in the study. Competing interests The authors declare no conflict of interest. Abbreviations CNN: Convolutional neural network; CR: Complete response; CTLA-4: Cytotoxic T lymphocyte antigen 4; CYT: Cytolytic activity score; DC: Dendritic cell; DCB: Durable clinical benefit; FDA: Food and Drug Administration; GZMA: Granzyme A; H&E: Hematoxylin and eosin-stained; HNSCC: Head and neck squamous cell carcinoma; HR: Hazard ratio; IPS: Immunophenoscore; ICI: Immune checkpoint inhibitors; KMT2: Histone-lysine N-methyltransferase 2; KMT2-WT: KMT2-wildtype; KMT2-MUT: KMT2-mutant; MHC: Major histocompatibility complex; MSK-IMPACT: Memorial Sloan Kettering-Integrated Mutation Profiling of Actionable Cancer Targets; NDB: No durable benefit; NSCLC: Non-small cell lung cancer; ORR: Objective response rate; OS: Overall survival; PD: Progression of disease; PD-(L)1: Programmed cell death (ligand) 1; PFS: Progression-free survival; PR: Partial response; PRF1: Perforin 1; RECIST: Response Evaluation Criteria in Solid Tumors; SD: Stable disease; ssGSEA: Single-sample gene set enrichment analysis; TCGA: The Cancer Genome Atlas; TCR: T cell receptor; TIL: Tumor-infiltrating lymphocyte; TMB: Tumor mutational burden; TME: Tumor microenvironment; WES: Whole-exome sequencing
Introduction Background and rationale {6a} Unfortunately, there is no information on the prevalence of Helicobacter pylori (HP) infection among Syrians [ 3 ]. A systematic review showed that the prevalence of HP infection ranges between 22 and 87.6% in Middle Eastern countries; regrettably, it did not include any data from Syria [ 4 ]. Syrian refugees may have an HP infection prevalence similar to that of their native country. There are two reports of the prevalence of HP infection among Syrian refugees. The first reported that only 8 individuals (66.7%) from the Middle East region were infected with HP when they presented to a family care clinic in the USA. Unfortunately, this prevalence includes patients from across the Middle East and may not adequately reflect the prevalence of infection among Syrians [ 5 ]. The second report, from Germany, revealed that the prevalence of HP infection among Syrian refugees was about 34%, which may be closest to reality [ 6 ]. The prevalence of HP infection appears to be higher in developing nations than in industrialized nations, with the majority of infections happening during childhood. Poor sanitation standards, low income levels, and overcrowded living conditions appear to be associated with a higher prevalence of HP infection [ 7 ]. The recent humanitarian crisis has had a terrible impact on Syrian lives, resulting in millions of refugees and displaced individuals, massive infrastructure destruction, and the greatest economic catastrophe Syria has ever faced. It has had an enormous impact on the health sector, with up to 50% of health facilities destroyed and up to 70% of healthcare providers fleeing Syria [ 8 , 9 ]. Peptic ulcer disease and consequent bleeding [ 10 – 12 ], gastric adenocarcinoma [ 13 , 14 ], dyspepsia [ 15 , 16 ], mucosa-associated lymphoid tissue (MALT) lymphoma [ 17 , 18 ], unexplained iron deficiency anemia [ 19 ], and idiopathic thrombocytopenic purpura [ 20 , 21 ] are all linked to HP infection, which requires antimicrobial treatment [ 22 , 23 ]. In real-world applications, only a few antibiotics are efficient at eradicating HP infection. Treatment regimens include a combination of two or three antibiotics and a proton pump inhibitor (PPI), with or without a bismuth component that provides extra antibiotic properties [ 22 , 23 ]. However, the increasing antibiotic resistance of HP has become a major global problem [ 24 – 35 ]. In Syria, the eradication rate of traditional triple therapy with clarithromycin or levofloxacin was less than 30% [ 36 ], whereas the eradication rates with the levofloxacin concomitant regimen and the doxycycline-bismuth-based quadruple regimen were 82.05% and 78.9%, respectively [ 37 ]. As a result, there is a need to look for more effective first-line therapeutic regimens as well as the best rescue regimens for follow-up when the first line of treatment fails.
Methods: participants, interventions, and outcomes Study setting {9} The research will be carried out in Damascus Hospital’s outpatient clinic. Damascus Hospital, the primary medical facility affiliated with the Ministry of Health and located in Damascus, Syria’s capital, treats patients from throughout the country. Eligibility criteria {10} Inclusion criteria (1) Men and women between the ages of 18 and 65 years; (2) naive to HP infection treatment; (3) HP-positive, as determined by histological examination. Exclusion criteria (1) Allergic to any medicine supplied; (2) pregnant or lactating; (3) suffering from serious systemic disorders, such as severe cardiopulmonary or hepatic dysfunction; (4) having had a previous gastrectomy or a history of stomach cancer; (5) chronic renal failure; (6) unwilling to participate in the trial; (7) persistent use of non-steroidal anti-inflammatory drugs (NSAIDs), antibiotics, proton-pump inhibitors (PPIs), H 2 receptor antagonists, aspirin, herbal remedies, or probiotics during the trial procedure. Continuing to take any dose of any of the above medications meets this definition of persistent use, and any participant who starts using or continues to use any of these treatments during the trial or follow-up will be excluded [ 38 ]. Who will take informed consent? {26a} Before beginning any trial procedures, investigators must obtain written informed consent from patients. The investigators will explain and discuss the trial with potential volunteers to ensure that they understand what is being studied and that their involvement is entirely voluntary. Patients will be informed that they can drop out of the trial at any time. It will be explicitly specified that only clinical information will be discussed in the research, with no private data mentioned in any part of the trial report. It will also be made clear to all patients that dropping out of the trial will not affect the quality of follow-up or treatment. Additional consent provisions for collection and use of participant data and biological specimens {26b} A written request form must accompany every specimen, and the identification information on the specimen and requisition must be identical. The requisition form must contain all the following information: (1) patient’s legal name, (2) unique identification number, (3) age, (4) source of the specimen, (5) complete provider details, (6) underlying medical condition, and (7) pathology investigations requested. All specimen containers must be leak-proof, placed in a secondary leak-proof container for transport to the laboratory, and transported to the laboratory as quickly as possible. Tissue specimens must be suspended or totally immersed in ten times their volume of 10% neutral-buffered formalin to maintain the integrity of the specimen. The initial informed consent form contains the data collection and request information. Interventions Explanation for the choice of comparators {6b} For two weeks, patients on the concomitant levofloxacin regimen will receive levofloxacin 500 mg once daily, plus amoxicillin 1000 mg, tinidazole 500 mg, and esomeprazole 20 mg twice daily. When applied as a first-line treatment, this regimen had the highest eradication rate among Syrian patients naive to HP treatment [ 36 , 37 ]. A rescue regimen of high-dose dual therapy consisting of esomeprazole (40 mg twice daily) and amoxicillin (1000 mg three times daily) for 2 weeks will be used after first-line treatment fails [ 22 , 39 ].
Intervention description {11a} The sequential levofloxacin regimen consists of esomeprazole 20 mg and amoxicillin 1000 mg taken twice daily for 1 week, followed by metronidazole 500 mg and esomeprazole 20 mg twice daily, plus levofloxacin 500 mg once daily, for 1 week [ 22 , 40 ]. Microscopic examination of gastric biopsies stained with hematoxylin and eosin, followed by Giemsa staining [ 41 ], will be used to confirm HP infection; this approach has a sensitivity of 95% and a specificity of 99% [ 41 ]. Gastric biopsies will be collected via gastroduodenoscopy and forwarded to the pathology laboratory of the same referral hospital. According to the Sydney system [ 42 ], endoscopists will obtain five gastric biopsies: two from the body, two from the antrum, and one from the incisura. If the first-line treatment fails, high-dose dual therapy with a PPI and amoxicillin will be provided as rescue therapy [ 22 ]. Criteria for discontinuing or modifying allocated interventions {11b} The safety of HP eradication therapy is well established, with the most common adverse events being taste disturbance, diarrhea, nausea, and abdominal pain. The vast majority of adverse events are minor and transient [ 43 ]. According to estimates, just 1.3% of patients terminate treatment due to adverse events [ 44 ]. Six weeks after ending treatment, all patients will visit the central laboratory of our hospital and undergo stool antigen tests using the enzyme immunoassay (EIA) method [ 45 ]. In our research, when individuals report adverse events, the investigators will examine them. The trial’s drugs may be withdrawn for any of the following reasons: (1) significant adverse events that are considered unsuitable for continuing treatment, such as events that are life-threatening, necessitate inpatient hospitalization, result in persistent or significant disability or incapacity, or may necessitate medical or surgical intervention to prevent one of the outcomes listed above; (2) inability to comply with the trial procedures. Individuals may withdraw from the trial at any time for any reason. The reasons for withdrawal will be noted if individuals indicate them. Strategies to improve adherence to interventions {11c} Face-to-face instruction and a leaflet about the trial will be provided at the initial appointment, covering how to take the medication, potential treatment interactions, side effects, and contraindications; any inquiries will be welcomed, either in person by visiting the investigators or over the phone. Investigators will discuss the importance of participants completing treatment regimens. Patients are also encouraged to alert investigators if they experience issues linked to trial therapies. Furthermore, participants will be given instructions on how to take trial medications (dosage, timing, and storage), as well as what to do if a dose is missed. Participants will also receive a compliance reminder phone call twice a week after treatment begins. Relevant concomitant care permitted or prohibited during the trial {11d} All other antibiotics, proton-pump inhibitors (PPIs), H 2 receptor antagonists, non-steroidal anti-inflammatory drugs (NSAIDs), aspirin, herbal remedies, and probiotics will be restricted during the trial procedure. Provisions for post-trial care {30} As this trial is a low-risk intervention, no particular post-trial care is required. The trial’s risks are covered by insurance at the trial site; this coverage may include additional health care, reimbursement, or damages.
Outcomes {12} The trial’s primary outcome is the percentage of patients who have successfully eradicated HP infection. This will be established based on the findings of stool antigen testing using the enzyme immunoassay (EIA) method [ 45 ], 6 weeks after the completion of the first-line treatment phase of the levofloxacin-based concomitant or levofloxacin-based sequential treatment regimen. The HP stool antigen test is an accurate approach for confirming HP eradication after the fourth week following treatment, based on a comprehensive evaluation of 25 reports involving 2078 individuals that examined the HP stool antigen test for confirmation of HP eradication; the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 88.3%, 92%, 75.1%, and 94.8%, respectively [ 46 ]. The secondary outcomes are: (1) the eradication rate of the rescue treatment and the total eradication rates of the levofloxacin-based concomitant and levofloxacin-based sequential treatment regimens; (2) the type and frequency of adverse events, as well as the rate of compliance, in the levofloxacin-based concomitant and levofloxacin-based sequential treatment regimen groups. Participant timeline {13} Figure 2 summarizes the enrollment, intervention, and assessment timeline. Sample size {14} We will conduct this as a superiority clinical trial, and the calculation of the sample size is based on the primary outcome, which is the eradication rate of HP infection using the ITT analysis. According to a meta-analysis by Kale-Pradhan et al., the eradication rate of a sequential levofloxacin-based treatment regimen was 87.8% (P2 = 0.877) [ 40 ], while the eradication rate of a concomitant levofloxacin-based regimen, based on the results of a randomized clinical trial conducted in Syria, was 82.05% (P1 = 0.8205) [ 37 ]. We applied the following statistical assumptions: (1) 80% power ( β = 0.2); (2) a 5% level of significance ( α = 0.05), with Zα and Zβ of 1.64 and 0.845, respectively; (3) a superiority margin of 10% ( δ = − 0.10) and ( ε = 0.20); and (4) a case-to-control ratio equal to one ( K = 1). The sample size formula for a two-parallel-arm superiority design resulted in 64 patients per arm [ 47 , 48 ] (see the worked sketch below). We also added 15% for possible dropouts, making the final sample size 150 patients. For more information, please check Additional file 1 . Recruitment {15} Participants will be recruited through Damascus Hospital’s outpatient clinic. We also have a dedicated phone line to answer any questions about participating in the study. On average, our outpatient clinic sees 10 to 15 newly diagnosed HP-infected patients per month. The investigators will explain the trial to eligible patients and ensure they understand the hazards of participating. Before taking part in the trial, patients will need to sign an informed consent form.
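As a cross-check of the sample size figures above, the following minimal Python sketch reproduces the per-arm count with a standard two-proportion superiority formula [ 47 , 48 ]. It is an illustration, not the protocol's actual worksheet: the exact formula form and the interpretation of ε as the anticipated difference P2 − P1 (about 0.0565) are assumptions made here, chosen because they recover the reported 64 patients per arm.

```python
from math import ceil

# Design parameters reported in Sample size {14}
p1, p2 = 0.8205, 0.877        # eradication rates (concomitant, sequential)
z_alpha, z_beta = 1.64, 0.845  # alpha = 0.05 (one-sided), power = 80%
delta = -0.10                  # superiority margin

# Assumption: epsilon is taken as the anticipated difference p2 - p1,
# which reproduces the protocol's reported n = 64 per arm.
epsilon = p2 - p1
variance = p1 * (1 - p1) + p2 * (1 - p2)
n_per_arm = (z_alpha + z_beta) ** 2 * variance / (epsilon - delta) ** 2
print(round(n_per_arm))        # ~64 patients per arm

total = 2 * round(n_per_arm)   # 128 patients across both arms
print(ceil(total * 1.15))      # ~148 after adding 15% dropout; 150 in protocol
```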
Discussion The prevalence of HP infection in Syria is high and is estimated to be about 34% [ 6 ]. On the other hand, 67.3% of the Syrian population practice self-medication, and the most commonly used drugs are antibiotics, which creates a further problem of bacterial resistance [ 52 ]. Antibiotic resistance restricts the efficacy of triple therapy for HP infection around the world, necessitating the search for new treatment protocols. Until 2018, the most commonly used HP treatment in Syria was triple therapy with clarithromycin and, to a lesser extent, triple therapy with levofloxacin, until a pilot trial conducted in Syria proved the ineffectiveness of these regimens [ 36 ]. Treatment guidelines recommend the use of levofloxacin-concomitant or bismuth-containing regimens as an alternative first-line treatment, particularly in areas with a high frequency of clarithromycin resistance, such as Syria [ 22 , 23 ]. Levofloxacin-concomitant, levofloxacin-sequential, and bismuth-based protocols are accordingly the most commonly used for HP infection in Syria. Unfortunately, none of these regimens was as effective as expected, since the HP eradication rate in Syria did not exceed 82% at best [ 37 , 53 ]. A meta-analysis revealed that a fluoroquinolone-based sequential regimen is a viable therapeutic option for HP infection treatment, with an eradication rate of 87.8%, but it has not been evaluated in Syria [ 40 ]. As a result, it is critical to find a treatment regimen with a higher eradication rate, and, most importantly, to estimate the eradication rate of a rescue regimen of dual therapy with high-dose PPI and amoxicillin following first-line failure [ 22 ]. In general, compliance with medication is defined as taking more than eighty percent of prescribed doses. Although the sequential regimen is believed to be beneficial, there are concerns about compliance because of the medication change between the first and second weeks [ 38 ]. This study will evaluate the two HP treatment regimens that are currently of the most interest. Furthermore, because Damascus Hospital is the main hospital linked with the Ministry of Health, we will enroll patients from throughout Syria. A suitable sample size will be collected to address the trial question statistically. This trial will generate critical evidence that may lead to adjustments in first-line treatment regimens for Helicobacter pylori infection in Syria, thus increasing eradication rates and enhancing patients’ quality of life.
Background Treating Helicobacter pylori is becoming increasingly difficult with the development of bacterial resistance to many established treatment regimens. As a result, researchers are constantly looking for novel and effective treatments. This trial aims to establish the efficacy of a levofloxacin-based sequential treatment regimen and a concomitant levofloxacin-based regimen as empirical first-line therapy in the Syrian population. Method This is an open-label, prospective, single-center, parallel, active-controlled, superiority, randomized clinical trial. The recruitment will target Helicobacter pylori -positive males and females between the ages of 18 and 65 to evaluate the efficacy of empirical first-line therapy in the Syrian population. We plan to recruit up to 300 patients, which is twice the required sample size. One hundred fifty individuals will be randomly assigned to undergo either a sequential levofloxacin-based treatment regimen or a concomitant levofloxacin-based regimen. High-dose dual therapy (proton-pump inhibitor and amoxicillin) will be the rescue therapy in the event of first-line failure. The first-line eradication rate in both groups is the primary outcome, and one of the secondary outcomes is the overall eradication rate of high-dose dual therapy in the event of first-line treatment protocol failure. Intention-to-treat analysis and per-protocol analysis will be used to evaluate the eradication rates of Helicobacter pylori for the first-line treatment protocols. Discussion For the first time in the Syrian population, this randomized controlled trial will provide objective and accurate evidence about the efficacy of a sequential levofloxacin-based treatment regimen. Trial registration ClinicalTrials.gov NCT06065267 . Registered on October 3, 2023. Prospectively registered. Enrollment of the first participant has not started yet. Supplementary Information The online version contains supplementary material available at 10.1186/s13063-024-07906-3.
Administrative information Note: the numbers in curly brackets in this protocol refer to SPIRIT checklist item numbers [ 1 ]. The order of the items has been modified to group similar items [ 2 ] (see http://www.equator-network.org/reporting-guidelines/spirit-2013-statement-defining-standard-protocol-items-for-clinical-trials/ ). Objectives {7} The main objective of this study is to compare the eradication rate of HP infection using levofloxacin-based sequential therapy against levofloxacin-based concomitant therapy using an intention-to-treat analysis (ITT) and a per-protocol analysis (PPA). Trial design {8} This is a single-center, prospective, superiority, randomized, open-label, active-controlled clinical trial with a 1:1 allocation ratio. Figure 1 shows the trial flow chart. Assignment of interventions: allocation Sequence generation {16a} An independent assistant will generate a randomized number table using LibreOffice Calc’s RANDBETWEEN function [ 49 ]. Concealment mechanism {16b} The independent assistant will keep the randomized number table sealed in an envelope. Implementation {16c} Once a researcher has obtained the patient’s informed consent, they will contact the independent assistant to obtain the allocated treatment regimen. Assignment of interventions: blinding Who will be blinded {17a} It is challenging to blind the participants and the researchers in this open-label trial. However, throughout the whole trial, laboratory medical professionals, data collectors, and data analysts will remain blinded to the therapy allocation. The allocation information will be hidden on the data collection form for adverse events and compliance, and data collectors are not permitted to inquire about participants' regimens. Procedure for unblinding if needed {17b} Not applicable, because it is not possible to blind the investigators or participants. Data collection and management Plans for assessment and collection of outcomes {18a} During screening, sociodemographic and baseline information (including age, sex, contact information, and past medical history) will be obtained. The success of HP eradication will be assessed by stool antigen assays performed with the enzyme immunoassay (EIA) method 6 weeks following the completion of therapy [ 45 ]. Participants will be prohibited from using PPIs, antibiotics, H 2 receptor antagonists, aspirin, herbal remedies, and probiotics for 8 weeks until the stool antigen testing. The stool antigen tests must be performed no later than 6 weeks + 3 days after the completion of therapy. The data on adverse events and compliance will be obtained face-to-face and documented using the data collection form no later than 3 days following the completion of therapy. The count of pills taken will be used to determine study drug compliance. The data will be collected by two trained assistants who are instructed to collect the information in a consistent, reproducible manner, and the integrity of the data will be overseen by the principal investigator. Plans to promote participant retention and complete follow-up {18b} Investigators will communicate with participants on a regular basis, by phone and WhatsApp, twice a week. They will use reminders such as phone calls, text messages (SMS), or WhatsApp messages to prompt participants to take the trial medicine and to schedule appointments for stool antigen tests, thereby improving participant engagement. Data management {19} Two assistants will be in charge of data entry and data integrity, double-checking the data.
All information will be entered digitally. A Microsoft Access database provides data entry forms that allow the assistants to select specific data from lists of valid values, while only the codes representing these values are stored in the data tables. A main researcher will check the data entry to ensure that the data are entered into the correct fields. Participant files must be stored in a secure location in numerical sequence. After the study is completed, the files will be kept in storage for 3 years. A password system will be used to restrict access to the study data. Confidentiality {27} All data obtained for this trial will be encoded with unique patient identifiers, ensuring that no individual patient can be recognized. Patient records shall be reviewed only when required and in accordance with Damascus Hospital’s Ethics Committee requirements. Any records relating to participant identification are concealed and will not be made public, to the extent permitted by applicable laws and regulations. Plans for collection, laboratory evaluation, and storage of biological specimens for genetic or molecular analysis in this trial/future use {33} The researchers will collect gastric mucosal biopsy samples and, after 4 weeks + 3 days from the end of treatment, will obtain a stool sample for the HP infection test. For the specimens, which bear unique patient identifiers, a standard methodology will be followed. All biopsies will be sent to the referring hospital’s central pathology laboratory, while stool samples will be sent to the same referring hospital’s central laboratory. Statistical methods Statistical methods for primary and secondary outcomes {20a} To compare the eradication rates of HP, we will use the ITT analysis (all individuals who received at least one dose of the trial treatment) and the PPA (all individuals who complied and were retested using a stool antigen test). We will use the χ 2 test for nominal variables, such as sex, treatment protocol, treatment outcome based on stool antigen tests, each adverse event, and smoking and alcohol status, to determine whether there is a relationship between the treatment protocol and the treatment outcome for both the primary and secondary outcomes [ 50 ]. We will use the t -test to compare the means of continuous variables with a normal distribution, such as patient age, for both the primary and secondary outcomes [ 51 ]. The difference in eradication rates between the two treatment regimen groups will be assessed by a two-sided 95% confidence interval (CI). Individuals whose stool antigen tests have not been retested, including dropouts, will be considered treatment failures. The statistical significance level is a P -value of 0.05. Interim analyses {21b} We intend to do a subgroup analysis of the eradication rate of the rescue protocol following first-line therapy failure. We expect rescue treatment to be similarly effective in both groups. The overall eradication rate of the rescue protocol and the relationship between the first-line treatment protocol and the outcome after using the rescue regimen will be assessed by using the χ 2 test, with a two-sided 95% CI of the difference in eradication rates between the two groups. The statistical significance level will be set at a P -value of 0.05.
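To make the planned comparison concrete, the sketch below shows how the χ 2 test and the two-sided 95% CI for the difference in eradication rates could be computed in Python with scipy; the counts are hypothetical placeholders for illustration only, not trial data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical ITT counts for illustration only (not trial data):
# rows = regimen (sequential, concomitant); columns = (eradicated, not)
table = np.array([[60, 15],
                  [55, 20]])

chi2, p_value, dof, expected = chi2_contingency(table)

# Two-sided 95% Wald CI for the difference in eradication rates
n1, n2 = table.sum(axis=1)
p1_hat, p2_hat = table[0, 0] / n1, table[1, 0] / n2
diff = p1_hat - p2_hat
se = np.sqrt(p1_hat * (1 - p1_hat) / n1 + p2_hat * (1 - p2_hat) / n2)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}")
print(f"difference = {diff:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```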
Methods for additional analyses (e.g., subgroup analyses) {20b} The overall eradication rate of the rescue protocol and the eradication rate according to the first-line treatment protocol and outcome after using the rescue regimen will be assessed by using the χ 2 test, with a two-sided 95% CI of the difference in eradication rates between the two groups. The statistical significance level will be set at a P -value of 0.05. Patients who have not had their stool antigen tests reexamined will be deemed treatment failures, i.e., “not eradicated,” in the statistical analysis. Strategies will be put in place to increase follow-up, promote adherence, and prevent missing data. We will report and qualitatively compare the reasons for non-adherence for each randomization group. Methods in analysis to handle protocol non-adherence and any statistical methods to handle missing data {20c} Strategies will be put in place to increase follow-up and promote adherence, such as phone calls twice a week, text messages, and WhatsApp messaging. To prevent missing data, we will use a programmed Microsoft Access database to enter data; this allows the use of sound alerts, in addition to alert messages, that appear when any record is saved with missing data. We will report and qualitatively compare the reasons for non-adherence for each randomization group. Plans to give access to the full protocol, participant-level data, and statistical code {31c} Within a year of publishing the results of this clinical trial, we will upload the entire dataset to a repository to facilitate sharing, access, and dataset citation. Oversight and monitoring Composition of the coordinating center and trial steering committee {5d} The lead investigator is the trial’s designer and is in charge of the study’s execution. The lead investigator and research gastroenterologists are in charge of recruiting, treating, and following up on study participants, as well as reporting severe adverse events and serious, unexpected suspected adverse events. The data manager will be in charge of data collection, entry, and verification by matching forms with data within the database. A steering group will be formed to oversee the entire research process. The study team will meet every 2 weeks to monitor the progress of the trial. Regular communication with patients via WhatsApp and phone will help ensure the study runs successfully. Composition of the data monitoring committee, its role, and reporting structure {21a} Because the trial is short in duration and the treatment regimens are linked to known, modest risks, no data monitoring committee will be constituted. Adverse event reporting and harms {22} We will keep track of treatment-related adverse reactions in this trial, defined as any events that arise after the administration of the first dose of the trial drug, or any events present at baseline that deteriorate in either intensity or frequency after the administration of the first dose of the study drug. Any adverse event that meets the threshold for a serious adverse event will be reported to Damascus Hospital’s Ethics Committee according to guidelines [ 43 ]. Serious adverse events include events that are life-threatening, necessitate inpatient hospitalization, result in persistent or significant disability or incapacity, or may necessitate medical or surgical intervention to prevent one of the outcomes listed above [ 43 ].
Unexpected adverse events are those whose nature and severity do not match the information provided in the relevant product information [ 43 ]. The relationship between an adverse event and treatment exposure will be determined according to whether there is a reasonable chance that the event is related to the treatment; this determination of causality may be based on considerations such as biological plausibility, prior experience with the drug, the temporal correlation between product exposure and event onset, and dechallenge and rechallenge [ 43 ]. Frequency and plans for auditing trial conduct {23} An auditor will examine the study procedures for participant enrollment, consent, eligibility, and allocation to research groups; adherence to trial interventions and policies to protect participants, including reporting of harms; and data collection completeness, accuracy, and timeliness. Over the course of the study, the auditor will conduct at least one onsite monitoring visit every 3 months. The auditing process will be independent of the investigators and the sponsor. Plans for communicating important protocol amendments to relevant parties (e.g., trial participants, ethical committees) {25} Any changes to the protocol that may affect the study’s conduct, such as changes to the potential benefits and risks, will be approved by the Ethics Committee of Damascus Hospital before implementation and communicated to the health authorities in accordance with local regulations. Dissemination plans {31a} The findings of the trial will be made available to clinicians, patients, and the general medical community. The information will be reported irrespective of the magnitude or nature of the treatment's effect. The results will be discussed at national and international conferences, and they will be made available in peer-reviewed journals. Trial status The protocol version is V3, December 24, 2023. No patients have been enrolled yet; recruitment started in October 2023 and is estimated to end in August 2026. The trial is estimated to end in December 2026.
Abbreviations HP: Helicobacter pylori ; CI: Confidence interval; ITT: Intention-to-treat; PPA: Per-protocol analysis; PPI: Proton-pump inhibitor; NSAIDs: Non-steroidal anti-inflammatory drugs; PPV: Positive predictive value; NPV: Negative predictive value; SPIRIT: Standard Protocol Items: Recommendations for Interventional Trials Acknowledgements We would like to thank Dr. Soumar Mueen Alziadan and Hanan Fakher for their help in the data collection and data entry; we would also like to thank Jouna Nizar Alzaim, Yara Lturkmany alabeed, and Reem Jado Alnoh for communicating with patients. Authors’ contributions {31b} MH is the Chief Investigator; he conceived the study and led the protocol development. RM contributed to the study design and protocol development. All authors contributed to the recruitment and treatment of the participants. Funding {4} This study was funded by Damascus Hospital, No. GD-23091. The grantee (Damascus Hospital) will not be a part of the trial procedures (including trial design, data gathering, data management, data analysis and interpretation, report writing, and the decision to publish). The sponsor has no authority over any of the mentioned activities. Availability of data and materials {29} The dataset will be saved on https://data.mendeley.com and will be available within 1 year of the study’s conclusion; we will transfer a fully de-identified dataset to an appropriate data archive for sharing purposes. Declarations Ethics approval and consent to participate {24} Damascus Hospital Ethics Committee reviewed and approved this study (No: 41/23). Written informed consent to participate will be obtained from all participants. Consent for publication {32} Not applicable. We have not included any individual’s data in our study protocol. Competing interests {28} The authors declare that they have no competing interests.
CC BY
no
2024-01-16 23:45:34
Trials. 2024 Jan 15; 25:55
oa_package/28/5e/PMC10789050.tar.gz
PMC10789051
0
Background Advances in spatial transcriptomics have made it possible to identify genes that vary across spatial domains in tissues and cells [ 1 ]. The detection of spatially variable genes (SVGs) is essential for capturing genes that carry biological signals and for reducing the high dimensionality of spatial transcriptomics data [ 1 ], akin to defining highly variable genes (HVGs) [ 2 ] in single-cell RNA sequencing (scRNA-seq) data [ 3 ]. These SVGs are therefore useful for various downstream analyses of spatial transcriptomics data. Spatially variable genes are conceptually different from HVGs found in scRNA-seq data as, by definition, SVGs preserve the spatial relationships of tissues and cells in the biological samples, whereas HVGs do not necessarily preserve such relationships. A fast-growing number of methods for SVG detection have been proposed in the recent literature. Some popular examples include SpatialDE [ 1 ], based on Gaussian processes; SPARK [ 4 ] and SPARK-X [ 5 ], based on mixed and non-parametric models, respectively; SOMDE, based on self-organizing maps [ 6 ]; Giotto, based on statistical enrichment of spatial networks in neighboring cells [ 7 ]; nnSVG, based on nearest neighbor Gaussian processes [ 8 ]; MERINGUE, based on nearest neighbor spatial autocorrelation [ 9 ]; and Moran’s I as implemented in the Seurat package [ 10 ]. While various SVG detection methods have been incorporated into typical workflows and pipelines for spatial transcriptomics data analysis, such as the Giotto and Seurat packages, there is a lack of systematic evaluation and comparison of different methods. Essential questions remain to be addressed, including the degree of agreement among different methods in the ranking and selection of SVGs; the reproducibility of SVG detection when the genes included in a given dataset change; the accuracy and robustness of SVG detection; and the utility of the selected SVGs in downstream analyses such as spatial domain clustering. In addition, practical considerations such as the running time and memory usage required by each method have not been systematically benchmarked. To fill this critical gap, we systematically evaluated a panel of eight popular SVG detection methods on a collection of 31 spatial transcriptomics and synthetic spatial datasets. These datasets together capture various sample and tissue types and major spatial biotechnologies with different profiling resolutions, including Visium (10X Genomics), ST [ 11 ], Slide-seq [ 12 ], Slide-seqV2 [ 13 ], MERFISH [ 14 ], seqFISH+ [ 15 ], Stereo-seq [ 16 ], SM-Omics [ 17 ], and DBiT-seq [ 18 ]. Our results shed light on the performance of each tested SVG detection method in various aspects and highlight some of the discrepancies among different methods, especially in calling statistically significant SVGs across datasets. Taken together, this work provides useful information for considering and choosing methods for identifying SVGs while also serving as a key reference for future development of SVG detection methods for spatial transcriptomics data.
Methods SVG detection methods Datasets were filtered by first removing cells whose top-50 highly expressed genes contributed 50% of the total counts, and then removing genes that were expressed in fewer than 30 cells. Log normalization of raw counts was performed prior to SVG detection as per the recommended default for each method. The same reproducible seed was set prior to running each method. Giotto KM and Giotto rank Giotto [ 7 ] requires a spatial Delaunay triangulation network to be built on reduced dimensions to represent the spatial relationships. Then, statistical enrichment of binarized expression in spatial nearest neighbors is assessed using Fisher’s exact test to determine SVGs. The two methods differ in how expression is binarized: in Giotto KM , expression values for each gene are binarized using k -means clustering ( k = 2), whereas in Giotto rank , simple thresholding on rank is applied (default = 30%). Thus, a gene is considered an SVG if it is highly expressed in neighboring cells. Normalization was performed using normalizeGiotto() under default parameters. SVG detection was performed with the two approaches, k -means and rank, using binSpect(bin_method = "kmeans") and binSpect(bin_method = "rank") , respectively, following the authors’ tutorial: https://rubd.github.io/Giotto_site/articles/mouse_visium_kidney_200916.html . Moran’s I Moran’s I ranks genes by the observed spatial autocorrelation [ 19 , 20 ] to measure the dependence of a feature on spatial location. Weights are calculated as 1/distance. Raw counts were first normalized using SCTransform() . Using Seurat v4.1.1, SVGs were detected using FindSpatiallyVariableFeatures(selection.method = "moransi") and statistics for all features were returned. P -value adjustment was performed manually using the Benjamini-Hochberg (BH) method. MERINGUE MERINGUE identifies spatially variable genes using neighborhood adjacency relationships and spatial autocorrelation. MERINGUE first represents cells as neighborhoods using Voronoi tessellation. Then, the resulting Delaunay-derived weighted adjacency matrix and a matrix of normalized gene expression are used to calculate Moran’s I . Raw counts were CPM-normalized using scuttle::normalizeCounts() and the default filtering distance was used to generate the weighted adjacency matrix. Statistics and p -values for all features were returned, and p -value adjustment was performed manually using the BH method. nnSVG nnSVG is based on scalable estimation of spatial covariance functions in Gaussian process regression using nearest neighbor Gaussian process (NNGP) models. The BRISC algorithm [ 21 ] was used to implement the NNGP model and obtain maximum likelihood parameter estimates for each gene. A likelihood ratio (LR) test is performed to rank genes by estimated LR statistic values. Log normalization was performed using scater::logNormCounts() prior to running nnSVG() with default parameters ( k = 10). Where default parameters were unsuccessful, the number of nearest neighbors was fine-tuned from k = 5 to k = 15. SOMDE A self-organizing map (SOM) neural network is used to adaptively integrate nearest neighbor data into different nodes, achieving a condensed representation of the spatial transcriptome. SVGs are identified at the node level, using spatial location and gene meta-expression information. A squared exponential Gaussian kernel is applied to generate log-likelihood ratio (LLR) values, and a likelihood ratio test is performed to rank genes by estimated LLR statistic values.
The procedure was performed as per the recommended tutorial at https://github.com/WhirlFirst/somde using Python. k = 10 was chosen as the default number of nearest neighbors when constructing the SOM across all benchmarking datasets to preserve local spatial patterns across both small and large datasets. Where default parameters were unsuccessful, the number of nearest neighbors was fine-tuned from k = 5 to k = 20. SPARK-X SPARK-X is a non-parametric method that relies on a robust covariance test framework, including the Hilbert-Schmidt independence criterion test and the distance covariance matrix test. A test statistic is obtained by measuring the similarity between two relationship matrices based on gene expression and spatial coordinates, respectively. A p -value is computed for each distance covariance matrix constructed, and a Cauchy-combined p -value is reported. sparkx() was run under default parameters. SpatialDE SpatialDE fits a linear mixed model for each gene with Gaussian kernels and decomposes the gene variation into spatial and non-spatial variation. The non-spatial variation is separately modeled using observed noise, and the spatial variation is explained by an exponential covariance function. For each Gaussian kernel, a p -value is calculated from the likelihood ratio test to rank genes by estimated LR statistics. SpatialDE was run under the Python implementation, following the authors’ tutorial at https://github.com/Teichlab/SpatialDE . Correlation of ranked gene statistics To calculate the pairwise Spearman’s correlation between each pair of methods for each dataset, the corresponding gene statistics were used as outlined in Table 1 . Where a comparable gene statistic was not reported by a method, the − log10(adjusted p -value) was used to rank the genes. Identifying significant SVGs Significant SVGs were typically defined as genes with an adjusted p -value of < 0.05. Specifically for Moran’s I , genes that have a positive spatial autocorrelation coefficient and an adjusted p -value of < 0.05 were selected as significant. Dependency across genes To assess the dependency across genes in SVG analysis, we randomly down-sampled 50% of genes from all datasets that ran successfully. We next applied each SVG detection method and calculated the SVG statistics of the remaining genes in the down-sampled dataset as per Table 1 . The relative rank of these genes was compared with their rank in the original full dataset to assess whether there is any change of relative ranking when other genes are included in the dataset. Methods that lead to a different ranking of SVGs in the down-sampled dataset are considered to calculate the spatial variability of a gene in a way that depends on the presence or absence of other genes. Robustness against sparsity To assess how each method performs on sparse data, we randomly down-sampled 80% of spots from all datasets that ran successfully. After applying each SVG detection method, we evaluated the performance of each method in two aspects. To assess the impact of sparsity on the relative rankings of the gene statistics, we computed Spearman’s correlation between the original dataset and the down-sampled dataset using the statistics reported in Table 1 . To assess the effect of sparsity on the significantly detected SVGs, we visualized the proportion of SVGs uniquely detected because of the subsampling against the total number of SVGs significantly detected in the original dataset.
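As an illustration of the ranking-and-correlation procedure described above, the following Python sketch applies BH adjustment and computes the pairwise Spearman correlation between two methods' gene rankings. The p-values are simulated toy values, not actual benchmark output, and the two "methods" are hypothetical stand-ins.

```python
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

# Toy raw p-values from two hypothetical SVG methods on the same 2000 genes
pvals_a = rng.uniform(size=2000)
pvals_b = np.clip(pvals_a * rng.uniform(0.5, 1.5, size=2000), 0, 1)

# Benjamini-Hochberg adjustment, as applied manually for Moran's I and MERINGUE
adj_a = multipletests(pvals_a, method="fdr_bh")[1]
adj_b = multipletests(pvals_b, method="fdr_bh")[1]

# Rank genes by -log10(adjusted p-value) and correlate the two rankings;
# a small offset guards against log10(0) for genes with adjusted p = 0
score_a = -np.log10(adj_a + 1e-300)
score_b = -np.log10(adj_b + 1e-300)
rho, _ = spearmanr(score_a, score_b)
print(f"Spearman correlation between the two rankings: {rho:.2f}")
```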
Simulation of spatial transcriptomics data To evaluate the capacity of methods to detect SVGs with high sensitivity and specificity, we simulated a set of spatial transcriptomics data using scDesign3 [ 22 ], providing us with ground truth spatially variable genes. The synthetic data were generated using real spatial transcriptomics datasets from Additional file 1 : Fig. S1a. Simulation of realistic spatial transcriptomics data was performed following the default settings of scDesign3. To enable fast computation of the model parameters estimated from the real data, we simulated up to approximately 2000 genes and, for each dataset, generated 10% of all genes as spatially variable. The synthetic datasets were generated from model parameters estimated from nine datasets from seven independent studies that cover different sequencing technologies (Visium and DBiT-seq), tissue histologies (breast cancer, brain, embryo, and cancer), numbers of spatial spots (369–4895), and sequencing depths (590–1937 genes and 59–194 spatially variable genes). Benchmarking of simulation studies To evaluate the performance of the SVG detection methods on the simulated data, we calculated the receiver operating characteristic curve based on the statistics or p -values of the genes, indicating the capacity of methods to rank truly spatially variable genes before non-variable ones. We next calculated the true positive rate (TPR) and the false discovery rate (FDR) to evaluate FDR control at six adjusted p -value thresholds (1e−100, 1e−50, 1e−10, 0.01, 0.05, and 0.1) for each simulated dataset. The cutpointr package [ 23 ] was used to calculate the TPR and FDR performance metrics. Clustering and concordance quantification To quantify the utility of SVGs in spatial domain clustering, we used varying numbers of top significant SVGs (between 100 and 1900 genes) reported by each method to subset the expression matrix, computed principal component analysis, and performed clustering on the top 20 principal components to cluster the E9.5 mouse embryo spatial transcriptomics data into 13 tissue domains based on the original annotation [ 16 ]. We performed 10 repeats by randomly subsampling the spatial data to 80% of the total number of spatial spots for each repeat. We performed either spatial clustering using the default settings (unless otherwise stated) of BayesSpace [ 24 ] (gamma = 2 and nrep = 1000) and SpaGCN [ 25 ], or k -means, hierarchical, Louvain, and Leiden clustering. The total number of clusters was set to the total number of spatial domains observed in the data. In particular, we performed a binary search over the resolution parameter, as described in SINFONIA [ 26 ], to tune the clustering in the two community-based clustering algorithms. To assess the clustering performance of the SVGs defined by the various SVG detection methods, we used the adjusted Rand index (ARI), the normalized mutual information (NMI), the Fowlkes-Mallows index (FMI), and purity to evaluate the concordance between the clustering labels and the spatial domains. Each metric was calculated as follows: Adjusted Rand index Let $T$ denote the known ground truth spatial domains of spots, $P$ denote the predicted clustering labels from k -means clustering, $n$ denote the total number of spatial locations, $a_i$ denote the number of spots assigned to the $i$-th cluster of $P$, $b_j$ denote the number of spots that belong to the $j$-th unique label of $T$, and $n_{ij}$ denote the number of overlapping spots between the $i$-th cluster and the $j$-th unique label.
The Rand index (RI) denotes the probability that the obtained clusters and the spatial domain labels agree on a randomly chosen pair of spots. The adjusted Rand index (ARI) adjusts for the expected agreement by chance:

$$\mathrm{ARI} = \frac{\sum_{ij}\binom{n_{ij}}{2} - \left[\sum_i \binom{a_i}{2}\sum_j \binom{b_j}{2}\right]\Big/\binom{n}{2}}{\frac{1}{2}\left[\sum_i \binom{a_i}{2} + \sum_j \binom{b_j}{2}\right] - \left[\sum_i \binom{a_i}{2}\sum_j \binom{b_j}{2}\right]\Big/\binom{n}{2}}$$

Normalized mutual information Normalized mutual information (NMI) assesses the similarity between the obtained cluster labels and the ground truth spatial domains, scaled between 0 and 1. We calculate the NMI as follows:

$$\mathrm{NMI}(T, P) = \frac{2\,I(T; P)}{H(T) + H(P)}$$

where $H(\cdot)$ is the entropy function and $I(T; P)$ is the mutual information between $T$ and $P$. A comparison of ARI and NMI presented in previous studies [ 27 , 28 ] suggests ARI is preferred when there are large equal-sized clusters, while NMI is preferred in the presence of class imbalance and rare clusters. Fowlkes-Mallows index The Fowlkes-Mallows index (FMI) measures the similarity of two clustering results and is defined as the geometric mean of the precision and recall. The FMI is calculated using the following equation:

$$\mathrm{FMI} = \frac{TP}{\sqrt{(TP + FP)(TP + FN)}}$$

where TP is the number of true positives, which are pairs of spots that are in the same spatial domain in both the true and predicted labels; FP is the number of false positives, which are pairs of spots that are in the same cluster in the predicted clusters but in different clusters in the ground truth labels; and FN is the number of false negatives, which are pairs of spots that are in the same cluster in the ground truth labels but in different clusters in the predicted clusters. The score ranges between 0 and 1, where a value of 1 signifies that all the spatial spots are correctly labelled. A higher FMI denotes a greater similarity between the two clustering results. Purity Purity is scored in terms of whether the clusters contain only spots of the same spatial domain. Purity equals 1 if all the spots within the same cluster correspond to the same spatial domain. The purity score is computed using the following equation:

$$\mathrm{Purity} = \frac{1}{n}\sum_{i} \max_j n_{ij}$$

where the maximum over $j$ identifies the dominant spatial domain within the $i$-th cluster; a higher purity indicates less uncertainty about the true labels given the predicted labels. Time consumption and memory usage To measure the computational consumption of each method, a standard virtual machine with 16 OCPUs and 256 GB of memory was used. Where methods offered parallelization (Giotto, SPARK-X, nnSVG, SOMDE, and SpatialDE), all available cores were utilized, when it was possible to specify them, to record the running time. For all methods run in R, the elapsed time to run each method was evaluated using the system.time() function, and the peak memory usage was monitored using gc() . For methods run in Python, perf_counter() from the time package was used to record the elapsed time, and get_traced_memory() from the tracemalloc package was used to record the peak memory usage.
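For reference, the four concordance metrics defined above can be computed with off-the-shelf implementations. The sketch below is an illustrative Python version using scikit-learn, with purity derived from the contingency matrix; it is not the exact code used in this study.

```python
import numpy as np
from sklearn.metrics import (adjusted_rand_score,
                             normalized_mutual_info_score,
                             fowlkes_mallows_score)
from sklearn.metrics.cluster import contingency_matrix

def purity_score(true_labels, pred_labels):
    # Majority spatial domain per predicted cluster, summed over clusters
    cm = contingency_matrix(true_labels, pred_labels)  # rows: true, cols: pred
    return cm.max(axis=0).sum() / cm.sum()

# Toy labels for illustration: ground truth spatial domains vs. a clustering
domains  = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
clusters = np.array([0, 0, 1, 1, 1, 1, 2, 2, 0])

print("ARI:",    adjusted_rand_score(domains, clusters))
print("NMI:",    normalized_mutual_info_score(domains, clusters))
print("FMI:",    fowlkes_mallows_score(domains, clusters))
print("Purity:", purity_score(domains, clusters))
```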
Results Evaluation framework and data summary We designed an evaluation framework to gain insight into the performance of different SVG detection methods in calling SVGs from a collection of real and simulated spatially resolved transcriptomics datasets (Fig. 1 ). These include spatial transcriptomics data with varying sequencing depths generated from a wide range of spatial profiling platforms, species, tissue types, and spatial resolutions (Additional file 1 : Fig. S1). Specifically, our evaluation framework entailed a wide range of comparative and benchmarking analyses to investigate key questions. First, we compared the concordance between the overall rankings of the SVGs between SVG tools and evaluated their dependence on mean gene expression to assess the variability among methods and their capacity to account for the bias between gene expression and variance. Next, we investigated the capacity of each SVG method to reproducibly rank SVGs independently of the pool of genes observed in the dataset or with induced sparsity in spots, to call ground truth SVGs from synthetic spatial data, and to define the SVGs required to accurately cluster spatial domains. Finally, using the spatial benchmarking datasets, we compared the computational cost in terms of the speed and memory required for SVGs to be called by each method. Concordance among SVG detection methods To quantify the degree of agreement among the different SVG detection methods, we first obtained the ranking of genes in each dataset, ordered from the most to the least spatially variable based on the statistics reported by each method (“Methods”, Table 1 ), and correlated the SVG rankings from each pair of methods. These correlation results were summarized for each SVG detection method with respect to the other methods across the spatial datasets in Fig. 2 a and visualized individually in Additional file 1 : Fig. S2. The choice of ranking the genes based on transformed raw p -values, Benjamini-Hochberg-adjusted p -values, or test statistics had a negligible impact on most methods, as there was an observed linear relationship between ranked p -values and ranked test statistics (Additional file 1 : Fig. S3a-b). However, we found it necessary to rank Moran’s I based on the observed coefficient, as genes with positive spatial autocorrelation would be highly ranked, whereas the adjusted p -values exhibit a symmetric relationship at the two extremities (Additional file 1 : Fig. S3c). The overall concordance results showed two groups of methods that exhibited an average similarity (measured as the Spearman’s correlation of SVG statistics) of greater than 0.8 across the spatial datasets (Fig. 2 a). The most correlated pair of methods was Giotto KM and Giotto rank, as expected, because of the large overlap in their framework for performing spatial network enrichment. The next group of correlated methods comprised MERINGUE, Moran’s I , and nnSVG. SOMDE, SPARK-X, and SpatialDE showed the least concordance with the other methods, suggesting that the prioritization of SVGs by these methods, in particular SpatialDE, differs from that of the other methods. Among the methods, we observed that SpatialDE demonstrated the highest variability across datasets. Coloring the data points in Fig. 2 a by the total number of spatial spots and technology (Additional file 1 : Fig.
S4a-b) revealed an interesting trend, which was most striking in SpatialDE, where, despite an overall low correlation in spatial statistics with all other methods, a high correlation was observed in specific datasets derived from the 10X Visium platform. Overall, these results demonstrate that, while we observed moderate-to-high correlations between SVG detection tools in terms of SVG ranking, there was considerable variability in the reported SVG statistics across the computational methods, platforms, and datasets. While the ranking of SVGs is useful for selecting the top candidates for subsequent analysis, in practice, statistical significance, such as p -values, is frequently used for selecting SVGs. To this end, we first partitioned the SVGs into three categories (i.e., p = 0; 0 < p ≤ 0.05; p > 0.05) based on the adjusted p -value reported by each computational method (Fig. 2 b). We found that most methods report a large proportion of SVGs at an adjusted p -value threshold of 0.05 on many datasets. Among the eight methods, nnSVG, MERINGUE, and SpatialDE, and to a lesser degree SOMDE, reported a sizable proportion of SVGs with an adjusted p -value of 0. Interestingly, SOMDE reported on average the fewest significant SVGs, with some datasets having almost no significant SVGs (Fig. 2 b and Additional file 1 : Fig. S4c). Intriguingly, we observed that, despite the high correlation in SVG statistics (Fig. 2 a), different methods predicted a vastly differing number of SVGs as significant using a p -value threshold of 0.05. However, we note that the overall pattern between methods is still similar when we compute the average concordance of the gene sets of the top 200, 500, 1000, and all significant SVGs across all the datasets between methods (Additional file 1 : Fig. S5). As before, SpatialDE demonstrated the least similarity to all other methods, followed by SPARK-X and SOMDE (Additional file 1 : Fig. S5). Giotto KM and Giotto rank again demonstrated a high similarity, but this time Moran’s I ’s gene sets tended to show a higher concordance with the Giotto methods than with MERINGUE and nnSVG, suggesting that while the overall ranking of gene statistics may be similar between Moran’s I and MERINGUE and nnSVG, the top most significant SVGs identified by Moran’s I appear to be more similar to those of the Giotto methods (Additional file 1 : Fig. S5). Importantly, despite the relatively high correlation in SVG statistics observed between methods, the number of SVGs found by all methods is strikingly low, with many datasets having close to no overlapping SVGs across all eight computational methods (Fig. 2 c). In addition, many unique genes were found by various individual methods in most datasets (Fig. 2 c). Together, these findings highlight the discrepancy among methods when an adjusted p -value threshold of 0.05 is used for calling statistically significant SVGs. Dependency of SVG statistics on gene expression levels In scRNA-seq data, it is known that variance in gene expression is positively correlated with gene expression level; therefore, most highly variable gene (HVG) detection methods implement procedures to account for this bias [ 2 ]. To test whether methods designed for SVG detection have a tendency to select genes with higher expression levels, we investigated the correlation between mean gene expression and the SVG statistics for each method and dataset pair.
We found that, indeed, the rankings of SVGs from most methods correlated positively with the mean gene expression (Fig. 3 a, b). In particular, SPARK-X showed average correlations of around 0.8 across the datasets (Fig. 3 c), and the Giotto methods and nnSVG showed correlations of around 0.5 across the datasets, suggesting a high dependency of SVG ranking on gene expression for these methods. We also correlated the proportion of zeros in gene expression across cells against the SVG ranking for each method (Additional file 1 : Fig. S6a-b). Since the proportion of zeros is known to be negatively correlated with expression levels, the negative correlation observed for each method and dataset pair further confirms the dependency we found between SVG ranking and gene expression among current SVG detection methods. Dependency of SVG statistics across genes and spatial spots We next assessed the reproducibility of gene ranks based on the SVG statistics reported by each method when either the number of genes or the total number of spatial spots included in a dataset changes. To this end, we randomly down-sampled the genes in all benchmarking datasets (Fig. 4 a) to 50% and re-calculated the ranks of genes from the reported SVG statistics of each method on the down-sampled datasets. Most methods, except for SpatialDE, and to a lesser extent nnSVG and Giotto KM, demonstrated high fidelity in gene ranks across all datasets. Methods that show a lower correlation when the set of genes included in a dataset changes therefore do not calculate the SVG statistics for each gene independently (Fig. 4 b). Although there is some variability in MERINGUE, SPARK-X, Moran’s I , Giotto rank, and SOMDE, this variability may not have a significant impact on downstream analysis. These analyses reveal that decisions made on gene filtering, a common step in data pre-processing, may result in a change in SVG statistics and their ranking for some of the SVG detection methods. Each spatial technology has a different capacity to capture spatial locations (Additional file 1 : Fig. S1a), which may be due to the relatively low-throughput nature of some spatial technologies or inefficiencies in sample preparation. To test the robustness of each method against the sparsity of spatial locations, we down-sampled all datasets to 80% of the total number of spatial spots and repeated the SVG detection (Fig. 4 c). Across all methods, there is some degree of variability in Spearman’s correlation among datasets due to the induced sparsity (Fig. 4 d). In particular, we found that the variability among datasets and the degree of sensitivity to spot sparsity tend to be greater for methods that rely on neighborhood adjacency relationships, such as nnSVG (which uses spatial covariance functions in Gaussian processes via a nearest neighbor Gaussian process model), SOMDE (which uses a self-organizing map to cluster neighboring cells into nodes), MERINGUE (which uses neighborhood relationships encoded by a Voronoi tessellation and a Delaunay-derived weighted adjacency matrix), and the Giotto methods (which use a Delaunay triangulation network based on cell centroid physical distances). Conversely, the methods that were less sensitive were SPARK-X, SpatialDE, and Moran’s I . The reliance on nearest neighborhood maps or distance-based networks in the former group of methods may explain their sensitivity to sparsity, as sparsity affects the detection of SVGs based on expression between neighbors in a spatial network.
To investigate the capacity of the methods to correctly identify SVGs and avoid the detection of false positive SVGs under induced down-sampling of the spatial spots, we next quantified the proportion of SVGs that were uniquely identified in the down-sampled data (Fig. 4 e). We consider that the original full dataset has the most power to detect SVGs, and that any significant SVGs detected in the down-sampled data but not in the original data are false positives. We visualized the proportions of all significant SVGs identified in the down-sampled data that were either identified as significant in the full data or unique to the down-sampled data (Fig. 4 f). Our findings show that SPARK-X, SOMDE, and SpatialDE performed the best, identifying the lowest proportions of false positive SVGs upon down-sampling of the data. Although the performance of SOMDE suffers under induced sparsity, its low proportion of false positive SVGs may be explained by the fact that SOMDE tends to select fewer SVGs overall compared to other methods (Fig. 2 b). Again, for most methods, there is high variability among datasets, which suggests that a method’s performance may be dataset dependent under sparse conditions. Overall, our down-sampling experiments on genes and spots show that the performance of most methods in detecting significant SVGs may be affected by changes in the gene number and the sparsity of spatial spots. This has important implications when considering the most suitable method that is insensitive to gene filtering and dataset quality. Accuracy of SVG methods in detecting SVGs using synthetic spatial transcriptomics data To test the accuracy of the SVG detection methods, we next simulated spatial transcriptomics datasets with ground truth SVGs and spatially invariant genes using scDesign3 [ 19 ] (Additional file 1 : Figs. S7-S9). To enable representation of the diverse sequencing technologies and tissue histologies in real spatial data, we simulated in silico data from nine data sources covering nine distinct spatial masks, five tissue histology types, two spatial platform technologies, and diverse sequencing depths (590–1937 genes, 59–194 spatially variable genes, and 369–4895 spatial spots). We then performed SVG detection on the simulated datasets using the eight methods and evaluated their performance by calculating the true positive rate (TPR) and the false discovery rate (FDR) across three adjusted p -value thresholds (0.01, 0.05, and 0.1) (see “ Methods ” for details). At the adjusted p -value thresholds of 0.01 and 0.05, we found that SPARK-X, SOMDE, nnSVG, and SpatialDE performed well, with a high TPR and a low FDR (Fig. 5 and Additional file 1 : Fig. S10). Under adjusted p -value thresholds of 0.01, 0.05, and 0.1, Giotto rank, Moran’s I , and nnSVG all demonstrated a high TPR but suffered from a high level of false positive identification. Compared to the other methods, the Giotto methods and Moran’s I performed relatively poorly in the simulation, displaying the highest FDRs in most datasets (Fig. 5 a, b). These methods tended to identify a greater proportion and number of significant SVGs (Additional file 1 : Fig. S10b-c). These findings reveal that, for all methods except SPARK-X and SOMDE, the estimated FDRs (i.e., adjusted p -value thresholds) do not accurately represent the true FDRs for SVG detection in these simulated datasets.
Performance on clustering spatial domains A key task in spatial transcriptomics data analysis is to identify spatial domains that mark distinctive cell and tissue types in a biological sample. One approach to achieve this is to cluster profiled locations into spatial domains using SVGs. To compare the capacities of SVGs identified by each method in clustering the spatial domains, we took advantage of the spatial transcriptomics data of an E9.5 mouse embryo, given the availability of tissue annotations in these samples (Fig. 6). First, we performed SVG calling using each SVG detection method. Then, taking a varying number of top SVGs, we computed the top 20 principal components (PCs) using the feature-selected spatial transcriptomics data. Using either spatially aware clustering tools (BayesSpace [20] and SpaGCN [21]) or canonical clustering approaches (k-means, hierarchical, Louvain, and Leiden clustering using the SINFONIA framework [22]), we performed clustering on the top 20 PCs and calculated the concordance between the clustering results and the pre-defined spatial domains to measure how well the SVGs delineate the anatomical locations. By taking a large range in the number of features used (between 100 and 1900 features), we observed an overall increasing trend in performance with an increasing number of SVGs for all SVG methods, with the classification accuracy peaking at around 900–1100 SVGs (Fig. 6). While this observation was broadly consistent, the pattern differed for some clustering and SVG method combinations. For example, unlike most clustering methods, hierarchical clustering demonstrated a decreasing trend in accuracy with an increasing number of SVGs. The overall pattern was consistent between different concordance measures, including the Fowlkes-Mallows index (FMI), normalized mutual information (NMI), and purity score (Additional file 1: Fig. S11). These results suggest that, while the optimal number of top SVGs used in clustering will depend on the data, using approximately 900–1300 genes for the dataset tested led to the highest accuracy in clustering of spatial domains across most conditions. Computational time and memory usage Computational time and memory usage are key considerations in practical applications, especially for large spatial transcriptomics data analyses. In our evaluation, we configured a standard virtual machine with 16 OCPUs and 256 GB of memory, and recorded the runtime and the peak memory usage of each SVG detection method on each dataset (Fig. 7). As expected, we found that both the computational time and the peak memory usage are positively correlated with the number of spatial locations in the datasets. In terms of computational time, comparison across methods revealed that SPARK-X is the fastest method and scales extremely well with the number of spatial locations. While SOMDE is the second best in most cases, it is significantly slower than SPARK-X. In contrast, SpatialDE performed worse, especially on datasets with large numbers of spatial locations. Giotto KM performed poorly on most of the datasets but does scale better than SpatialDE with the number of spatial locations. Similarly, nnSVG scaled better with the number of spatial locations than SpatialDE but was slower on datasets with many genes.
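As a concrete companion to the clustering evaluation described earlier in this section, the sketch below selects the top N SVGs, computes 20 PCs, clusters with k-means, and scores the result with two of the concordance measures used in the study. The toy expression matrix, SVG ordering, and domain labels are our stand-ins for the embryo dataset's actual objects, so this is an illustrative recipe rather than the benchmark's code.

```python
# Sketch: top-N SVGs -> 20 PCs -> k-means -> concordance with domains.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score, fowlkes_mallows_score

rng = np.random.default_rng(0)
expr = rng.poisson(1.0, size=(2000, 1500)).astype(float)  # spots x genes (toy)
svg_order = rng.permutation(expr.shape[1])                # toy SVG ranking
domains = rng.integers(0, 8, size=expr.shape[0])          # toy annotations

for n_svgs in (100, 500, 900, 1300):
    X = expr[:, svg_order[:n_svgs]]                       # feature selection
    pcs = PCA(n_components=20).fit_transform(np.log1p(X))
    labels = KMeans(n_clusters=len(set(domains)), n_init=10,
                    random_state=0).fit_predict(pcs)
    print(n_svgs,
          round(normalized_mutual_info_score(domains, labels), 3),
          round(fowlkes_mallows_score(domains, labels), 3))
```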
In terms of peak memory usage, we found that SOMDE uses the least peak memory across all datasets, and SPARK-X ranked second in most cases, although with substantially higher peak memory usage than SOMDE. In comparison, the two methods implemented in Giotto and SpatialDE show high peak memory usage, especially on datasets with many spatial locations. While there is a trade-off between speed and memory usage, taken together, these results suggest that SPARK-X and SOMDE are the most efficient methods in terms of speed and memory usage for SVG detection.
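For readers who want to perform this kind of profiling themselves, the sketch below shows one simple way to record runtime and peak memory around a method call. Note that tracemalloc only tracks Python-level allocations, so for methods that call into R or compiled code an external process monitor would be needed; `run_method` is a hypothetical wrapper, not part of the benchmark.

```python
# Sketch of recording runtime and peak (Python-level) memory usage.
import time
import tracemalloc

def profile(run_method, *args, **kwargs):
    tracemalloc.start()
    t0 = time.perf_counter()
    result = run_method(*args, **kwargs)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()   # (current, peak) in bytes
    tracemalloc.stop()
    return result, elapsed, peak / 1e9          # seconds, GB

result, secs, peak_gb = profile(sorted, range(10**6))
print(f"{secs:.2f} s, peak {peak_gb:.3f} GB")
```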
Discussion We found that, for most methods, a significant proportion of genes were detected as SVGs under the adjusted p-value cut-off of 0.05 in most of the tested datasets (Fig. 2b and Additional file 1: Fig. S4c). However, the overlaps across the eight methods were relatively small considering the large numbers of SVGs identified by each SVG detection method (Fig. 2c), suggesting large discrepancies among SVG detection methods when a significance cut-off is used to filter for SVGs. Consistent with this, in our simulation study where ground truth SVGs were introduced into simulated spatial transcriptomics data, we found that for some methods, in particular the Giotto methods and Moran's I, the estimated FDRs did not accurately represent the true FDRs in most of the synthetic datasets (Fig. 5). These results highlight that the estimation of statistical significance is difficult and that there is much room for improvement. They also caution against the use of, and reliance on, the statistical significance reported by some of the current SVG detection tools when drawing biological conclusions. We also discovered that SVGs identified by most methods show a strong positive correlation with their expression levels (Fig. 3). We note that a similar relationship was found between gene variability and expression level in scRNA-seq data, and most computational methods designed for highly variable gene (HVG) detection actively correct for such a "bias" [2]. While we could not rule out the possibility that genes that vary spatially are also highly expressed, future work should investigate the biological basis and plausibility of such a correlation. In practical applications, it is important to be aware of the tendency of current SVG detection tools to select genes with high expression levels. Future method development will be required to account for this effect, for example to retain relatively lowly expressed genes such as transcription factors in downstream analysis. In addition, we found that for most methods the relative rankings of SVGs change when different pools of genes and spots are included in the datasets (Fig. 4c, f). While considering the interdependency among genes may provide useful information for identifying SVGs, it is important to be aware that different SVG detection results may be obtained when different pre-processing steps are used to filter genes prior to SVG analysis. Lastly, SVG detection can be viewed as a feature selection step in spatial transcriptomics data analysis, where useful features (i.e., SVGs) are selected and/or uninformative ones are removed. In particular, the current SVG detection methods can be considered unsupervised approaches in which no information such as cell types, cell states, or spatial domains is required. A great amount of work has been done on feature selection in single-cell data analysis [23], including unsupervised methods and more advanced methods that perform combinatorial feature selection using supervised learning, such as embedded feature selection using random forests and wrapper feature selection using genetic algorithms. We anticipate that future development of SVG detection methods will explore the utility of information such as cell types and states to identify SVGs that not only independently mark spatial variability but also cooperate across multiple genes to jointly define spatial variability.
We believe these developments will introduce additional computational challenges but will undoubtedly lead to new biological insights from spatial transcriptomics data analyses.
Conclusions SVG selection is an essential step in spatial transcriptomics data analysis and can have a significant impact on downstream interpretation. An increasing number of SVG selection methods have been proposed, yet questions of method reproducibility, reliability, accuracy, and robustness are critical for their application and downstream data analysis. This study provides a much-needed benchmark of current SVG methods, which will serve as a guide for SVG method selection and future method development.
Background The identification of genes that vary across spatial domains in tissues and cells is an essential step for spatial transcriptomics data analysis. Given the critical role it serves for downstream data interpretations, various methods for detecting spatially variable genes (SVGs) have been proposed. However, the lack of benchmarking complicates the selection of a suitable method. Results Here we systematically evaluate a panel of popular SVG detection methods on a large collection of spatial transcriptomics datasets, covering various tissue types, biotechnologies, and spatial resolutions. We address questions including whether different methods select a similar set of SVGs, how reliable is the reported statistical significance from each method, how accurate and robust is each method in terms of SVG detection, and how well the selected SVGs perform in downstream applications such as clustering of spatial domains. Besides these, practical considerations such as computational time and memory usage are also crucial for deciding which method to use. Conclusions Our study evaluates the performance of each method from multiple aspects and highlights the discrepancy among different methods when calling statistically significant SVGs across diverse datasets. Overall, our work provides useful considerations for choosing methods for identifying SVGs and serves as a key reference for the future development of related methods. Supplementary Information The online version contains supplementary material available at 10.1186/s13059-023-03145-y.
Supplementary Information
Acknowledgements We thank our colleagues at the School of Mathematics and Statistics, The University of Sydney, and the Sydney Precision Data Science Centre for their feedback. This work is supported by a National Health and Medical Research Council (NHMRC) Investigator Grant (1173469) and a Metcalf Prize to P.Y. and a postgraduate scholarship from Research Training Program and a Children’s Medical Research Institute postgraduate scholarship to C.C. Review history The review history is available as Additional file 2 . Peer review information Veronique van den Berghe was the primary editor of this article and managed its editorial process and peer review in collaboration with the rest of the editorial team. Authors’ contributions C.C., H.J.K., and P.Y. conceived the study. C.C. and H.J.K. performed data analyses and interpreted the results with input from P.Y. All authors wrote the manuscript and approved the final version of the manuscript. Funding This work is supported by a National Health and Medical Research Council (NHMRC) Investigator Grant (1173469) to P.Y. Availability of data and materials Summary information of spatial transcriptomics datasets was included in Additional file 1 : Fig. S1a. Below we provide the accession numbers when available or download links used to obtain each dataset. • Liu et al., DBiT-seq [ 18 ]. Mouse Embryo E12 (GSM4189614_0628cL) and E11(GSM4364243_E11-2L). Downloaded from GEO accession GSE137986 [ 29 ]. • Xia et al., MERFISH [ 14 ]. Human osteosarcoma. Downloaded from the supplementary section of the corresponding paper. https://www.pnas.org/doi/suppl/10.1073/pnas.1912459116/suppl_file/pnas.1912459116.sd12.csv • Eng et al., SeqFISH+ [ 15 ]. Mouse primary visual cortex (VISp). Downloaded from https://github.com/CaiGroup/seqFISH-PLUS . The spatial coordinate of each spot was generated using ‘stitchFieldCoordinates’ function in Giotto. • Rodriques et al., SlideseqV1 [ 12 ]. Mouse cerebellum. Downloaded the ‘Puck_180819_11’ sample from https://singlecell.broadinstitute.org/single_cell/study/SCP354/slide-seq-study [ 30 ]. • Marshall et al., SlideseqV2 [ 13 ]. Human kidney cortex. Downloaded the ‘HumanKidney_Puck_20011308’ sample from https://cellxgene.cziscience.com/datasets . • Stickels et al., Slide-seqV2 [ 31 ]. Mouse hippocampus. Downloaded the ‘Puck_200115_08’ sample from https://singlecell.broadinstitute.org/single_cell/study/SCP815/highly-sensitive-spatial-transcriptomics-at-near-cellular-resolution-with-slide-seqv2 [ 32 ]. • Vickovic et al., SM-Omics [ 17 ]. Mouse brain cortex. Downloaded the ‘10015CN78_C1_stdata_adjusted’ and ‘10015CN89_D2_stdata_adjusted’ samples from https://singlecell.broadinstitute.org/single_cell/study/SCP979/sm-omics-an-automated-platform-for-high-throughput-spatial-multi-omics [ 33 ]. • Ji et al., ST [ 11 ]. Human squamous carcinoma. Downloaded from GSM4284322 [ 34 ]. • Navarro et al., ST [ 35 ]. Mouse hippocampus wild-type replicate 1. Downloaded from https://data.mendeley.com/datasets/6s959w2zyr/1 [ 36 ]. • Biancalani et al., Visium [ 37 ]. Mouse primary motor cortex. Downloaded from https://storage.googleapis.com/tommaso-brain-data/tangram_demo/Allen-Visium_Allen1_cell_count.h5ad • Ferreira et al., Visium [ 38 ]. Mouse kidney. Downloaded the Sham model and ischemia reperfusion injury model from GSE171406 [ 39 ]. • Hunter et al., Visium [ 40 ]. Zebrafish melanoma. Downloaded the ‘Visium-A’ sample from GSE159709 [ 41 ]. • Janosevic et al., Visium [ 42 ]. Mouse kidney. Downloaded from GSE154107 [ 43 ]. 
• Joglekar et al., Visium [ 44 ]. Mouse pre-frontal cortex. Downloaded from GSE158450 [ 45 ]. • Lopez et al., Visium [ 46 ]. Mouse lymph node and MCA205 tumour. Downloaded from GSE173776 [ 47 ] and GSE173773 [ 48 ] respectively. • McCray et al., Visium [ 49 ]. Human prostate. Downloaded from GSM4837767 [ 50 ]. • Wu et al., Visium [ 51 ]. Human breast cancer. https://zenodo.org/record/4739739#.YY6N_pMzaWC [ 52 ] • E9.5 Mouse Embryo [ 16 ]. E9.5 mouse embryo spatial profile. Downloaded from https://db.cngb.org/stomics/mosta/ . Code availability SVG detection methods were run on R (v4.3) or python (v3.8) and the source code is deposited in Zenodo ( https://zenodo.org/doi/10.5281/zenodo.10295502 ) [ 53 ] and is freely available from https://github.com/PYangLab/SVGbench [ 54 ]. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
Background In this study, we report previously unknown relationships among three biological events, i.e., neural tube defects, the function of a molecule called ARMC5, and the degradation of DNA-directed RNA polymerase II (Pol II). Neural tube defects During embryonic development, the neural plate folds into a neural tube to form the future brain and spinal column. The process occurs during days 17–28 of human gestation or e8.5–10.5 (embryonic days) in mice [1]. Defective closure of the neural tube causes various manifestations, such as spina bifida, anencephaly, encephalocele, and iniencephaly. The three latter forms of neural tube defects (NTDs) are very severe and frequently result in miscarriage or stillbirth. There are four principal types of spina bifida: meningocele, myelomeningocele (MM), myelocele, and spinal dysraphism. NTD is one of the most common birth defects, with median prevalence varying from 6.9 to 21.9 per 10,000 live births in different World Health Organization-designated regions [2]. During neural tube closure, the neural plate cells must undergo the necessary proliferation, differentiation, morphological transformation, and migration [3–6]. These steps can be affected by genetic and environmental factors, which all contribute to NTD risk [7]. It has been well established that sufficient maternal dietary folate intake is essential for proper neural tube closure [8]. The genetic contribution to NTD is polygenic. Based on candidate gene studies, some genes in the folate metabolic pathways have been found to be associated with NTD risk in humans [9–11]. Mutations/deletions in more than 200 genes are known to cause NTD in mice [12], although only a limited number of them are associated with human NTD risk according to human genetic studies [13]. ARMC5 ARMC5 is a protein containing an armadillo domain consisting of seven armadillo repeats. Each repeat is about 40 amino acids (aa) long and consists of three α-helices [14]. Human and mouse ARMC5 proteins share 90% aa sequence homology and have similar structures. Both have the ARMC domain towards their N-terminus and a BTB (Broad-complex, tramtrack, and bric-à-brac) domain towards their C-terminus [15–17]. The dominant human ARMC5 protein isoform is 935 aa in length (NP_001098717.1) [18], and mouse ARMC5 is 926 aa (NP_666317.2). In humans, four other isoforms are derived from the same gene. Most isoforms vary in the 5′- and 3′-regions. One of the four isoforms has two extra exons at the 5′ end of the gene and translates into a longer protein isoform of 1030 aa in length (NP_001275696.1) [18]. ARMC5 has no known enzymatic activity; it functions via its association with other proteins. We previously reported that ARMC5 was associated with a group of molecules, including Cullin-3 (CUL3) [19], according to a yeast 2-hybrid assay. We recently reported the phenotype of Armc5 gene knockout (KO) mice [19]. The KO mice were smaller from the fetal stage until old age and were born below the expected Mendelian ratio from heterozygous parents. The function of T lymphocytes of the KO mice was compromised in that they had reduced proliferation and differentiation in vitro, decreased autoimmune responses, and defective viral clearance in vivo. The KO mice presented adrenal gland hyperplasia in old age, similar to primary bilateral macronodular adrenal gland hyperplasia (PBMAH). Approximately 21–26% of PBMAH patients carry ARMC5 mutations [20–23].
Pol II Pol II is responsible for transcribing all mRNAs and most small nuclear RNAs, microRNAs, and long non-coding RNAs [24, 25]. Pol II is highly conserved. Human and mouse Pol II both have 12 subunits [26]. POLR2A is the largest catalytic subunit. Pol II might pause during mRNA transcription for various reasons, such as template DNA damage, cell stress, or gene activation status. Once these adverse conditions are resolved, it continues its journey along the template DNA. Permanent Pol II stalling blocks transcription, and the stalled Pol II needs to be removed to resume transcription. It is believed that ubiquitination, followed by proteasome degradation, is a process to remove stalled Pol II [27–29]. It follows that if such permanently stalled Pol IIs are not degraded, there will be a generalized transcription depression. How Pol II pool size homeostasis is maintained, and whether an abnormal Pol II pool size plays a pathogenic role, is an understudied area. A related critical question is whether an abnormal Pol II pool size affects all genes or just a subset of genes. Recently, Vidakovic et al. and Nakazawa et al. reported that K1268 ubiquitination is necessary and sufficient for POLR2A degradation in cells after UV irradiation. The K1268R mutation prevents POLR2A ubiquitination, resulting in POLR2A accumulation and an enlarged Pol II pool size in cells with irradiation-induced massive DNA damage [30, 31]. The enlarged Pol II pool in the mutant cells in this model selectively leads to faster transcription recovery of short genes and upregulates a subset of genes (about 1600 genes) [30]. These studies demonstrate that the effect of Pol II pool size is not universal to all genes in the irradiated cells. However, we do not know whether the same is true under physiological conditions without massive DNA damage. Pol II degradation Ubiquitination is involved in protein degradation and function. Protein ubiquitination is catalyzed by a cascade of enzymes, i.e., E1 (Ub-activating enzyme), E2 (Ub-conjugating enzyme), and E3 (Ub ligase) [32]. The specificity of the cascade is determined by E3, of which there are three families: RING-finger (single- or multiple-subunit), HECT, and RBR [33]. The RING-finger E3s are the largest family. A multi-subunit RING-finger E3 contains a RING-finger protein (e.g., ROC1 or RBX1), a Cullin (CUL) protein (CUL1, 2, 3, 4A, 4B, 5, or 7), and a substrate recognition unit [34]. Due to the central role of Pol II and its largest subunit, POLR2A, in cell biology, a POLR2A-specific E3 is of vital interest to cell biologists. Several such E3s have been reported before, but most of them only have convincing activity in cultured cells after irradiation- or drug-induced massive DNA damage [35–40]. Two of these E3s do have activity in unmanipulated cell lines, but this observation has not been extended to tissues and organs [38, 41]. None of these POLR2A-specific E3s have known effects on the degradation of the other 11 Pol II subunits, either after massive DNA damage or under physiological conditions. It is to be noted that in most of the previous studies related to Pol II function and pool size, POLR2A has been used as a surrogate marker of Pol II, and the levels of the other 11 subunits are rarely assessed. To our knowledge, only three prior publications have addressed the ubiquitination and degradation of other Pol II subunits.
The yeast E3 Asr1 can mono-ubiquitinate POLR2B in vitro, probably via its interaction with POLR2A [42], although whether this E3 affects the POLR2B protein level in vivo is not known. BRCA1 is reported to ubiquitinate POLR2H after massive DNA damage, but it does not affect the protein level of POLR2H in whole-cell lysates [43]. A VHL-containing E3 ubiquitinates POLR2G and controls its degradation in the absence of massive DNA damage [44]. In the present work, we revealed that a novel ARMC5-containing E3 was essential for the degradation of most of the 12 subunits of Pol II and thus controlled the homeostasis of the whole Pol II complex under physiological conditions. Failed Pol II degradation due to ARMC5 deletion did not result in generalized Pol II stalling or generalized transcription depression. The abnormally large Pol II pool in the KO neural precursor cells and intestine dysregulated the transcription of a subset of genes, some of which are known to be critical in neural tube development. A human genetic study discovered nine highly deleterious single-nucleotide variants (SNVs) in the ARMC5 exons of myelomeningocele (MM) patients; the residues affected by four of the nine SNVs were proven essential for the POLR2A-specific E3 activity.
Methods In situ hybridization To determine Armc5 mRNA tissue-specific expression, we employed a 1526-bp (starting from GATATC to the end) mouse Armc5 cDNA (GenBank: BC032200, cDNA clone MGC:36606) in pSPORT1 as a template for sense and antisense riboprobe synthesis, using SP6 and T7 RNA polymerase for both ³⁵S-UTP and ³⁵S-CTP incorporation. Tissues from WT mice were frozen in − 35 °C isopentane and kept at − 80 °C until sectioned. X-ray autoradiography focused on 10-μm-thick cryostat-cut sections. Briefly, overnight hybridization at 55 °C was followed by extensive washing and digestion with RNase to eliminate non-specifically bound probes. Anatomical-level images of in situ hybridization were generated using X-ray film autoradiography after a 4-day exposure. RT-qPCR Total RNA from cells or tissues was extracted with the RNeasy kit (Qiagen) and reverse-transcribed with the iScript cDNA Synthesis Kit (Bio-Rad Laboratories). The primer sequences are listed in SI-Table 4. Rn7sk or β-actin was used as an internal control. The samples were first denatured at 95 °C for 2 min. They then underwent 40 cycles of amplification using the following cycling conditions: 95 °C for 15 s and 60 °C for 60 s, followed by a final melting step from 72 to 95 °C for 5 s. qPCR signals between 22 and 30 cycles were analyzed. Samples were assayed in triplicate, and the data were expressed as signal ratios of target mRNA/internal control mRNA. ARMC5 KO mice ARMC5 KO mice were described in our previous publication [19]. These mice were bred into the CD1 x C57BL/6 F1 background for this study. Micro-CT whole-body bone imaging The mice were euthanized by CO₂. The whole-body bone images were obtained by scanning the mice using a Bruker SkyScan 1176 micro-CT scanner. Collection of mouse fetuses Fetuses were harvested for neural tubes (e8.5, e9.5, and e10.5), for the assessment of exencephaly (e9.5 or e12.5), and for CNS tissues to generate NPCs (e13.5). Neural tube isolation The neural tubes were isolated from e8.5 or e9.5 mouse embryos under a dissecting microscope and digested with pancreatin (6 mg/ml in PBS) for 6 min at room temperature. Sticky lateral tissues were teased away, and cleaned neural tubes were used in the experiments. Immunofluorescence E9.5 embryos were fixed in PBS containing 4% paraformaldehyde at 4 °C overnight and then sequentially soaked in PBS containing 30% sucrose at 4 °C for 24 h, followed by a mixture of 30% sucrose (in PBS) and OCT at a 1:1 ratio at 4 °C for another 24 h. The samples were then embedded in OCT and stored at − 80 °C until use. Fetal WT and KO neural tubes at the hindbrain level were cryosectioned (10–12 μm) transversely. The cryosections were first permeabilized with 0.3% Triton X-100 in PBS for 3 min and treated with blocking buffer (PBS containing 5% goat serum and 0.1% Tween 20) at room temperature for 1.5 h. To quantify apoptosis in e9.5 neural tubes, we assessed the cryosections by fluorescent TUNEL using an in situ Cell Death Detection Kit (Roche) according to the manufacturer's instructions. Fluorescent images were collected on an AxioPhot fluorescent microscope (Zeiss). The images were analyzed using the Cell Counter plugin of the ImageJ software. TUNEL-positive cells among total cells (visualized by DAPI staining) in the neural folds and adjacent areas were counted. For immunofluorescent staining of NPCs, the cells were cultured on poly-D-lysine- and laminin-pre-coated glass slips in the NeuroCult proliferation medium for 1 day.
The cells were fixed with 4% paraformaldehyde in PBS and permeabilized with PBS containing 0.3% Triton X-100 for 3 min. The slips were then soaked in blocking buffer (PBS containing 5% goat serum and 0.1% Tween 20) at room temperature for 1.5 h and reacted with primary Abs (mouse anti-Nestin mAb, 4 μg/ml, Abcam; rabbit anti-Sox2 Ab, 1 μg/ml, Abcam). The coverslips were then incubated with secondary Abs (AlexaFluor488-conjugated goat anti-mouse Ab, Invitrogen; rhodamine-conjugated goat anti-rabbit Ab, Jackson ImmunoResearch Laboratories) in the blocking buffer for 2 h at room temperature. The coverslips were washed three times with PBS and mounted in ProLong Gold anti-fade containing DAPI (Invitrogen). For immunofluorescent staining of cytosolic and nuclear ARMC5, SK-N-SH neuroblastoma cells were cultured on CELLstart substrate (Invitrogen)-pre-coated coverslips overnight and transiently transfected for 2 days with plasmids expressing human ARMC5-HA (Genecopoeia) using Lipofectamine 3000 transfection reagent (Invitrogen). The procedure of immunofluorescent staining was the same as that for NPC staining, except that rabbit anti-HA mAb (Cell Signaling Technology) and rhodamine-conjugated goat anti-rabbit Ab (Jackson ImmunoResearch Laboratories) were used as the primary and secondary Abs, respectively. Generation of mouse NPCs The brains from e13.5 mouse fetuses were separated at the cervical spinal cord level, and the ganglionic eminences were dissected and harvested. The harvested tissue pieces were collected in a complete neural stem cell medium (NeuroCult NSC Basal Medium and NeuroCult NSC Proliferation Supplements at a 9:1 ratio; Stemcell Technologies) and dissociated thoroughly but gently by pressing the pipette tip to the bottom of the tube and pipetting five times to obtain a single-cell suspension. The cells were plated at a density of 2 × 10⁵ cells/ml in complete NSC medium supplemented with EGF (20 ng/ml; Stemcell Technologies). Five to 6 days later, the neurospheres were treated with Accutase (Stemcell Technologies) and cultured for an additional 5–6 days. The neurospheres of the second passage were used for experiments. NPC proliferation assay NPCs were cultured in 96-well plates in complete NeuroCult proliferation medium for 1 day. CellTiter 96 AQueous One Solution (20 μl/well; Promega) was added to the wells. After an additional 2-h culture, the absorbance of the wells at 490 nm was registered with an ELISA reader. Flow cytometry For cell cycle analysis, NPCs were blocked at the G1 phase with aphidicolin (Millipore Sigma; 12 μM) for 8 h. The cells were released into the S phase by washing three times with NeuroCult basal medium and then incubated with complete NeuroCult proliferation medium. NPCs were collected 0, 2, 4, 6, 8, 10, and 24 h later, fixed with 70% ethanol, and stained with propidium iodide for cell cycle analysis by flow cytometry. For apoptosis analysis, NPCs were cultured without EGF or with different concentrations of EGF for 20 h. A single-cell suspension was obtained by treating the cells with Accutase. The cells were stained with annexin V (1:50 dilution; BD Biosciences) and analyzed by flow cytometry. LC–MS/MS HEK293 cells were cultured in DMEM supplemented with 10% fetal bovine serum and 2 mM glutamine and transfected with FLAG-tagged ARMC5- or ARMC5 R315W-expressing plasmids using Jet Prime Transfection Reagent (PolyPlus). The transfected cells were incubated at 37 °C for 24 h, washed with PBS, pelleted, and snap-frozen until use.
Affinity purifications were performed in four independent replicate experiments as described previously [93]. The Speedvac-dried protein extracts were re-solubilized in 10 μl of a 6 M urea buffer, reduced (45 mM DTT, 100 mM ammonium bicarbonate) for 30 min at 37 °C, and alkylated (100 mM iodoacetamide, 100 mM ammonium bicarbonate) for 20 min at 24 °C. Proteins were digested in 10 μl of trypsin solution (5 ng/μl of trypsin, Promega; 50 mM ammonium bicarbonate) at 37 °C for 18 h. The digests were acidified with trifluoroacetic acid and cleaned with the Oasis MCX 96-well Elution Plate (Waters). Peptides were identified by LC–MS/MS using HPLC coupled to an Orbitrap Fusion mass spectrometer (Thermo Scientific) through a Nanospray Flex Ion Source. MS/MS raw data were searched against the human SwissProt database (updated on April 24, 2019) with the X!Tandem search engine using the ProHits software [94]. Spectral counts were transferred into Perseus (version 1.6.1.3) [95]. Proteins quantified in three out of four experiments for either WT ARMC5 or ARMC5 R315W were kept for further analysis. Spectral counts reported as 0 by X!Tandem were replaced by a randomly generated spectral count value normally distributed with a mean and S.D. equal to those of the lowest 20% of spectral count values from the LC–MS/MS analysis. Spectral counts were normalized by the spectral count of the bait (ARMC5) to allow comparison between different purifications. Proteins in the WT ARMC5 and ARMC5 R315W precipitates were compared to the FLAG empty vector control samples and were labeled as high-confidence interactors when their p-value was under 0.05 and their spectral count ratio was over 1.5. Statistically significant differences between proteins from the WT ARMC5 and ARMC5 R315W precipitates were determined using a two-tailed t-test. The p-values were subsequently adjusted for multiple testing using a Benjamini–Hochberg-based test [96]. An FDR of 5% was applied using an s0 correction factor of 0.1. The level of differential interaction was considered statistically significant when the FDR was < 0.05 and the average spectral count fold change between WT ARMC5 and ARMC5 R315W was greater than 2 in either direction. Immunoprecipitation and immunoblotting Cells or tissues (i.e., human neuronal SK-N-SH cells or HEK293 cells transfected with human WT ARMC5- or mutant ARMC5-expressing plasmids, mouse NPCs, mouse e9.5 neural tubes or mouse intestine, and MEFs) were lysed in RIPA buffer (25 mM Tris, pH 7.6, 150 mM NaCl, 1% Nonidet P-40, 0.1% SDS) supplemented with protease inhibitors and phosphatase inhibitors (Roche Diagnostics). For immunoprecipitation, 0.5 mg of protein was incubated with mouse anti-HA mAb (clone HA-7; Sigma), mouse anti-ubiquitin mAb (clone F-11; Santa Cruz), mouse anti-POLR2A mAb (clone F-12; Santa Cruz Biotech), or mouse anti-POLR2A mAb (clone 4H8; BioLegend) overnight, and then with protein G pre-conjugated agarose beads for an additional 2 h at 4 °C with rotary agitation. The beads were washed with lysis buffer four times and eluted in SDS-loading buffer. For immunoblotting, the lysates were resolved by 6 to 12% SDS-PAGE and transferred to nitrocellulose membranes. The membranes were blocked with 5% (w/v) milk in TBST (Tris-buffered saline, 0.05% Tween 20) and incubated with primary Abs overnight at 4 °C, followed by HRP (horseradish peroxidase)-conjugated secondary Abs for 1 h at room temperature. In some cases, HRP-conjugated primary Abs were used without a secondary Ab.
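Returning to the LC–MS/MS statistics above, the described pipeline (imputation of zero counts from the lowest-20% distribution, bait normalization, t-testing, and Benjamini–Hochberg correction) can be sketched in a few lines. The sketch below is a toy re-implementation under our own assumptions about the array layout; it illustrates the logic and is not the study's actual Perseus workflow.

```python
# Toy sketch of the spectral-count comparison between WT and R315W baits.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
wt = rng.poisson(8, size=(200, 4)).astype(float)    # prey x replicates, WT
mut = rng.poisson(8, size=(200, 4)).astype(float)   # prey x replicates, R315W
bait_wt, bait_mut = wt[0], mut[0]                   # row 0 = bait (toy choice)

def impute_zeros(counts):
    """Replace zeros with draws from the lowest-20% count distribution."""
    nonzero = counts[counts > 0]
    low = np.sort(nonzero)[: max(1, int(0.2 * nonzero.size))]
    fill = rng.normal(low.mean(), low.std() or 1.0, size=counts.shape)
    return np.where(counts == 0, np.abs(fill), counts)

wt_n = impute_zeros(wt) / bait_wt       # bait normalization per replicate
mut_n = impute_zeros(mut) / bait_mut

t, p = ttest_ind(wt_n, mut_n, axis=1)
reject, fdr, *_ = multipletests(p, alpha=0.05, method="fdr_bh")
fold = wt_n.mean(axis=1) / mut_n.mean(axis=1)
hits = (fdr < 0.05) & ((fold > 2) | (fold < 0.5))   # >2-fold, either direction
print(hits.sum(), "differential interactors (toy data)")
```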
The signal was revealed by Western Lightning Pro-ECL (PerkinElmer) or SuperSignal West Pico Chemiluminescent Substrate (Thermo Scientific) and detected with X-ray film. The following primary Abs or HRP-conjugated primary Abs were used for blotting: HRP-conjugated anti-HA mAb (clone 6E2; Cell Signaling Technology), mouse anti-FLAG mAb (clone M2; Sigma-Aldrich), mouse anti-CUL3 mAb (clone G-8; Santa Cruz), mouse anti-POLR2A mAb (clone 4H8; BioLegend), mouse anti-POLR2A mAb (clone 8WG16; BioLegend), rabbit anti-phospho-POLR2A-S2 mAb (clone E1Z3G; Cell Signaling Technology), rabbit anti-phospho-POLR2A-S5 mAb (clone D9N51; Cell Signaling Technology), mouse anti-ubiquitin mAb (clone F-11; Santa Cruz Biotech), mouse anti-FOLH1 mAb (clone OTI3H5; Novus Biologicals), rabbit anti-β-actin Ab (Cell Signaling Technology), rabbit anti-α-actinin mAb (clone D6F6; Cell Signaling Technology), mouse anti-K48-ubiquitin mAb (clone Apu2; Millipore/Sigma), mouse anti-POLR2A mAb (clone F-12; Santa Cruz), mouse anti-POLR2B mAb (clone E-12; Santa Cruz), rabbit anti-POLR2C mAb (clone EPR13294(B); Abcam), rabbit anti-POLR2D Ab (Abcam), mouse anti-POLR2E mAb (clone B-5; Santa Cruz), mouse anti-POLR2F mAb (clone E-8; Santa Cruz), mouse anti-POLR2G mAb (clone C-2; Santa Cruz), mouse anti-POLR2H mAb (clone B8-1; Santa Cruz), mouse anti-POLR2I mAb (clone F-11; Santa Cruz), mouse anti-POLR2J mAb (clone G-2; Santa Cruz), rabbit anti-POLR2K Ab (ThermoFisher), and rabbit anti-POLR2L Ab (ThermoFisher). Construction of a 3D model of the E3 and Pol II complex The structure of ARMC5 was obtained from AlphaFold2 [64, 65]. The structures of Pol II and other components of the E3 were extracted from the Protein Data Bank (PDB). ChimeraX [97] was used to construct the 3D model of the complex. The components in the complex were positioned according to the information derived from our previous deletion studies [45] and PDB. RNA-seq The total RNA of three pairs of biological replicates of KO and WT NPCs was extracted with the RNeasy kit (Qiagen). The total RNA was quantified using a NanoDrop Spectrophotometer ND-1000 (NanoDrop Technologies), and its integrity was assessed on a 2100 Bioanalyzer (Agilent Technologies). rRNA was depleted from 250 ng of total RNA using QIAseq FastSelect (Human 96rxns; Qiagen). Library construction, quantification, normalization, and sequencing were conducted as described elsewhere [45]. Data processing such as read trimming, clipping and alignment, and differential expression analysis was carried out as described previously [45]. The mapping was conducted at the transcript level. Of the original transcripts, 47,059 transcripts remained after filtering. It is to be noted that one gene could have several different transcript isoforms due to alternative splicing or the use of varying initiation sites. Each gene was tested for differential expression between WT and KO NPCs with an edgeR likelihood ratio test (LRT). Due to the concern that the augmented Pol II pool caused by ARMC5 deletion might generally affect all gene transcription in the KO NPCs, the raw counts of each transcript were normalized by the ratio between the log2 counts per million reads of Rn7sk of a particular sample and the average Rn7sk log2 counts per million reads across all samples. Rn7sk is transcribed by Pol III and is thus independent of the putative influence of the Pol II pool size.
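To make the Rn7sk-anchored normalization concrete, the sketch below implements one plausible reading of "normalized by the ratio": each sample's counts are divided by its Rn7sk log2-CPM relative to the across-sample mean. The toy data and the Rn7sk stand-in transcript are ours, and the exact implementation details in the study may differ.

```python
# Sketch of Rn7sk-anchored count normalization (transcripts x samples).
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
counts = pd.DataFrame(rng.poisson(20, size=(1000, 6)),
                      index=[f"tx{i}" for i in range(1000)],
                      columns=[f"s{i}" for i in range(6)])
counts.loc["tx0"] += 500            # pretend tx0 is the Rn7sk transcript
rn7sk = "tx0"

cpm = counts / counts.sum(axis=0) * 1e6          # counts per million
log2_rn7sk = np.log2(cpm.loc[rn7sk] + 1)
scale = log2_rn7sk / log2_rn7sk.mean()           # per-sample factor
normalized = counts / scale                      # divide out Rn7sk shift
print(normalized.iloc[:3, :3].round(1))
```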
This normalization was used instead of edgeR::calcNormFactors, which uses a trimmed mean of M-values normalization by default. The heatmap was constructed using the R package pheatmap. The volcano plots and bar plots were produced using ggplot2 in R v3.6.3. Based on a threshold for gene-level significance of 5% FDR, GO analysis of the RNA-seq data was performed using the Cytoscape v3.7.2 application ClueGO v2.5.6. The UniProt Gene Ontology annotations were used to classify the GO terms. POLR2A ChIP-Seq Three biological replicate pairs of KO and WT NPCs were washed with ice-cold PBS twice and re-suspended in 1 ml PBS. They were crosslinked by adding 66.7 μl of 16% formaldehyde (1% final) at room temperature for 15 min. The reaction was quenched with 107 μl of 1.25 M glycine (0.125 M final) at room temperature for another 10 min in rotating tubes. The samples were centrifuged, and the pellets were washed twice with ice-cold PBS. The crosslinked pellets were suspended in 300 μl swelling buffer (25 mM HEPES, pH 7.5, 1.5 mM MgCl₂, 10 mM KCl, 0.1% NP-40) and incubated on ice for 20 min to release nuclei. The nuclei were harvested by centrifugation, re-suspended in 200 μl ChIP sonication buffer (50 mM HEPES, pH 7.5, 140 mM NaCl, 1 mM EDTA, 0.1% Na-deoxycholate, 1% Triton X-100, 0.1% SDS), and incubated on ice for 20 min. The nuclei were sonicated with a probe-based sonicator (model FB120, CL-18 probe; Fisher Scientific) at a 25% amplitude setting. The sonication was conducted using 30-s pulses at 30-s intervals for a total of 5 min. The sonicated nuclei representing chromatin were harvested by centrifugation and were ready for immunoprecipitation. To quantify chromatin and assess the degree of its fragmentation, we treated 5% of the sonicated nuclei (10 μl/sample) with 10 μg of RNase A for 15 min at 37 °C, followed by 20 μg of proteinase K for 30 min at 65 °C. They were quickly de-crosslinked for 5 min at 95 °C. DNA was extracted with the QIAquick PCR Purification Kit (Qiagen). DNA concentration was determined with a NanoDrop 1000 fluorospectrometer. DNA fragment sizes were confirmed to be mainly in the 100–800-bp range by electrophoresis. For immunoprecipitation, an equal amount (based on the DNA measurements) of sonicated chromatin from different samples was reacted with anti-POLR2A N-terminal domain mAb (clone D8L4Y; Cell Signaling Technology; 1:100) at 4 °C overnight, followed by 40 μl magnetic protein G beads (Bio-Rad) for another 2 h at 4 °C. The beads were rinsed once with sonication buffer, once with wash buffer A (50 mM HEPES, pH 7.5, 500 mM NaCl, 1 mM EDTA, 0.1% Na-deoxycholate, 1% Triton X-100, 0.1% SDS), once with wash buffer B (20 mM Tris, pH 8.0, 250 mM LiCl, 1 mM EDTA, 0.5% NP-40, 0.5% Na-deoxycholate), and then twice with TE buffer (10 mM Tris, pH 8.0, 1 mM EDTA). The chromatin was eluted with elution buffer (50 mM Tris, pH 8.0, 10 mM EDTA, 1% SDS) at 65 °C for 10 min. The immunoprecipitated chromatin was de-crosslinked at 65 °C overnight with NaCl adjusted to 540 mM. The chromatin was then treated with 10 μg RNase A/sample at 37 °C for 1 h, followed by 40 μg proteinase K/sample for 2 h at 45 °C. DNA of the samples was purified with the QIAquick PCR Purification Kit (Qiagen) and quantified by the Bioanalyzer (Agilent). Libraries were prepared robotically with 2–10 ng of fragmented DNA ranging from 100 to 300 bp in length, using the NEBNext Ultra II DNA Library Prep Kit for Illumina (New England BioLabs), as per the manufacturer's recommendations.
Adapters and PCR primers were purchased from Integrated DNA Technologies. Size selection was carried out using SparQ beads (Qiagen) prior to PCR amplification (12 cycles). Libraries were quantified using the Kapa Illumina GA with Revised Primers-SYBR Fast Universal kit (Kapa Biosystems). Average fragment sizes were determined using a LabChip GX (PerkinElmer) instrument. The library construction and sequencing were the same as described elsewhere [45]. Downstream data processing, such as ChIP-seq read trimming, alignment, peak calling, and annotation, was performed as described before [45]. To assess differences in Pol II occupancy patterns between WT and KO samples, we obtained ChIP-seq read counts within the following genomic regions using HOMER: the promoter region (from TSS (transcription start site) − 400 bp to TSS + 100 bp), the gene body (from TSS + 100 bp to TES (transcription end site) − 100 bp), the TES region (from TES − 100 bp to TES + 2000 bp; also called the downstream region), the 5′ untranslated region (5′UTR), introns, the 3′UTR, enhancers (from TSS − 5000 bp to TSS − 400 bp), the region from − 10,000 bp to TSS, the region from TSS to + 10,000 bp, and the intergenic region. Since the POLR2A levels in the KO tissues were elevated, we speculated that there would be more Pol II association with the genes, hence a higher POLR2A ChIP signal in the KO promoter regions than in the WT counterparts. Therefore, genes that lacked POLR2A ChIP-seq signal in the KO tissues were filtered out, as these genes were believed to have no signals in WT tissues either. Raw counts were normalized using edgeR's trimmed mean of M-values (TMM) algorithm [98] and were then transformed to log2 counts per million using the voom function implemented in the limma R package [99]. To construct the global metagene Pol II-binding profile, normalized read counts (fragments per kilobase of transcript per million mapped reads, FPKM) over the full gene length plus 2000-bp flanks (TSS − 2000 bp to TES + 2000 bp) were obtained from all the genes that passed the filtering. Both flanks were divided into 20 equal-sized bins of 100 bp each. The gene bodies were scaled to 60 bins for the full gene length. FPKM was calculated from BAM input files using ngs.plot [100] with the following parameters: -G mm10 -R genebody -D ensembl -FL 200 -BOX 0 -SE 1 -VLN 0 -LWD 2 -WD 9. These global metagene Pol II-binding profiles were only for visualization of differences in Pol II density, and, as is customary for such profiles, inferential statistics were not conducted. The peak count versus distance (− 10 kb to + 10 kb from TSS) profile was generated from 51 equal-sized bins of 400 bp covering this region for all the genes that passed filtering. Differential Pol II peak density analysis in WT and KO tissues was conducted as described before [45]. We calculated the pausing index for each gene as the ratio of Pol II signal density in the promoter region (from TSS − 400 bp to TSS + 100 bp) to the signal density within the gene body (from TSS + 100 bp to TES + 2 kb), i.e., pausing index = (promoter reads/L1)/(gene body reads/L2), where L1 is the length of the promoter region (always 500 bp) and L2 is the length of the gene body (variable). Genome browser tracks were created with the HOMER makeUCSCfile command and the bedGraphToBigWig utility from UCSC. Tracks were normalized so that each value represented the read count per base pair per 10 million reads. The UCSC Genome Browser ( http://genome.ucsc.edu/ ) was used for track visualization.
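The occupancy windows and the pausing index defined above translate directly into code. The sketch below ignores strand for brevity (the real analysis via HOMER is strand-aware) and uses invented coordinates and read counts purely for illustration.

```python
# Sketch of the occupancy windows and pausing index defined above.
def occupancy_windows(tss: int, tes: int) -> dict:
    return {
        "promoter":   (tss - 400, tss + 100),
        "gene_body":  (tss + 100, tes - 100),
        "tes_region": (tes - 100, tes + 2000),
        "enhancer":   (tss - 5000, tss - 400),
    }

def pausing_index(promoter_reads: float, body_reads: float,
                  tss: int, tes: int) -> float:
    l1 = 500                            # promoter: TSS - 400 to TSS + 100
    l2 = (tes + 2000) - (tss + 100)     # gene body: TSS + 100 to TES + 2 kb
    return (promoter_reads / l1) / (body_reads / l2)

tss, tes = 1_000_000, 1_020_000
print(occupancy_windows(tss, tes)["promoter"])
print(round(pausing_index(250, 1200, tss, tes), 2))   # ~9.12 with toy counts
```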
Nuclear run-on assay Nuclear run-on assays were carried out according to a step-by-step protocol by Roberts et al. [101]. Briefly, nuclei from 4 × 10⁶ KO or WT NPCs were collected and transcribed with Br-UTP and other NTPs. Nuclear RNA was extracted using the MEGAclear transcription clean-up kit (Life Technologies), and genomic DNA contamination was removed using the TURBO DNA-free kit (Life Technologies). Br-UTP-incorporated nascent transcripts were precipitated with anti-BrdU mAb (Santa Cruz Biotechnology), extracted, and reverse-transcribed using a high-capacity cDNA reverse transcription kit (Invitrogen). qPCR was performed to quantify the nascent mRNA. To empirically determine the sensitivity of detecting nascent transcripts and the purity of Br-UTP-incorporated nascent transcripts over UTP-containing transcripts, we spiked the test samples, before reverse transcription of the nuclear run-on reactions, with separately prepared control bacterial oligonucleotides with or without incorporated Br-UTP at known concentrations. MM study cohort A total of 511 subjects were selected for whole-exome sequencing from an MM study cohort enrolled in spina bifida clinics in five locations in North America between 1997 and 2010 [102]. All subjects were consented and enrolled in accordance with the Institutional Review Board at the University of Texas Health Science Center at Houston. In total, samples of 257 MM subjects of European descent, comprising 140 females and 117 males, and 254 Mexican–American MM subjects, comprising 134 females and 120 males, were sequenced. Three hundred and sixty-five of the study subjects (over 70%) were born before January 1998, when the North American countries mandated folic acid fortification of food crops. Sixty subjects were born in 1998, and 86 after 1998. Blood samples were collected from the subjects, and genomic DNA was extracted for the study. Exome sequencing and variant annotation Exome library probes were made from an in-house design based on TargetSeq (Invitrogen) with the addition of splice sites, UTRs, small non-coding RNAs (e.g., microRNAs), a selection of miRNA binding sites, and 200-bp promoter regions. High-quality genomic DNA samples were processed using the exome library probes, and the captured DNA products were sequenced following the manufacturer's standard protocol for multiplexed sequencing using the P1 chip on the Ion Proton platform (Invitrogen). Quality of sequencing was maintained at 40–60 million reads/sample with read lengths between 120 and 150 bases, and over 75% of reads were on target for all successfully sequenced samples. Other quality controls were implemented to map around 45,000–60,000 single-nucleotide variants (SNVs) per sample with ~50% heterozygous variants and a transition/transversion ratio of around 2.5. Samples that failed to meet the above quality criteria were repeated or substituted by another subject's DNA. Sequence data that passed the above variant- and sample-quality filters were processed to call variants using Genome Analysis Toolkit HaplotypeCaller version 3.x, following best-practice guidelines. Briefly, only variants designated a "PASS" by Variant Quality Score Recalibration and having a mapping quality score > 20, or an inbreeding coefficient < − 0.3, were retained for further analysis. Individual sample filters were used to retain only high-fidelity variants with an alternate allele depth > 25%, a read depth > 10, and a genotype quality score > 20.
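The per-sample genotype filters quoted above can be expressed compactly in code. The sketch below applies them to a toy variant record; the field names mimic VCF conventions and are our assumptions, not the study's actual pipeline code.

```python
# Sketch of the per-sample genotype filters described above.
def passes_filters(alt_depth: int, total_depth: int, gq: int) -> bool:
    alt_fraction = alt_depth / total_depth if total_depth else 0.0
    return alt_fraction > 0.25 and total_depth > 10 and gq > 20

print(passes_filters(alt_depth=6, total_depth=18, gq=35))   # True
print(passes_filters(alt_depth=2, total_depth=18, gq=35))   # False (11% alt)
```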
The allele count, allele number, and allele frequency were recalculated for individual ethnicities after the filtering processes. Filtered high-quality SNVs were annotated using the non-synonymous SNV functional predictions database [103] with an in-house Python script to collect all currently publicly available functional prediction information. Further analyses were focused on single SNVs leading to stop-gained, stop-lost, non-synonymous, splice donor, and splice acceptor site changes in canonical transcripts. Novel functional deleterious SNV analysis To analyze SNVs, we referred to the allele frequencies (AFs) of variants observed in the non-Finnish European and Ad Mixed American populations of the genome aggregation database (gnomAD) Exome Controls [104]. Variants not observed in the non-Finnish European or Ad Mixed American gnomAD Exome Controls, or having an ethnic allele frequency of 0, were defined as novel SNVs (nSNVs). Datasets of non-Finnish Europeans or Ad Mixed Americans in gnomAD Exome Controls were downloaded for extracting alternate allele counts and total allele counts of all variants identified in MM subjects for comparison, using the sample filters described previously [105]. For novel variants identified in subjects but not in gnomAD, we further verified that the loci were sequenced in gnomAD with ≥ 30X coverage and that the corresponding variants were absent. Loci with < 30X coverage were considered poor in quality and were discarded. Loci sequenced at ≥ 30X in the gnomAD controls without the variant were interpreted as having the reference allele only, and the alternate allele frequency was considered to be zero. nSNVs identified in the MM subjects were further verified by Sanger sequencing. PCR primers flanking 200 to 300 bases from the variants were designed to amplify the variant-containing loci from the MM subjects. The amplified loci were then sequenced. Variants with an allele frequency in non-Finnish Europeans or Ad Mixed Americans of less than 0.01 were defined as rare, while an allele frequency ≥ 0.01 was defined as common. The Combined Annotation Dependent Depletion (CADD) score [71] (C-score) of variants was used as a model to predict deleteriousness. The C-score is −10 × log10 of a variant's percentile rank of deleteriousness. A variant with a C-score of 13.01 is among the top 5% of most deleterious variants, and a variant with a C-score of 20 is among the top 1%. For alternate allele counts between the MM subjects and gnomAD Exome Controls, odds ratios were calculated, and Fisher's exact tests were performed. Analysis of variants within the ARMC5 transcript (NM_001288767) for linkage disequilibrium (LD) was carried out using LDlink [106]. Construction of mutant ARMC5-expressing plasmids Plasmids expressing human ARMC5 mutants ARMC5(P33S)-FLAG, ARMC5(R334C)-FLAG, ARMC5(R406Q)-FLAG, ARMC5(G422S)-FLAG, ARMC5(P559L)-FLAG, and ARMC5(R793Q)-FLAG were generated by mutating the WT ARMC5-FLAG-expressing plasmid (EX-H0661-M11, GeneCopoeia, Rockville, US), using KOD Xtreme Hot Start DNA polymerase (71975; Millipore-Sigma, US) and the Q5 Site-Directed Mutagenesis Kit (E0554S, New England Biolabs, ON, Canada).
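As a numerical check on the C-score definition and the allele-count comparisons described above, the sketch below re-derives the quoted C-score values from their percentile ranks and runs a Fisher exact test on a 2 × 2 allele-count table. The allele counts are invented purely for illustration.

```python
# Worked check of the CADD C-score relationship and the Fisher exact test.
import math
from scipy.stats import fisher_exact

for pct in (0.05, 0.01):
    # top 5% -> 13.01, top 1% -> 20.00, matching the text
    print(f"top {pct:.0%}: C-score = {-10 * math.log10(pct):.2f}")

# 2x2 table: [alt, ref] allele counts in MM subjects vs gnomAD controls.
table = [[5, 509], [20, 50000]]
odds_ratio, p = fisher_exact(table)
print(f"OR = {odds_ratio:.1f}, p = {p:.3g}")
```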
Results Increased incidence of NTD in Armc5 KO mice To explore unknown functions of a molecule, we often first assess where it is expressed. In situ hybridization revealed that Armc5 was expressed in all tissues of the e10 mouse embryo and was highly expressed in the neural tube (Fig. 1 A). RT-qPCR showed that Armc5 was expressed in the neural tubes as well as their surrounding tissues on e8.5 and e9.5 (Fig. 1 B). On e10.5, according to RT-qPCR, Armc5 expression in the whole neural tube tended to be higher than that in the surrounding tissues, but the difference did not reach significance at the present sample size (n = 3) (Fig. 1 B). This is compatible with the in situ finding that on e10, only in a small section of the neural tube was Armc5 expression higher than in the surrounding tissues. We previously reported that ARMC5 was expressed predominantly in the cytosol [19]. However, we later found that if nuclear export was blocked, ARMC5 appeared in the nuclei of transfected HEK293 cells [45]. To determine the location of ARMC5 expression in neuronal cells, we transfected SK-N-SH neuronal cells with ARMC5-expressing constructs. Tagged ARMC5 was found in both the cytosol and the nuclei (Fig. 1 C). The microscopy image was deposited in Figshare [46]. The presence of ARMC5 in the neural tube of embryos and in the nuclei of cells prompted us to assess the role of Armc5 in neural tube development and its activity in the nuclei. In addition to smaller body sizes, as reported before [19], live-born Armc5 KO mice in the CD1 x C57BL/6 F1 background presented significantly higher incidences of kinky tails, a form of NTD [47, 48], upon visual inspection (Fig. 1 D) or micro-CT imaging (Fig. 1 E). Kinky tails were observed in 27.7% of KO mice at weaning, compared to 3.7% of the wild-type (WT) counterparts (Fig. 1 F). The incidence rates did not show any apparent sex bias, at 27.6 and 27.9% in male and female KO mice, respectively. Between e9.5 and e13.5, about 14.9% of KO embryos but 0% of WT embryos manifested exencephaly, a severe form of NTD. A representative image of an e12.5 KO fetus with exencephaly is shown in Fig. 1 G, and a bar graph summarizing the incidence in KO and WT fetuses between e9.5 and e13.5 is presented in Fig. 1 H. The KO mice from heterozygous male and female parents were born below the Mendelian ratio. Only about 10% of the live pups at weaning were KO, instead of the expected 25%. This suggests a high degree (15/25 = 60%) of embryonic/perinatal lethality in KO mice. At e13.5, the percentage of KO fetuses (including those with exencephaly) was still at the expected Mendelian ratio of 25%. Thus, 60% of KO fetuses/newborns must have died between e13.5 and weaning (Fig. 1 I; yellow triangle). The mice with kinky tails at weaning (27.7%) represented 11.1% of the KO fetuses at e9.5–e13.5 (Fig. 1 I, upper panel). Hence, the NTD incidence (14.9% with exencephaly, which must have died before birth, plus 11.1% live mice with kinky tails) is 26% among all the KO mice (dead or live) (Fig. 1 I, upper panel), compared to 3.7% among the WT mice (Fig. 1 I, lower panel). Heterozygous embryos/fetuses and mice had no increased NTD incidence compared to the WT ones. The results of this section indicate that ARMC5 mutation is an NTD risk modifier in mice. Decreased proliferation and increased apoptosis of KO cells implicated in neural tube development Neural tube closure requires the coordination of many cellular processes, such as proliferation and apoptosis.
We thus evaluated these processes in the KO cells that were implicated in neural tube development. The apoptosis of cells in the e9.5 neural plate was evaluated using a fluorescent terminal deoxynucleotidyl transferase dUTP nick end-labeling (TUNEL) assay (Fig. 2 A). The microscopy image was deposited in Figshare [49]. The number of TUNEL-positive apoptotic cells (pseudo-green) in the neural plate at the rostral hindbrain level, as well as in the surrounding tissues, of the KO mice with exencephaly was significantly higher than that of the WT counterparts. It is to be noted that only KO embryos with exencephaly were assessed for apoptosis in their neural plates. This was because we only wanted to test KO embryos that definitely had NTD, and at this stage only those with exencephaly could be identified as such with certainty. At e9.5, the kinky tail phenotype had not yet appeared, and most (74%) of the KO embryos did not have NTD; randomly testing KO embryos would therefore have included many without NTD. It is very challenging to obtain neural tube cells in sufficient numbers and homogeneity for biochemical analysis. An alternative for conducting biochemical studies of neural tube development at the molecular level is to use neural stem cells and neural progenitor cells (collectively called neural progenitor cells (NPCs) in this work), which are known to be involved in neural tube development [50, 51]. We isolated these cells from the e13.5 KO and WT brains and spinal cords and expanded them in vitro for 10–12 days according to the standard NPC preparation protocol [52]. The WT and KO e13.5 brains used for NPC preparation were harvested based on genotype. As long as the NPCs retain their progenitor cell characteristics, they resemble the NPCs present in the neural tube. SOX2 and NESTIN are markers of NPCs, and their expression in the KO and WT NPCs was similar according to RNA-seq (GEO; accession number GSE169350) [53]. The purity of the WT and KO NPCs we prepared was routinely about 85%, according to SOX2 and NESTIN staining. A typical SOX2 and NESTIN staining of WT NPCs is presented in Fig. 2 B. The microscopy image was deposited in Figshare [54]. These NPCs were used to study proliferation and apoptosis in this section, as well as for transcriptome and POLR2A ChIP-seq analyses. NPC growth is EGF (epidermal growth factor)-dependent in vivo and in vitro [55, 56]. The KO NPCs proliferated significantly more slowly than WT ones at different input cell numbers in the presence of EGF (Fig. 2 C). To test the responsiveness of WT and KO NPCs to EGF stimulation, we also cultured the NPCs at a constant input cell number but at different EGF concentrations. KO NPCs responded poorly to EGF (Fig. 2 D). To further prove the proliferative defect of KO NPCs, we showed that KO NPCs synchronized at the G1 phase progressed more slowly into the S phase compared to their WT counterparts (Fig. 2 E). At all the time points after the release, except 0 and 24 h, fewer KO cells were found in the S phase, indicating slower S-phase entry. Conversely, more G1-phase KO NPCs were found at most of these time points, as expected. NPCs will undergo apoptosis upon sudden EGF withdrawal [57]. This is a convenient model to assess NPC apoptosis. EGF withdrawal-induced apoptosis of KO and WT NPCs was measured by annexin V staining followed by flow cytometry. Representative histograms are shown (Fig. 2 F), and a bar graph quantifies the results of all the independent experiments (Fig. 2 G).
Both KO and WT NPCs manifested an apoptosis rate inversely correlated with EGF concentration, but KO NPCs showed a significantly higher degree of apoptosis. These results indicate that ARMC5 deletion compromises the proliferation and increases the apoptosis of cells involved in neural tube development in mice.

ARMC5 physically interacts with CUL3 and POLR2A

We previously conducted a yeast 2-hybrid assay to identify ARMC5-binding proteins. Seventeen significant hits were obtained. CUL3 and POLR2A were among the top six on the list [ 19 ]. To validate these findings, we transfected HEK293 cells with FLAG-tagged ARMC5-expressing plasmids. CUL3 and POLR2A were significantly associated with ARMC5 (false discovery rate (FDR) < 0.05 and fold change > 2) according to anti-FLAG Ab immunoprecipitation followed by liquid chromatography with tandem mass spectrometry (LC–MS/MS) (Fig. 3 A). The dataset is available in ProteomeXchange (accession number PXD047533) [ 58 ]. This was consistent with our previous immunoprecipitation/immunoblotting results in HEK293 cells [ 45 ]. However, the current LC–MS/MS results revealed a new finding: multiple other Pol II subunits (i.e., POLR2B, 2C, 2H, and 2I), in addition to POLR2A, were also associated with ARMC5, suggesting the possibility that these Pol II subunits are also substrates of this novel E3. Additional validation of the interaction among ARMC5, CUL3, and POLR2A was carried out employing immunoprecipitation followed by immunoblotting in neuronal cells, which are more relevant to NTD than HEK293 cells. SK-N-SH human neuronal cells were transfected with plasmids expressing human ARMC5-HA. The cell lysates were precipitated with anti-HA Ab and then immunoblotted with anti-POLR2A Ab (Fig. 3 B) or anti-CUL3 Ab (Fig. 3 C). Endogenous POLR2A (Fig. 3 B) and CUL3 (Fig. 3 C) were detected in the precipitates, confirming that ARMC5 physically interacted with these two molecules. In the ARMC5 immunoblotting, there were always two prominent bands, one at 130 kD and the other at 100 kD. The smaller band’s intensity varied in different experiments. Our previous study using step-wise deletion of ARMC5 confirmed that the 100-kD fragment was a cleavage product of the full-length 130-kD ARMC5 and not an isoform initiated from a downstream start codon during translation [ 45 ]. The results of this section show that ARMC5 interacts with both CUL3 and POLR2A.

ARMC5 was part of a POLR2A-specific E3 responsible for POLR2A degradation under a physiological condition

CUL3 is often part of a multiple-subunit RING-finger E3 complex, in which CUL3 interacts with the RING-finger protein RBX1 [ 59 ]. In such complexes, CUL3 also interacts with a BTB domain-containing protein, which serves as the E3 substrate recognition subunit [ 60 ]. Since ARMC5 contains a BTB domain towards its C-terminus and interacted with CUL3 and POLR2A, we hypothesized that it was the substrate recognition subunit of an E3 whose substrate was POLR2A. One of the consequences of protein ubiquitination, particularly K48-linked ubiquitination, is to channel substrate proteins to the proteasome for degradation [ 32 ]. If ARMC5-CUL3-RBX1 is a POLR2A-specific E3, we would expect an accumulation of POLR2A protein in the ARMC5 KO tissues. The C-terminal domain (CTD) of POLR2A of humans and mice contains 52 tandem heptapeptide repeats (Tyr1-Ser2-Pro3-Thr4-Ser5-Pro6-Ser7). The phosphorylation status of different serine residues in the CTD reflects Pol IIs at different stages of transcription.
POLR2A without CTD phosphorylation is present in Pol II at the preinitiation stage at the promoter. S5 phosphorylation occurs when POLR2A is at the beginning of transcription, i.e., at the transcription start site (TSS), while S2 phosphorylation occurs when Pol II moves towards the end of the gene, i.e., at the transcription end site (TES) [ 61 ]. MAb F12 binds to the N-terminus of POLR2A regardless of the latter’s phosphorylation status; such POLR2A is present in all the Pol IIs along the whole length of genes, i.e., at the promoter region, TSS, gene body, and TES. MAb 4H8 recognizes POLR2A with hyper- and hypo-phosphorylated CTD. This mAb is slightly different from mAb F12 in that 4H8 does not bind to the un-phosphorylated POLR2A at the preinitiation stage. Our immunoblotting results showed that hyper- and hypo-phosphorylated POLR2A (recognized by mAb 4H8) (Fig. 4 A), the total POLR2A (recognized by mAb F12) (Fig. 4 B), POLR2A with CTD S2 phosphorylation (recognized by anti-P-S2 mAb) (Fig. 4 C), and POLR2A with CTD S5 phosphorylation (recognized by anti-P-S5 mAb) (Fig. 4 D) were all increased in KO neural tubes and NPCs. These results indicate that POLR2As at different stages of transcription are all increased under a physiological condition. The mRNA levels of POLR2A in the KO neural tubes (Fig. 4 E) and NPCs (Fig. 4 F) were not upregulated. This confirms that the increased POLR2A protein levels were a post-transcriptional event. As a matter of fact, the POLR2A mRNA level in the KO NPCs was even reduced (Fig. 4 F) due to some unknown mechanism after ARMC5 deletion. As a reduced mRNA level normally translates into a reduced protein level, the increased POLR2A protein level in the KO NPCs, accompanied by their reduced POLR2A mRNA level, suggests that the post-translational upregulation of POLR2A is more prominent than it appears to be. Is POLR2A accumulation in the KO cells due to compromised ubiquitination? We evaluated POLR2A ubiquitination in KO NPCs. Total POLR2A ubiquitination was reduced in KO NPCs in the presence of the proteasome inhibitor MG132 (which prevented rapid degradation of ubiquitinated proteins by the proteasome), supporting our hypothesis that ARMC5 is part of a POLR2A-specific E3 (Fig. 5 A). K48-linked ubiquitination is the major type of polyubiquitination for proteasome-mediated protein degradation [ 62 ]. K48-linked POLR2A ubiquitination was thus evaluated. Total POLR2A in the KO NPCs was precipitated by mAb F12, which binds POLR2A regardless of its CTD phosphorylation status, and then blotted with Ab against K48-linked ubiquitin. In this experiment, a limited amount of mAb F12 was used to pull down a similar amount of POLR2A in WT and KO NPC samples (second row, Fig. 5 B). The results showed that the KO NPCs had significantly lower levels of K48-linked ubiquitination of total POLR2A (top row, Fig. 5 B). A similar result was obtained when mAb 4H8, which is against both hyper- and hypo-phosphorylated POLR2A, was used for immunoprecipitation (Fig. 5 C). In these experiments (Fig. 5 B, C), the presence of MG132 augmented the signals of ubiquitinated POLR2A, indicating that the proteasome constantly removed K48-ubiquitinated POLR2A in the absence of the inhibitor. The residual K48-linked ubiquitinated POLR2A in the KO cells is probably generated by some other E3s.
Using a different approach to prove the role of ARMC5 in POLR2A ubiquitination, we overexpressed FLAG-tagged ARMC5 in HEK293 cells, along with an HA-tagged mutant ubiquitin that only allowed K48-linked polyubiquitination. ARMC5 overexpression in these cells resulted in enhanced K48-linked POLR2A ubiquitination (Fig. 5 D). This result conversely corroborates the conclusion obtained from KO NPCs. The results in this section reveal that ARMC5 physically interacts with CUL3 and POLR2A and is part of a POLR2A-specific E3 under a physiological condition. This E3 controls POLR2A degradation via the latter’s K48-linked ubiquitination and shows no discrimination with regard to the POLR2A CTD phosphorylation status.

ARMC5 controls the degradation of most subunits of Pol II under physiological conditions

The presence of multiple Pol II subunits in the ARMC5 precipitates according to proteomics (Fig. 3 A) raised an intriguing possibility that this novel ARMC5-containing E3 was involved in the degradation of not only POLR2A but also other Pol II subunits that are associated with POLR2A. We assessed the protein levels of all 12 Pol II subunits in WT and KO mouse embryonic fibroblasts (MEFs). Quite unexpectedly, in the KO MEFs, all the subunits accumulated drastically according to immunoblotting (Fig. 6 A). The statistical analyses are presented in Fig. 6 B. Although the increase in POLR2C did not reach statistical significance, probably due to a high degree of inter-experimental variation, the tendency of the increase in the KO cells is obvious. The mRNA levels of these subunits in the KO and WT MEFs had no significant difference except for Polr2h, which had a very moderate increase in the KO MEFs (Fig. 6 C). These results indicate compromised degradation of most, if not all, Pol II subunits in the absence of ARMC5. Based on the novel results presented in this study, our previous detailed deletion studies of the interacting regions among ARMC5, POLR2A, and CUL3 [ 63 ], and the structural information found in the Protein Data Bank and AlphaFold2 [ 64 , 65 ], we constructed a 3D model of a complex containing this multi-subunit RING finger family E3 and its substrate Pol II (Fig. 6 D). In this complex, ARMC5 functions as the substrate (POLR2A) recognition subunit of the E3, which likely acts not only on its direct target POLR2A but also on all the other Pol II subunits in its vicinity. Our previous study demonstrated that this E3 forms a dimer, with two ARMC5s linked together via their ARM domains [ 45 ]. Thus, a dimeric ARMC5-containing E3 interacting with two Pol II complexes is illustrated in this 3D model.

The impact of ARMC5 KO on the NPC transcriptome

The significant accumulation of almost all the Pol II subunits in the KO cells suggests an enlarged Pol II pool size. Since most Pol IIs in the nuclei are known to engage with genes [ 66 ], this enlarged Pol II pool also likely does so. The consequence of an enlarged Pol II pool is a topic that has not been well studied. Is there generally decreased transcription due to the failure to degrade stalled Pol II, or generally increased transcription because more Pol IIs are available? To answer these questions, we conducted RNA sequencing (RNA-seq) of KO and WT NPCs to evaluate their transcriptome. A total of 47,059 transcripts from 16,475 genes showed detectable expression in NPCs after filtering out those with less than one count per million reads. The RNA-seq dataset is available in the Gene Expression Omnibus (GEO; accession number GSE169350) [ 53 ].
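As an illustration of this filtering step, the following minimal Python sketch computes counts per million (CPM) and applies the one-CPM cutoff. The count matrix and sample names are hypothetical placeholders, and requiring at least 1 CPM in every sample is only one common convention, not necessarily the exact per-sample rule used in the analysis described above.

```python
import numpy as np
import pandas as pd

# Hypothetical raw count matrix: rows = transcripts, columns = six NPC
# samples (3 WT, 3 KO). The values are random placeholders.
rng = np.random.default_rng(0)
counts = pd.DataFrame(
    rng.poisson(20, size=(1000, 6)),
    columns=["WT1", "WT2", "WT3", "KO1", "KO2", "KO3"],
)

# Counts per million (CPM): scale each library to one million reads.
cpm = counts.div(counts.sum(axis=0), axis=1) * 1e6

# Keep transcripts with >= 1 CPM; here this is required in every sample,
# which is one common convention among several.
expressed = counts[(cpm >= 1).all(axis=1)]
print(f"{len(expressed)} of {len(counts)} transcripts pass the filter")
```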
Due to concerns that the abnormal Pol II pool size might systematically skew the expression of all transcribed genes, we employed Rn7sk RNA as an internal control to normalize each sample’s reads. Rn7sk is transcribed by Pol III and is therefore not subject to the possible influence of Pol II [ 67 , 68 ]. Indeed, Rn7sk expression in KO and WT NPCs was similar (Additional file 1: Fig. S1). A threshold for transcript-level significance of FDR < 0.05 was applied to the paired comparison of RNA-seq results from 3 KO and 3 WT NPC biological replicates. After filtering out transcripts that were not true positives (true positives were defined as having a complete exact match of intron chains, with a GffCompare class code of “=” [ 69 ]), we obtained 111 transcripts from 106 unique genes that showed significantly different expression between KO and WT NPCs. These transcripts and genes are listed in Additional file 1, Table S1, along with their FDRs, fold changes, and read numbers. It is to be noted that three genes (i.e., Fam172a, Slx1b, and Slc25a53) each had one upregulated transcript and one downregulated transcript (Additional file 1, Table S1). Two other genes (Pogk and Spg20) each had two transcripts, but both transcripts were upregulated. This resulted in 46 unique genes with upregulated transcripts and 63 unique genes with downregulated transcripts. The 55 transcripts in this list with the lowest FDRs are shown in a heatmap (Fig. 7 A). In this heatmap, Cnot1 appeared twice, once as an increased transcript and once as a decreased one, probably reflecting up- and downregulation of different isoforms of this gene. A volcano plot illustrates the fold change and FDR of the significantly changed genes, with several prominently changed ones annotated (Fig. 7 B). Armc5 was among the downregulated ones, as expected. One of the possible roles of POLR2A ubiquitination is to remove persistently stalled Pol II to allow transcription to resume in the case of DNA damage or cellular stress. Failure to remove the stalled Pol II is believed to cause a general decrease in transcription. However, to our surprise, there was no generalized depression of transcription in the KO NPCs according to RNA-seq. As mentioned above, only 111 transcripts from 106 unique genes were significantly dysregulated, 48 (43.2%) being upregulated and 63 (56.8%) downregulated (Fig. 7 C). For the vast majority of genes with detectable expression in NPCs (16,475 − 106 = 16,369 genes), expression was not influenced by the failed degradation of Pol II. To understand the roles of the dysregulated genes in NTD pathogenesis, we performed a gene ontology (GO) analysis of the significantly changed genes for their relationship to biological processes. Twenty-eight significant terms were identified. In addition, eight terms with high relevance to neural tube development were also chosen, even though they were not statistically significant. The GO terms, term p-values corrected with Bonferroni step-down, the number of genes associated with the terms, and the names of the associated genes are presented in Additional file 1, Table S2. Fifteen terms with known relevance to NTD were selected, and the number of significant genes related to each term is depicted in a bar graph (Fig. 7 D). This GO analysis will facilitate our future investigation of which dysregulated genes cause NTD and how.
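For readers unfamiliar with GO term enrichment, the minimal sketch below shows the over-representation test that underlies such analyses. It is illustrative only: all the numbers except the background and hit totals are placeholders, and the analysis above used a Bonferroni step-down (Holm) correction, for which the plain Bonferroni shown here is a simpler stand-in.

```python
from scipy.stats import hypergeom

def go_term_enrichment(n_background, n_term, n_sig, n_sig_in_term):
    """Over-representation p-value for one GO term: the probability of
    drawing at least n_sig_in_term annotated genes when n_sig genes are
    sampled from a background of n_background genes, n_term of which
    carry the annotation."""
    return hypergeom.sf(n_sig_in_term - 1, n_background, n_term, n_sig)

# Illustrative call: 16,475 expressed genes as background, a hypothetical
# term annotating 120 of them, and 6 hits among the 106 dysregulated genes.
p = go_term_enrichment(16475, 120, 106, 6)

n_terms = 500  # hypothetical number of GO terms tested
p_bonferroni = min(1.0, p * n_terms)  # simple Bonferroni, not step-down
print(p, p_bonferroni)
```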
The steady-state mRNA levels determined by RNA-seq reflect a combined outcome of mRNA transcription and degradation. To ascertain that the mRNA upregulation detected by RNA-seq in KO NPCs was genuine and not due to decreased mRNA degradation, we conducted nuclear run-on assays on several genes (i.e., Dnah9, Ifi44, Irf8, and Tgfb1) that were upregulated according to RNA-seq at the transcript level (Dnah9 and Ifi44) or at the gene level (Irf8 and Tgfb1). We confirmed that their de novo transcription was indeed upregulated, consistent with their steady-state mRNA levels according to RNA-seq (Fig. 7 E). The nuclear run-on assay was also used to assess another group of four genes (Gapdh, Rpl10, Rplp0, and Ubc) that were not modulated in the KO NPCs according to RNA-seq. As expected, their de novo transcription was similar in KO and WT NPCs. These results corroborate those of RNA-seq and indicate that the RNA-seq results largely reflect the transcription rates in our experiments. NTD has multifactorial pathogenic mechanisms, and dysfunctional NPCs probably contribute only partially to the pathogenic process. Other critical contributing factors include folate intake and metabolism [ 8 ]. Genes involved in folate metabolism are prominently expressed in tissues other than NPCs or neural tubes. In a separate project in which we conducted RNA-seq of the adrenal glands, we noticed that Folh1 expression in the KO tissue was significantly reduced (Fig. 7 F). This finding was confirmed by RT-qPCR (Fig. 7 G, left panel). In NPCs, the Folh1 mRNA level was too low to be detected by RNA-seq, but RT-qPCR showed significantly lower Folh1 expression in the KO NPCs (Fig. 7 G, right panel). More importantly, the FOLH1 protein level in the KO intestine was significantly reduced, the intestine being the site where FOLH1 exerts its function in folate absorption (Fig. 7 H). This transcriptome study indicates that in the KO NPCs, there is no generalized transcription suppression or upregulation. A subgroup of genes in NPCs and the intestine of KO mice is dysregulated. Some of these dysregulated genes have functions relevant to neural tube development and might contribute to NTD pathogenesis.

The effect of compromised POLR2A degradation on gene-associated Pol II peak density

The accumulation of Pol IIs in the KO cells raised the question of whether they were part of the stalled Pol IIs resulting from failed degradation. To answer this question, we conducted POLR2A ChIP-seq in NPCs, and the results were analyzed along with the RNA-seq data. A total of 12,107 genes had discernible ChIP-seq signals. The ChIP-seq dataset is available in GEO (accession number GSE169582) [ 70 ]. The distribution of Pol II peaks in different regions of the genome is illustrated in Fig. 8 A. In both KO and WT NPCs, the introns had the highest peak number, followed by intergenic regions and promoter regions. Within genes, the highest normalized Polr2a read counts (read counts per million (CPM) mapped reads) accumulated near the TSS (Fig. 8 B). Representative counts-per-million heatmaps for the region from − 2000 bp upstream of the TSS to + 2000 bp downstream of the TES of all genes in one pair of WT and KO samples are illustrated in Fig. 8 C. The Pol II peak density of all the genes for a fixed region spanning from − 10 kb to + 10 kb surrounding the TSS is shown in Fig. 8 D. These metagene analyses (Fig. 8 B–D) show no visible Pol II peak density differences between KO and WT NPCs. Such metagene analyses are for visual appreciation but are not suitable for statistical analysis.
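To make the two ChIP-seq summaries used here concrete, the sketch below outlines how a metagene profile and a per-gene pausing index can be computed from CPM-normalized coverage. The input format and function names are hypothetical simplifications of what dedicated ChIP-seq tools do, and the pausing-index windows follow the definitions given in the next paragraph.

```python
import numpy as np

def metagene_profile(coverage, tss, flank=10_000, bins=200):
    """Average CPM-normalized ChIP coverage in a fixed window around the
    TSS of every gene. `coverage` maps gene -> 1D array of per-base CPM
    over its chromosome; `tss` maps gene -> TSS coordinate (hypothetical,
    simplified input format)."""
    profile, n = np.zeros(bins), 0
    for gene, cov in coverage.items():
        center = tss[gene]
        if center - flank < 0 or center + flank > len(cov):
            continue  # skip genes too close to chromosome edges
        window = cov[center - flank: center + flank]
        # Collapse the 2 * flank positions into `bins` equal-width bins.
        profile += window.reshape(bins, -1).mean(axis=1)
        n += 1
    return profile / max(n, 1)

def pausing_index(cov, tss, tes):
    """PI = mean CPM in the TSS region (TSS - 400 bp to TSS + 100 bp)
    divided by mean CPM in the gene body (TSS + 100 bp to TES - 100 bp),
    matching the window definitions used in the text."""
    tss_signal = cov[tss - 400: tss + 100].mean()
    body_signal = cov[tss + 100: tes - 100].mean()
    return tss_signal / body_signal if body_signal > 0 else np.nan
```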
Statistical analysis of individual genes, however, revealed 59 genes with highly differential Pol II peak density (FDR < 0.1) in KO versus WT NPCs: 23 genes in the TSS region (from TSS − 400 bp to TSS + 100 bp), 33 genes in the gene body region (from TSS + 100 bp to TES − 100 bp), and three genes in the TES region (from TES − 100 bp to TES + 2000 bp) (Additional file 1: Table S3). Interestingly, except for three genes (i.e., Tex14, Ttyh1, and Adcyap1r1), these genes (56 out of 59) presented significantly increased peak density. The Pol II peak density tracks of four genes (i.e., Cdkn1a, Gadd45b, Mafa, and Pcdh8) that had the highest increase of Pol II peak density in the KO NPCs or had known relevance to NTD are illustrated in Fig. 8 E. The higher Pol II peak density in these genes was accompanied by increased mRNA levels, as confirmed by RT-qPCR (Fig. 8 F). This finding raises an interesting possibility that for a subset of genes, a larger Pol II pool might promote their transcription. We determined the pausing indices (PIs) of the KO and WT ChIP-seq results. The PI is defined as the ratio of Polr2a CPM in the TSS region versus that in the gene body region for all genes with ChIP-seq signals and is an indicator of transcriptional activity. Genes in the KO and WT NPCs had no significant differences in PI (Fig. 8 G), suggesting that there is no genome-wide Pol II pausing; this is consistent with the RNA-seq results.

ARMC5 mutation was a risk factor for human NTD

Having demonstrated the involvement of ARMC5 in mouse NTD, the logical next step was to study whether ARMC5 mutation was relevant to human NTD. We recruited a cohort of 511 patients with myelomeningocele (MM), a severe form of NTD. Single-nucleotide variants (SNVs) in ARMC5 transcripts of these MM subjects were assessed by whole-exome sequencing. Among the 511 MM subjects, 257 were Americans of European descent, and 254 were Mexican-Americans. The control populations of Non-Finnish Europeans and Ad Mixed Americans in the genome aggregation database (gnomAD) were used as reference controls. These controls were not selected for or against MM. The MM subjects’ age, alternate SNV position and protein mutation, alternate allele counts, allele numbers, and allele frequencies are shown in Table 1. The allele numbers varied for different SNVs in both MM subjects and controls due to differences in exon-capturing techniques and batch effects. Larger allele number variation occurred in the gnomAD controls because the data were compiled from multiple projects. The deleteriousness of the SNVs in ARMC5 of the MM subjects was calculated according to the Combined Annotation Dependent Depletion Phred score (C-score) [ 71 ]. Nine SNVs representing the top 5% of most deleterious ones (i.e., with C-score > 13.01) found in ARMC5 transcripts of MM subjects are listed in Table 1. These SNVs were all missense variants, and their positions are illustrated in Fig. 9 A. ARMC5 has several isoforms, all coded by the same ARMC5 gene. The 935-aa isoform (NP_001098717.1) (upper panel, Fig. 9 A) is the most abundant one and is present in almost all tissues [ 18 ]. The 1030-aa isoform (NP_001275696.1) (lower panel, Fig. 9 A) is the longest and has a 95-aa N-terminal region coded by two extra exons, but this isoform is only present in a few tissues at a relatively low level [ 18 ]. The longest ARMC5 isoform was used to number the SNV positions in Table 1 so that all the mutations in any isoform can be presented in the table.
Using the longest isoform for position numbering does not mean that these SNVs only exist in this longest isoform. In all likelihood, most of these SNVs, except for the three in the first 95-aa N-terminal region, are also in the most abundant 935-aa isoform. The positions of these mutants numbered according to the longest 1030-aa isoform are also translated to positions according to the 935-aa isoform in Fig. 9 A. Later, our functional validation of these mutants uses the 935-aa isoform-based numbering. These nine SNVs were rare ones (defined as having allele frequency < 0.01), four and five being in European-American and Mexican-American MM subjects, respectively. Two of the four variants in European-American MM subjects and four of the five variants in Mexican-American MM subjects were assigned as top 1% deleterious variants (C-scores > 20). Two variants (p.T12A rs979451735 and p.R429C rs539440145) in European-American MM subjects were not present in gnomAD Non-Finnish European controls and were considered novel SNVs. Their alternate allele counts were significantly higher than those of the Non-Finnish European controls ( p < 0.05). The p-value of the alternate allele count of one rare SNV found in Mexican-American MM subjects approached significance ( p = 0.083). Since all variants identified in the approximately 9-kb ARMC5 locus were in linkage disequilibrium, these p-values were not subjected to a multiple-testing penalty. The results from this genetic study confirm that ARMC5 mutations are significantly associated with NTD risk in humans. Our current genetic materials do not allow us to determine whether these SNVs in ARMC5 are de novo or inherited.

Human ARMC5 mutations compromise the activity of the ARMC5-containing E3

Having found nine deleterious ARMC5 SNVs in MM patients, we asked whether these SNVs affected the E3 activity. Three SNVs were located only at the N-terminus of the longest isoform. The most significant mutation, R334C (position based on the 935-aa isoform, or R429C based on the 1030-aa isoform), was in the 5th repeat of the ARM domain. One SNV was at the N-terminus before the ARM domain of the 935-aa isoform. Three SNVs (i.e., R406Q, G422S, and P559L) were scattered along the region between the ARM domain and the BTB domain. One SNV (R793Q) was found in the BTB domain (upper panel, Fig. 9 A). According to our previous deletion mutation studies [ 45 ], the ARM domain is critical for POLR2A binding. The region before the ARM domain (aa 1–141, based on the 935-aa isoform) and the region between the ARM domain and the BTB domain also contribute to POLR2A binding, but to a lesser extent [ 45 ]. We thus investigated how the mutations in these domains and regions affected ARMC5 binding to POLR2A and CUL3. Currently, we have no knowledge about the function or the possible binding partners of aa 1–96 of the N-terminal region (positions according to the 1030-aa isoform) of the 1030-aa ARMC5 isoform (lower panel, Fig. 9 A); it is therefore not feasible to study the binding of the three SNVs in this region with their unknown partners. When transfected into HEK293 cells, the ARMC5 R334C mutant co-precipitated significantly less POLR2A compared to WT ARMC5 (Fig. 9 B), indicating that this mutation hampered the interaction between ARMC5 and POLR2A. Similarly, ARMC5 with mutations at G422S, P559L, and R793Q showed reduced binding with POLR2A (Fig. 9 C).
However, ARMC5 mutations P33S and R406Q minimally affected POLR2A binding, suggesting that these residues are not essential for substrate recognition. CUL3 has a BTB-interacting domain, and it recruits a BTB domain-containing protein as its substrate recognition unit to form an active E3 [ 60 ]. The ARMC5 BTB domain is critical for the association with CUL3, according to our previous study [ 45 ]. The ARMC5 R793Q mutation in the BTB domain resulted in a decreased association with CUL3 (Fig. 9 D). The ARMC5 R315W mutation (position based on the 935-aa isoform) is significantly associated with primary bilateral macronodular adrenal hyperplasia (PBMAH) [ 45 ]. This mutation is also located in the 5th repeat of the ARM domain. We tested ARMC5 R315W for its binding with POLR2A using a different approach, i.e., immunoprecipitation followed by LC–MS/MS. The proteomics dataset is available in ProteomeXchange (accession number PXD047572) [ 72 ]. This mutant ARMC5 had an approximately fourfold lower binding capacity to POLR2A than WT ARMC5 (FDR < 0.05) (Fig. 9 E). It is interesting to note that in addition to POLR2A, the binding of the ARMC5 R315W mutant to other components of Pol II, such as POLR2B, POLR2C, and POLR2K, was also reduced (Fig. 9 E). This suggests the association of WT ARMC5 with these Pol II subunits (whether directly or via POLR2A is yet to be determined) and a reduced association due to the mutation. We then asked whether these ARMC5 mutations compromised the E3 function. HEK293 cells were transfected with WT or mutant ARMC5, along with HA-tagged ubiquitin. The ubiquitination of endogenous POLR2A was determined by anti-HA immunoprecipitation of de novo ubiquitinated proteins, followed by anti-POLR2A immunoblotting. As shown in Fig. 9 F, the mutations in ARMC5 at R334C, G422S, P559L, and R793Q all caused a significant reduction of POLR2A ubiquitination compared to that of WT ARMC5. Consistent with the results of the binding study, in which the ARMC5 P33S and R406Q mutations had no impact on POLR2A binding, these two mutations did not alter POLR2A ubiquitination (Fig. 9 F). These binding and ubiquitination studies demonstrate that four of the six variants tested are functional ones affecting the POLR2A-specific E3 activity, corroborating the implication of POLR2A in MM pathogenesis. The remaining two variants are likely false positives in our human MM genetic studies.
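The text above reports allele-count comparisons against gnomAD without naming the statistical test. A one-sided Fisher's exact test on a 2 × 2 allele-count table is a standard choice for such rare-variant comparisons, and a minimal sketch is shown below; the counts in the example are made up, and the real per-variant values are in Table 1.

```python
from scipy.stats import fisher_exact

def allele_count_test(alt_cases, an_cases, alt_controls, an_controls):
    """One-sided Fisher's exact test asking whether the alternate allele
    is enriched in MM subjects relative to a gnomAD reference population.
    alt_* are alternate allele counts; an_* are total allele numbers
    (2 x the number of genotyped individuals at that site)."""
    table = [
        [alt_cases, an_cases - alt_cases],
        [alt_controls, an_controls - alt_controls],
    ]
    _, p_value = fisher_exact(table, alternative="greater")
    return p_value

# Illustrative counts only: e.g., 2 alternate alleles among 2 x 257
# European-American case alleles versus 0 among a hypothetical 120,000
# control alleles.
print(allele_count_test(2, 514, 0, 120_000))
```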
Discussion

We reported here that Armc5 deletion in mice significantly augmented NTD risks. We discovered that ARMC5, as the substrate recognition subunit of a novel POLR2A-specific E3, was required for the degradation of almost all 12 Pol II subunits, and that its deletion caused a bona fide enlargement of the Pol II pool. ARMC5 KO only selectively influenced the transcription of 106 genes in NPCs, some of which, such as Folh1, are known to be involved in processes critical for neural tube development. Our human genetic study confirmed that ARMC5 mutations are modifiers of human NTD risks, and four variants were validated as functional ones.

ARMC5-CUL3-RBX1 is a novel POLR2A-specific E3 and is responsible for the degradation of most Pol II subunits

Several POLR2A-specific E3s in mammalian cells have been reported, such as Nedd4 [ 67 ], Wwp2 [ 38 ], VHL/ElonginBC/Cul2/RBX1 [ 41 , 68 ], and ElonginA/ElonginBC/Cul5/RBX2 [ 69 ]. These POLR2A-specific E3s have demonstrable activities after massive DNA damage and cellular stress caused by irradiation or chemicals, when there is an excessive need to remove stalled Pol II. The activity of pVHL-EloB/EloC-CUL2-RBX1 and WWP2 can be detected in the absence of exogenous DNA damage in cell lines [ 38 , 41 ] but has not been extended to tissues or organs. None of the previously identified POLR2A-specific E3s are involved in the degradation of the other Pol II subunits, with the exception of pVHL-EloB/EloC-CUL2-RBX1, which acts on POLR2G [ 44 ]. Theoretically, there must be an E3 (or E3s) in tissues and organs under physiological conditions, i.e., without artificially induced DNA damage or cellular stress, to degrade all the Pol II subunits. Such E3s are needed to remove misassembled Pol II, misfolded Pol II subunits, permanently paused Pol II (to resume transcription), or DNA-bound Pol II during DNA replication (to resolve collisions between the transcription and DNA replication machinery during the S phase). Our study has discovered a novel ARMC5-based E3 that is responsible for the degradation of almost all 12 subunits of Pol II under a physiological condition. It is to be noted that after ARMC5 deletion, the protein accumulation of one of the Pol II subunits, POLR2C, did not reach significance, and one subunit, Polr2h, presented a moderate but significant increase (about twofold) in its mRNA levels in MEFs. However, the tendency of the POLR2C protein increase was consistent, and a larger sample size will probably make the increase significant. On the other hand, the twofold mRNA increase seen in Polr2h is at best of marginal interest when analyzing RNA-seq or RT-qPCR data. Therefore, it is almost certain that this E3 is the major ubiquitin ligase that controls the degradation of all 12 Pol II subunits. Other POLR2A-specific E3s may be needed only when there are excessive demands, such as significant DNA damage or severe cellular stress. The effect of ARMC5 on the degradation of all 12 Pol II subunits can be achieved in two possible ways. First, these subunits might directly associate with ARMC5 and be additional substrates of this novel E3. While we cannot totally rule out this scenario, a more likely possibility is that the other 11 Pol II subunits are brought to the vicinity of this E3 via the interaction of ARMC5 and POLR2A. In support of this latter hypothesis, we previously demonstrated the direct interaction between ARMC5 and POLR2A by yeast 2-hybrid assay [ 19 ], while none of the other Pol II subunits were identified in that assay.
The result of both possibilities is the same: in the absence of ARMC5, this E3 is not functional, leading to compromised degradation of all Pol II subunits and, hence, a bona fide enlargement of the Pol II pool under physiological conditions. The effect of the E3 on all the Pol II subunits, particularly POLR2D and POLR2G, which are quite distant from POLR2A, raises the interesting possibility that this E3 might act not only on Pol II but also on other nearby components of the transcription machinery, such as the preinitiation complex (PIC) or the Mediator complex. Studies of these aspects are ongoing. According to ChIP-seq, the majority of genes with differential peak density (56 out of 59) had increased gene-bound POLR2A in KO NPCs. Although POLR2A was used as a surrogate marker of Pol II in the assay, the accumulation of all the Pol II subunits in the KO cells suggests that the increased POLR2A peak density represents an enlarged, functional, gene-associated Pol II pool. The accumulated Pol IIs were prominently located in the TSS region, raising an intriguing possibility that under physiological conditions, one arm of the dimeric E3 binds one Pol II complex via POLR2A while the other, empty arm of this dimeric E3 wraps around the Pol II complex, using its catalytic RBX1 core to ubiquitinate other Pol II subunits, PIC components, or Mediator subunits. The augmented Pol II peak density in some genes in the KO cells raises other interesting questions: does this E3 play a role in resolving the replication-transcription conflict under a physiological condition, and does its absence hamper the conflict resolution, leading to slower proliferation? Indeed, we observed that the different types of KO cells tested so far, such as NPCs in this study, T cells in our previous study [ 19 ], and MEFs (unpublished data), all have reduced proliferation. This possibility is under active investigation. It is prudent to state that this ARMC5-CUL3-RBX1 E3 might have other substrates beyond those in the transcription machinery, even in the cytosol, and that dysfunction of such substrates after Armc5 KO or mutation might contribute to some of the observed phenotypes in the KO mice and in patients with ARMC5 mutations. The other ARMC5-binding partners found by yeast 2-hybrid assay [ 19 ] and ARMC5 immunoprecipitation (Figs. 3 A and 9 E) are good candidates for future validation studies.

The effect of compromised Pol II degradation on the transcriptome

During mRNA transcription, if the transcription machinery encounters template DNA damage or cellular stress, Pol II will stall until the damage is repaired or the stress relieved [ 45 ]. It is believed that persistent Pol II stalling prevents transcription from resuming unless the stalled Pol II is degraded by proteasomes [ 28 , 29 , 73 ]. It follows that if POLR2A ubiquitination is compromised, there will be a general decrease in mRNA transcription. We assessed the transcriptome of KO NPCs by RNA-seq, but to our surprise, only 63 genes out of the 16,475 expressed genes in the KO NPCs showed reduced mRNA levels. We validated the RNA-seq results for selected genes by the nuclear run-on assay, which measures the transcription rate. The results were compatible with the steady-state mRNA levels, suggesting that the steady-state mRNA levels determined by RNA-seq largely reflected the transcription rates.
The pausing index, which is the ratio of Pol II density in the promoter region versus that in the gene body, is often used to gauge the transcription activity of a gene [ 74 , 75 ]. However, the pausing indices of the KO and WT NPCs had no significant difference (Fig. 8 G). These results collectively indicate no general decrease in the transcription rate in the KO cells despite the failed POLR2A degradation. These findings suggest two non-competing possibilities. It is possible that Pol II stalling is an insignificant event under a physiological condition; even without this dominant E3, some other Pol II-specific E3s are sufficient to remove the small amount of stalled Pol II. Consequently, transcription is not systemically compromised. Equally possible is that our current knowledge about the removal of stalled Pol II by proteasomes is based on experiments using cells with massive DNA damage [ 76 ] or on in vitro experiments [ 77 , 78 ]. Maybe ubiquitination and proteasome-mediated Pol II degradation are not needed to remove stalled Pol II at all, and there are other mechanisms to recycle it. Indeed, this is the case in yeasts [ 79 , 80 ]. A recent computer modeling study shows that Pol II can come off the damaged DNA template and be recycled instead of being degraded [ 30 ]. The end result of both scenarios is the same: the loss of the major Pol II-specific E3 does not cause generalized Pol II stalling under a physiological condition. One of the functions of this E3 is likely to control the Pol II pool size. How the Pol II pool size affects transcription and cellular function is a question infrequently visited. Intuitively, we would believe that since the same Pol II works on all genes, its pool size would universally affect all of them and probably increase their transcription. However, this is not the case. The accumulation of POLR2A due to Armc5 KO in NPCs only selectively influenced a limited number of genes (46 upregulated and 63 downregulated). How does Pol II pool size affect the expression of a subset of genes? First, it does not work alone but needs additional tissue-specific transcription factors to jointly modulate the transcription rate. Recently, Vidakovic et al. reported that an enlarged Pol II pool due to the POLR2A K1268R mutation results in the upregulation of more than 1600 genes in HEK293 cells but the downregulation of only a few hundred genes [ 30 ]. We similarly observed predominantly upregulated genes in the KO adrenal glands [ 45 ]. The abnormally expressed genes in these different cell types (NPCs (our current RNA-seq data), HEK293 cells [ 30 ], and adrenal glands [ 45 ]) showed vast differences. Likely, the enlarged Pol II pool and tissue-specific transcription factors jointly influence gene expression. The exact mechanism is unknown, but we can offer some speculation. For the upregulated genes, their promoters might contain more Pol II docking motifs, favoring more active transcription if the Pol II pool is enlarged. In support of this hypothesis, most ChIP-seq-significant genes had upregulated Pol II peak densities (56 out of 59 genes; Additional file 1: Table S3). Higher Pol II peak density was found in the promoter region of many upregulated genes (Fig. 8 E). It is to be noted that among the 56 genes with significantly increased POLR2A peak density, only five had increased mRNA levels according to RNA-seq in the KO NPCs. How do we explain the lack of significant overlap between genes with upregulated POLR2A peak density and genes with upregulated mRNA?
We conducted RT-qPCR on a selected group of genes with upregulated POLR2A peak density but without mRNA increase according to RNA-seq. As shown in Fig. 8 F, with a larger number of biological replicates ( n = 6 to 14), the mRNA upregulation became significant in the KO NPCs, indicating that these genes were false negatives in RNA-seq. The inadequate statistical power due to the small number of biological replicates ( n = 3) used in RNA-seq, combined with the small differences between the KO and WT cells, might be the reason for the false negatives. Conceivably, RT-qPCR validation with a larger number of biological replicates might find more overlap between the genes with higher Pol II peak density and the genes with upregulated mRNA. It is more difficult to understand why a larger Pol II pool causes the downregulation of some genes. It is possible that some of these downregulated genes are indirectly controlled by the larger Pol II pool via some upregulated ones or are regulated by other substrates of this E3.

ARMC5 mutations as an NTD risk modifier

In Armc5 KO mice, 26% of KO fetuses/mice suffered from defective neural tube development. Since NTD penetrance in the KO mice was not 100%, the Armc5 mutation alone is not sufficient to cause NTD. Our mice were in a C57BL/6 × CD1 background. We previously described the phenotype of Armc5 KO mice in the C57BL/6 background, in which NTD was not obvious [ 19 ]. Nakazawa et al. generated mice in the C57BL/6 background with a POLR2A K1268R knock-in [ 31 ], which caused defective POLR2A degradation and hence POLR2A accumulation. NTD was not reported in those knock-in mice either. We noticed that WT mice with a 50% CD1 genetic background were more prone to develop NTD, because 3.7% of the WT mice in the CD1 × C57BL/6 background manifested kinky tails, while 0% of the mice in the pure C57BL/6 background did so. This indicates that Armc5 is an NTD risk modifier, and its mutations need to interact with other genetic factors, such as those in the CD1 genetic background, to cause NTD. The KO mice presented a small body size, which reflects abnormal bone development throughout the body, among other things. Is the kinky tail part of such abnormal bone development? Small body size occurs in about 31% of the 1997 viable knockout strains surveyed [ 81 ], but the vast majority of the 620 strains of small KO mice have no kinky tail phenotype. In the case of ARMC5 KO mice in the C57BL/6 background, in spite of their small body size, they showed no kinky tails. This literature and the observations from our laboratory indicate that kinky tails are not caused by abnormal bone development but are rather a phenotype of NTD. The reduced proliferation of KO cells likely contributed to NTD development in mice. The compromised KO NPC proliferation and cell cycle progression were evident based on colorimetric assays and flow cytometry. However, immunofluorescence Ki-67 staining of e9.5 neural plates with exencephaly failed to detect a significant difference between the KO and WT tissues. This is likely due to different assay sensitivities: immunofluorescence assays (including Ki-67 staining) of tissue samples have much higher noise than colorimetric in vitro cell proliferation assays and flow cytometry. A failure to detect reduced proliferation in KO tissue sections does not mean such a reduction does not happen. In fact, the KO embryos, including their neural tubes, are much smaller than their WT counterparts.
This is clear evidence that all the KO tissues have reduced proliferation in vivo. Our human genetic study detected nine highly deleterious ARMC5 SNVs in MM patients. This finding confirms the relevance of our mouse data to human NTD. In humans, the four functionally validated ARMC5 SNVs caused reduced binding to POLR2A and reduced ubiquitination of the latter, suggesting a role for POLR2A as an effector downstream of ARMC5 in causing MM, although further experiments are required to validate this supposition. The enlarged Pol II pool probably dysregulates the actual culprit genes further downstream, such as Cdkn1a, Gadd45b, Mafa, and Pcdh8, which are critical for processes (e.g., proliferation and apoptosis) essential for proper neural tube development [ 82 – 86 ]. Our data clearly demonstrated that ARMC5 was critical for Pol II homeostasis. However, the cause-and-effect relationship between an ARMC5 deletion/mutation-induced enlarged Pol II pool and NTD pathogenesis remains to be established, and further investigation of the putative culprit effector genes is required. The NTD phenotypes of mice and humans were not identical. Several reasons can explain such differences. First, the null mutation in mice occurred in both alleles, and the KO mice had no functional ARMC5 protein. In contrast, in humans, the mutations were monoallelic, and some of them were point mutations; therefore, some functional ARMC5 protein was still present. Such quantitative differences may contribute to the different NTD phenotypes in mice and humans. Secondly, 60% of KO embryos/fetuses died between e13.5 and 3 weeks of age. The survivors were likely those with a milder phenotype. We have no information on how many premature deaths occurred in individuals with ARMC5 mutations (monoallelic or biallelic). Thirdly, humans and mice are two different species. Other genetic and environmental factors are needed to induce NTD, and these factors are quite different in mice and humans, causing different NTD phenotypes. As described in the Results section, even KO mice in a different genetic background (i.e., C57BL/6) did not have an NTD phenotype. Armc5 KO or mutation can affect genes in other organs and tissues whose function might be indirectly needed for proper neural tube development. Reduced FOLH1 expression in the KO intestine is a case in point. FOLH1 is a transmembrane glutamate carboxypeptidase [ 87 ]. It is well established that sufficient folate is required for proper neural tube development [ 8 ]. Folate needs to be absorbed as an essential nutrient from food [ 8 ]. Dietary folate deficiency and dysfunction of folate absorption and metabolism increase NTD risks [ 9 – 11 ]. Dietary folate exists in a polyglutamate form and needs to be digested by FOLH1 into monomers to be absorbed by the small intestine [ 88 ]. Homozygous Folh1 KO in mice was embryonically lethal, indicating the vital function of FOLH1 in development [ 89 , 90 ]. Several human studies showed that FOLH1 mutations are associated with low serum folate levels and increased NTD risks [ 87 , 91 , 92 ]. Thus, the reduced Folh1 expression in the KO intestine, either due to failed Pol II degradation or due to reduced function of mutant ARMC5 on other substrates, might contribute to the increased NTD risks. Fetuses obtain folate from their mothers via the placenta. Obviously, the dysregulated folate metabolism needs to occur in the mothers to cause NTD in the fetuses.
Therefore, the reduced expression of FOLH1 in the KO mice is not the direct cause of their own NTD. Rather, ARMC5 haploinsufficiency in the mothers, whether of the KO mice or of affected humans, is likely an NTD risk modifier. Additional experiments are needed to prove this hypothesis and to fully establish the cause-and-effect relationship between ARMC5 KO/mutation-induced low FOLH1 expression and NTD pathogenesis.
Conclusions

Our findings revealed that Armc5 KO or mutation was associated with NTD risks in mice and humans. ARMC5-CUL3-RBX1 was a novel dominant E3 controlling the degradation of all 12 Pol II subunits under physiological conditions. Armc5 KO or mutation caused an enlarged Pol II pool. A subset of genes was dysregulated due to ARMC5 mutations, either as a consequence of the enlarged Pol II pool or due to failed degradation of other substrates of this E3. Among this subset of genes, some, such as FOLH1 expressed in the intestine, might be effector genes culpable for the pathogenesis of NTD. Further investigation is needed to determine the cause-and-effect relationship between an enlarged Pol II pool and NTD and to validate the potential role of downregulated FOLH1 expression in NTD pathogenesis.
Background

Neural tube defects (NTDs) are caused by genetic and environmental factors. ARMC5 is part of a novel ubiquitin ligase specific for POLR2A, the largest subunit of RNA polymerase II (Pol II).

Results

We find that ARMC5 knockout mice have increased incidence of NTDs, such as spina bifida and exencephaly. Surprisingly, the absence of ARMC5 causes the accumulation of not only POLR2A but also most of the other 11 Pol II subunits, indicating that the degradation of the whole Pol II complex is compromised. The enlarged Pol II pool does not lead to generalized Pol II stalling or a generalized decrease in mRNA transcription. In neural progenitor cells, ARMC5 knockout only dysregulates 106 genes, some of which are known to be involved in neural tube development. FOLH1, critical in folate uptake and hence neural tube development, is downregulated in the knockout intestine. We also identify nine deleterious mutations in the ARMC5 gene in 511 patients with myelomeningocele, a severe form of spina bifida. These mutations impair the interaction between ARMC5 and Pol II and reduce Pol II ubiquitination.

Conclusions

Mutations in ARMC5 increase the risk of NTDs in mice and humans. ARMC5 is part of an E3 controlling the degradation of all 12 subunits of Pol II under physiological conditions. The Pol II pool size might have effects on NTD pathogenesis, and some of the effects might be via the downregulation of FOLH1. Additional mechanistic work is needed to establish the causal effect of the findings on NTD pathogenesis.

Supplementary Information

The online version contains supplementary material available at 10.1186/s13059-023-03147-w.
Supplementary Information
Acknowledgements

We thank Christian Poitras and the Proteomics team at the IRCM, coordinated by Denis Faubert, for their constant help and assistance. We also thank Professors Hamish S. Scott and David J. Torpy of the University of Adelaide for insightful discussions related to PBMAH.

Review history

The review history is available as Additional file 2.

Peer review information

Andrew Cosgrove was the primary editor of this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.

Authors' contributions

HL and JW conceived and designed the experiments. HL, LL, KSA, HN, DF, MSG, BC, WS, LG, MCBVF, and JP performed the experiments. HL, LL, HX, KSA, HN, MSG, BC, IB, and JW analyzed the data. HL, LL, KSA, BC, and JW wrote the manuscript.

Funding

This work was supported by the Jean-Louis Levesque Foundation to JW. It was also funded in part by grants from the Natural Sciences and Engineering Research Council of Canada (RGPIN-2017-04790), the Arthritis Society of Canada, the Canadian Institutes of Health Research (PJT-180284), and the Canadian Rare Disease Models and Mechanisms Network to JW, by a grant from NIH/NICHD (R01HD073434) to KSA, and by a development grant from the Ministère de l'Économie, de l'Innovation et de l'Énergie, Québec to BC.

Availability of data and materials

The mouse RNA-seq and ChIP-seq datasets have been deposited to the Gene Expression Omnibus of NCBI (accession numbers GSE169350 and GSE169582, respectively) [53, 70]. Microscopy images have been submitted to Figshare [46, 49, 54]. Mass spectrometry data are available via ProteomeXchange (accession numbers PXD047533 and PXD047572) [72]. Unique propagatable materials used in this study are available to qualified researchers upon request.

Declarations

Ethics approval and consent to participate

All the animal studies (Protocol IP20018JW) were approved by the Animal Protection Committee (Comité institutionnel d'intégration de la protection des animaux) of the CRCHUM. Human subjects were recruited with written consent to the research studies. The protocol (HSC-MS-00-001) for this human study was approved by the University of Texas Health Science Center at Houston Committee for the Protection of Human Subjects. The protocol complies with the Helsinki Declaration.

Competing interests

The authors declare that they have no competing interests.
CC BY
no
2024-01-16 23:45:34
Genome Biol. 2024 Jan 15; 25:19
oa_package/a0/27/PMC10789052.tar.gz
PMC10789053
38221607
Introduction

Cancer is a group of diseases characterized by the uncontrolled growth and spread of abnormal cells in the body. It is one of the leading causes of death worldwide, with millions of new cases and deaths each year [ 1 ]. Despite significant advances in cancer research and treatment over the years, the disease remains a major public health challenge and a substantial burden on patients, families, and society. Cancer research is crucial to develop new and effective treatments, improve patient outcomes, and find cures for this disease. As understanding of cancer biology and genetics continues to evolve, so do the approaches used to diagnose, treat, and prevent the disease. However, there is still much to learn about the complex mechanisms underlying cancer development and progression and the unique challenges posed by different types of cancers [ 2 ]. In addition, there is a need to develop more personalized and targeted therapies that can improve patient outcomes and minimize side effects. As such, cancer research must continue to innovate and advance to keep pace with the evolving understanding of the disease. This includes exploring new treatment modalities, developing more sophisticated diagnostic tools, and understanding the genetic and molecular mechanisms involved in cancer development and progression [ 3 ]. Two-dimensional (2D) cell culture is a commonly used technique to grow and maintain cells in the laboratory. It is used extensively in cancer research to study cells under controlled conditions: cells are grown on a flat surface in a nutrient-rich liquid medium that supports their growth and survival. The growth medium varies depending on the type of cancer being studied and the goals of the study. One of the most critical aspects of cell culture for cancer research is maintaining cell viability and function, as cancer cells are highly susceptible to environmental changes [ 4 ]. Another challenge facing cell culture for cancer research is the ability to accurately model the complexity of human tumors. Tumors are typically highly heterogeneous, comprising different cell types, including cancer, stromal, and immune cells. Understandably, 2D cell culture does not accurately mimic tumors’ three-dimensional (3D) environment [ 5 ]. The architecture and organization of cells in a 3D environment differ from those in a 2D environment, which can affect cell behavior and drug response. Recreating this complexity in a laboratory setting is difficult, as it requires the development of culture conditions that promote the growth and interaction of multiple cell types in a multifaceted environment [ 6 ]. Therefore, 3D cell culture models were developed; they offer sophisticated platforms that mirror the structural and functional complexities of in vivo tissues, providing valuable insights for cancer research and drug development. This review article highlights the key advantages of 3D cell cultures, the most common scaffold-based 3D culturing techniques, pertinent literature about applications in cancer research, and the challenges associated with these culturing techniques. Due to the topic’s vastness, this paper focuses on examining scaffold-based models of 3D cell cultures.
Conclusion

To conclude, scaffold-based 3D cell culture has emerged as a valuable tool in cancer research, providing a more physiologically relevant environment for studying tumor behavior, drug responses, and interactions between cancer cells and the surrounding microenvironment. Various scaffold materials, including polymers, decellularized tissue, hydrogels, and hybrids with microfluidics, have been explored to create complex and biomimetic 3D models. Polymer-based scaffolds offer tunable mechanical properties and are relatively easy to fabricate, making them versatile for 3D cell culture. The choice of polymers can influence cell behavior, proliferation, and migration, allowing researchers to study cancer progression and metastasis in a more realistic context. Additionally, incorporating bioactive molecules into polymer scaffolds can enable the controlled release of drugs and growth factors, facilitating drug screening and targeted therapy development. Furthermore, hydrogels offer high biocompatibility and can be functionalized with bioactive signals to direct cell behavior and tissue formation. In cancer research, hydrogels provide a platform to investigate the effect of mechanical cues on tumor growth, immune cell infiltration, and angiogenesis. Additionally, the ease of incorporating multiple cell types within hydrogels enables the study of tumor-stroma interactions. Likewise, decellularized tissue scaffolds retain native ECM composition, topography, and mechanical properties, closely mimicking the natural tumor microenvironment. As a result, cancer cells cultured in decellularized tissue scaffolds can exhibit more accurate tumor behaviors, including invasion and angiogenesis. Moreover, these scaffolds can be derived from patient-specific tissues, enabling personalized medicine approaches and improving the predictability of drug responses. Lastly, hybrid scaffolds that integrate microfluidic channels offer unique advantages for cancer research. By combining 3D cell culture with microfluidics, researchers can study tumor angiogenesis, metastasis, and drug penetration in a more physiologically relevant manner. Furthermore, microfluidics can facilitate high-throughput screening of anticancer drugs, enabling rapid and cost-effective testing of potential therapies.
Three-dimensional (3D) cell cultures have emerged as valuable tools in cancer research, offering significant advantages over traditional two-dimensional (2D) cell culture systems. In 3D cell cultures, cancer cells are grown in an environment that more closely mimics the 3D architecture and complexity of in vivo tumors. This approach has revolutionized cancer research by providing a more accurate representation of the tumor microenvironment (TME) and enabling the study of tumor behavior and response to therapies in a more physiologically relevant context. One of the key benefits of 3D cell culture in cancer research is the ability to recapitulate the complex interactions between cancer cells and their surrounding stroma. Tumors consist not only of cancer cells but also various other cell types, including stromal cells, immune cells, and blood vessels. These models bridge traditional 2D cell cultures and animal models, offering a cost-effective, scalable, and ethical alternative for preclinical research. As the field advances, 3D cell cultures are poised to play a pivotal role in understanding cancer biology and accelerating the development of effective anticancer therapies. This review article highlights the key advantages of 3D cell cultures, progress in the most common scaffold-based culturing techniques, pertinent literature on their applications in cancer research, and the ongoing challenges.
Physiological relevance of 3D cell cultures to the ECM

Tumors are complex structures composed of cancer cells, non-cancerous cells (i.e., immune cells, fibroblasts, endothelial cells, etc.), and various extracellular matrix (ECM) components. The ECM plays a crucial role in tumor progression, metastasis, and response to therapy, contributing to the hallmarks of cancer [ 7 , 8 ]. The ECM can (1) secrete growth factors and cytokines that promote cell proliferation and survival [ 9 ], (2) modulate the expression of genes involved in cell cycle regulation and apoptosis [ 10 ], (3) control the expression of telomerase, an enzyme that extends the telomeres of chromosomes, (4) secrete angiogenic factors that promote the formation of new blood vessels, thereby providing the tumor with the nutrients and oxygen it needs to grow [ 11 ], (5) promote the epithelial-to-mesenchymal transition (EMT), a process by which epithelial cells acquire the ability to migrate and invade other tissues [ 12 ], and (6) temper the immune response by influencing the recruitment and function of immune cells in the TME [ 13 ]. Romero-López and colleagues [ 14 ] used reconstituted ECM to test how matrices derived from normal and tumor tissues affect blood vessels and tumor growth. Tumor tissue obtained from liver metastases of colon tumors was subjected to hematoxylin and eosin (H&E) staining to confirm the successful decellularization of both colon and tumor tissues. The reconstituted matrices differed significantly in protein composition and stiffness, leading to notable variations in vascular network formation and tumor growth both in vitro and in vivo. Fluorescence lifetime imaging microscopy was employed to evaluate the free/bound ratios of the nicotinamide adenine dinucleotide (NADH) cofactor in tumor and endothelial cells as an indicator of cellular metabolic state. Notably, cells seeded in tumor ECM exhibited elevated levels of free NADH, indicating an increased glycolytic rate compared to those seeded in normal ECM. These findings underscore the substantial influence of the ECM on cancer cell growth and the accompanying vasculature (e.g., increased vessel length, increased vascular heterogeneity). Alterations in the composition of tumor ECM, such as augmented deposition and crosslinking of collagen fibers, can be attributed to communication between tumor cells and tumor-associated stromal cells. Every tissue type has a distinct ECM composition, topology, and organization [ 15 ]. These factors play a significant role in controlling cell function, behavior, and interactions with the microenvironment, as they generate spatial gradients of biochemicals and metabolites that, in turn, may elicit distinctive cell-mediated responses (e.g., differentiation, migration) [ 16 ]. Langhans [ 17 ] analyzed the chemical components of the ECM and reported that it contains water, carbohydrates, and proteins, such as fibrous matrix proteins, glycoproteins, proteoglycans, glycosaminoglycans, growth factors, protease inhibitors, and proteolytic enzymes. Thus, ECM organization can influence cell genotypes and phenotypes, effects that can be explored through 3D cell cultures [ 16 , 18 ].
For example, variations in the gene and protein expression and activity of the epidermal growth factor receptor (EGFR), phosphorylated protein kinase B (phospho-AKT), and p42/44 mitogen-activated protein kinases (phospho-MAPK) in colorectal cancer cell lines (e.g., HT-29, CACO-2, DLD-1) affected the genotype and phenotype of cells in 3D cultures, as compared to 2D monolayers [ 19 , 20 ]. Moreover, the ECM can influence cell morphology and the expression of chemokine receptors. Kiss et al. [ 21 ] showed that 3D cultured prostate cancer cells (e.g., LNCaP, PC3) exhibited a high level of interaction between the cells and the ECM, which resulted in the upregulation of the CXCR7 and CXCR4 chemokine receptors. While 2D cell culture has been the mainstay of laboratory cancer research, it has become increasingly clear that this approach is inadequate in replicating the in vivo conditions that cells experience in the human body. As a result, researchers have been turning to 3D cell culture as a more physiologically relevant model for studying cellular processes and disease. A key advantage of 3D models for cancer research is that they can better mimic the complex microenvironment of tumors, including tumor morphology and topography, upregulation of pro-angiogenic proteins, dispersion of biological and chemical factors, cell–cell and cell–matrix interactions, gradients of oxygen and nutrients, and a more realistic ECM composition [ 6 , 22 , 23 ]. Necrotic, hypoxic, quiescent, apoptotic, and proliferative cells are often found in spheroid cell clusters at different phases of development [ 24 ]. Since the outer layer of the spheroid has greater exposure to the nutrient-supported medium, it contains a higher number of proliferating cells. Cells in the spheroid core are hypoxic and often quiescent as they receive less oxygen, growth factors, and nutrients from the medium. This results in more physiologically relevant gradients in tissue composition that can better inform drug discovery and development [ 24 ]. Furthermore, 3D cell culture more accurately depicts the cellular response to drugs and other therapeutic agents. Such a model’s spatial and physical characteristics influence the transmission of signals between cells, which alters gene expression and cell behavior [ 25 ]. Loessner et al. [ 26 ] demonstrated a flexible 3D culture method where a synthetic hydrogel matrix with crucial biomimetic properties provided a system for studying cell–matrix dynamics related to tumorigenesis. The 3D cultured cells overexpressed mRNA for receptors on their surface (e.g., protease, α3, α5, β1 integrins) compared to 2D cultured cells. Moreover, spheroid progression and proliferation depended on the cells’ ability to proteolytically remodel their ECM and on cell-integrin interactions. Consequently, the 3D spheroids showed higher survival rates than 2D monolayers after exposure to the chemotherapeutic agent paclitaxel, indicating that the 3D model better simulates in vivo chemosensitivity and pathophysiological events. Table 1 below summarizes studies using different 3D models to investigate different types of cancers. Figure 1 summarizes the main characteristics of 2D and 3D cell cultures. The shift to 3D cell culture is a significant advancement in laboratory research, as it provides a more physiologically relevant model for studying cellular processes and disease. While some challenges remain to be addressed, the advantages of 3D culture outweigh the limitations of 2D culture.
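The rim-versus-core zonation described above has a simple geometric consequence that is easy to underestimate: because volume scales with the cube of the radius, a thin oxygen-replete rim can hold most of a small spheroid's cells, while large spheroids become core-dominated. The sketch below quantifies this, assuming a nominal 100 µm well-supplied rim (a commonly cited diffusion distance); the rim thickness is an illustrative assumption, not a measured value.

```python
def rim_volume_fraction(radius_um: float, rim_um: float = 100.0) -> float:
    """Fraction of a spheroid's volume lying within an outer rim of thickness rim_um."""
    inner = max(radius_um - rim_um, 0.0) / radius_um
    return 1.0 - inner ** 3  # volume scales with radius cubed

for radius in (100, 250, 500):  # spheroid radii in micrometers
    frac = rim_volume_fraction(radius)
    print(f"R = {radius} um -> {100 * frac:.0f}% of volume in the proliferating rim")
```

This is one reason spheroid size is a critical, and often underreported, experimental variable in 3D drug-response studies.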
As technology continues to evolve, 3D culture is likely to become an increasingly crucial tool in cancer research and other fields of biomedical science. Table 2 below provides a comprehensive overview of 2D, 3D, and other model systems employed in cancer research. Besides 2D and 3D cell cultures, tissues and organs present structural and functional intricacies, capturing organ-specific responses but posing challenges in maintenance and accessibility. Furthermore, animal models mimic in vivo systemic responses, yet ethical concerns, high costs, and species differences limit their utility. While clinically relevant, patient-derived samples present challenges in experimental control and sample heterogeneity [ 43 ]. It is noteworthy to distinguish between spheroids and organoids, two commonly used terms within the scope of 3D cell cultures that denote different models cultured with different techniques [ 44 ]. Organoids, characterized by intricate structures replicating real organs or tissues, are composed of multiple cell types that self-organize to mirror tissue-like architecture, deriving from stem cells or tissue-specific progenitors. Due to their high biological relevance, they find applications in disease modeling, drug testing, and understanding organ development. Beyond organoids, tumoroids (i.e., tumor-like organoids), derived from patient cancer tissues containing tumor and stroma cells of the TME, are becoming advanced 3D culture platforms for personalized drug evaluation and development. In contrast, spheroids are simpler spherical cellular aggregates lacking the distinct organ-like structures of organoids. Comprising one or multiple cell types, spheroids are used to study fundamental cellular behaviors and drug responses in a 3D environment. While both contribute to 3D cell culture studies, organoids closely resemble real organs compared to the simpler cellular aggregates represented by spheroids [ 44 ]. Patient-derived models are valuable tools that aim to replicate the complexities of human tumors, providing insights into disease mechanisms, therapeutic responses, and personalized treatment strategies. These approaches encompass patient-derived xenografts (PDX), organoids and 3D cultures, patient-derived cell lines, liquid biopsies, and clinical trials [ 45 ]. Cell sources and 3D culture heterogeneity In 3D cell culture, achieving an optimal balance between homogeneity and heterogeneity is intricately linked to the cellular source, whether stem cells, induced pluripotent stem cells (iPSCs), or mixed primary cells derived from tissues [ 46 ]. Stem cells used in in vitro culture encompass embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), and adult or somatic stem cells. Embryonic stem cells exhibit high pluripotency, capable of differentiating into any cell type, but their use raises ethical concerns due to their origin from embryos. iPSCs are generated from somatic cells (e.g., skin or blood cells) through reprogramming, reverting them to an embryonic-like pluripotent state, but face challenges of reprogramming efficiency and potential tumorigenicity. This transformation creates an extensive and diverse reservoir of human cells, capable of developing into any cell type required for therapeutic applications. Human induced pluripotent stem cells (hiPSCs) are particularly relevant in cancer research (Table 3 ) [ 47 ].
Thus, the reprogramming process pioneered by Shinya Yamanaka has opened new avenues for advancing cancer biology, drug discovery, and regenerative medicine in cancer treatment. Lastly, adult or somatic stem cells are tissue-specific, mirroring the characteristics of their origin, and present fewer ethical concerns as they are derived from adult tissues. However, they have limited differentiation potential and a finite lifespan in culture. The selection of the cell source significantly influences the composition and behavior of the 3D culture. Stem cells and iPSCs, known for their pluripotency, introduce an inherent heterogeneity due to their ability to differentiate into various cell types [ 45 , 46 ]. Furthermore, primary cells, derived directly from living organisms, possess unique characteristics that make them invaluable for in vitro studies. Maintaining biological relevance, these cells closely mimic the tissue or organ from which they are isolated, reflecting the intricacies of in vivo conditions. With donor-specific variability, primary cells allow researchers to explore the impact of genetic diversity on cell behavior, disease susceptibility, and drug responses. Retaining tissue-specific functions, differentiated primary cells are crucial for studying specific physiological processes and diseases associated with particular tissues [ 46 ]. However, these cells present challenges, including a limited lifespan and sensitivity to culture conditions. Their finite replicative capacity and sensitivity contribute to the heterogeneity observed in 3D cell cultures, emphasizing the importance of carefully considering culture conditions and donor-specific variations to accurately represent in vivo scenarios. Despite these challenges, primary cells are vital in advancing our understanding of cell biology, disease mechanisms, and therapeutic development. Similarly, using mixed primary cells derived from tissues can contribute to a more heterogeneous cellular composition, resembling the complexity found in native tissues. Striking the right balance is crucial, as an excessive degree of heterogeneity may obscure specific responses, while excessive homogeneity might oversimplify the representation of the tissue microenvironment. Therefore, a nuanced understanding of the cellular source is essential for tailoring 3D cell culture models to accurately reflect the intricacies of actual tissues and organs. Scaffold-based techniques for 3D cell culture As explained above, developing 3D cell culture techniques that more accurately model the TME is a major area of focus in cancer research [ 6 , 59 ]. Different approaches for 3D cell cultures exist and can be generally divided into scaffold-based and scaffold-free methods. Scaffold-free 3D cell culture refers to a technique in which cells are cultured and assemble into 3D structures without external scaffold material. Instead of being embedded within a supportive matrix, the cells self-assemble and interact with neighboring cells to form 3D tissue-like structures. Such cultures allow for more accurate cell–cell interactions, spatial organization, and physiological responses, making them valuable tools for various applications, including drug testing. They also usually have higher cell densities than scaffold-based models, which can influence cellular behavior, gene expression, and cellular functions. Lastly, scaffold-free models offer versatility and customizability in terms of cell types, culture conditions, and experimental designs.
However, it is essential to consider that scaffold-free approaches might have limitations in providing mechanical support, shape control, and reproducibility compared to scaffold-based 3D cell culture methods [ 60 ]. As such, researchers often select the appropriate 3D cell culture method based on their specific research goals and the tissue or organ system they aim to model or engineer. Due to the topic’s vastness, this review’s purview is limited to the examination of scaffold-based models of 3D cell cultures. Scaffolds are essential components in 3D cell culture systems, as they provide a 3D environment for cells to grow and interact with each other and their surroundings [ 61 , 62 ]. Biomaterials employed in such models can be categorized into the following primary groups: polymer scaffolds, hydrogels, decellularized tissue scaffolds, and hybrid scaffolds (e.g., incorporating microfluidic devices). Tables 4 and 5 summarize the advantages and limitations of commonly used scaffold-free and scaffold-based 3D cell culture techniques, respectively. Polymer-based scaffolds Polymer scaffolds provide a biomimetic environment that imitates the natural ECM and fosters cell proliferation and differentiation. These scaffolds offer a versatile platform for studying complex cell behaviors and hold considerable promise in cancer research applications. They can be generally classified as naturally derived or synthetic (see Fig. 2 ). Natural polymer scaffolds are made from naturally occurring polymers. They can be processed into various forms, including fibers, films, or porous structures, and can be further classified into two main categories: protein-based and polysaccharide-based scaffolds. Protein-based scaffolds are derived from large molecules composed of amino acids (e.g., collagen, silk, gelatin, fibronectin [ 93 , 94 ]). Due to their bioactive properties, these scaffolds provide cell adhesion sites and can regulate cell behavior and tissue development. A 3D cell culture platform using collagen scaffolds was developed to investigate the tumorigenicity of cancer stem cells (CSCs) in breast cancer [ 95 ]. The study revealed that the 3D cell culture system demonstrated increased expression of pro-angiogenic growth factors, indicating a potential role in promoting blood vessel formation. Moreover, the overexpression of CSC markers such as OCT4A and SOX2, as well as breast cancer stem cell markers including SOX4 and JAG1, was observed in the 3D scaffolds, suggesting that the 3D model successfully replicated the molecular characteristics associated with CSCs. In terms of behavior, CSCs in the 3D model closely resembled those in an in vivo model, indicating the platform’s effectiveness in capturing the tumorigenic properties of CSCs. The collagen scaffold-based 3D cell culture platform thus provided a valuable tool for studying CSC tumorigenicity in breast cancer. Another study by McGrath et al. [ 96 ] used a 3D collagen matrix (GELFOAMTM) to create an endosteal bone niche (EN) model, referred to as 3D-EN, for studying breast cancer cells’ quiescence and dormancy behaviors.
The 3D-EN model effectively facilitated the identification of several genes associated with dormancy-reactivation processes, where among the tested cell lines, only MDA-MB-231 cells exhibited dormancy behavior, suggesting that they have a propensity for entering a dormant state in the simulated physiological conditions. On the other hand, polysaccharide-based scaffolds are composed of long chains of sugar molecules (e.g., chitosan and hyaluronic acid). They are biocompatible, biodegradable, and can often be modified to adjust their physical and biological properties. Arya et al. [ 97 ] developed a 3D cell culture model using a chitosan–gelatin scaffold (chitosan being a natural polymer derived from chitin) to study breast cancer behavior. The scaffold was cross-linked with genipin, a natural cross-linker, to enhance its stability. The study found that the chitosan–gelatin (GC) scaffold provided a suitable environment for the growth of MCF-7 breast cancer cells, with the cells showing good adhesion and proliferation. The scaffold also supported the formation of cell clusters, which are more representative of in vivo tumor conditions compared to 2D cultures. The study concluded that the GC scaffold could be useful for studying breast cancer in vitro, providing a more physiologically relevant model than traditional 2D cultures. GC scaffolds have been shown to support the formation of tumoroids that mimic tumors grown in vivo, making them an improved in vitro tumor model. These scaffolds have been successfully used to study lung cancer, as well as other types of cancer, such as breast, cervix, and bone [ 98 ]. These scaffolds have demonstrated gene-expression profiles similar to tumors grown in vivo, indicating their potential for studying cancer progression and drug screening for solid tumors [ 99 ]. The GC scaffolds have also been shown to improve the predictivity of preclinical studies and enhance the clinical translation of therapies [ 100 ]. Overall, the GC scaffolds provide a valuable tool for studying tumor development and evaluating the efficacy of anti-cancer drugs in an in vitro setting. Synthetic polymer scaffolds (e.g., polylactic acid (PLA), polyglycolic acid (PGA), and polycaprolactone (PCL)) can be tailored to have specific mechanical and biochemical properties. However, they can be less biocompatible than natural polymers and may require surface modifications to promote cell attachment and growth [ 60 ]. Palomeras et al. [ 101 ] tested the efficiency of 3D-printed PCL scaffolds for the culture of MCF7 breast cancer cells. The researchers found that the scaffold’s design, specifically the deposition angle, significantly influenced cell attachment and growth. Scaffolds with a deposition angle of 60° showed the highest cell counts after treatment with trypsin. Furthermore, the study found that 3D culture in PCL scaffolds enriched the cancer stem cell (CSC) population compared to a 2D culture control, increasing the Mammosphere Forming Index (MFI). The study concluded that 3D PCL scaffold culture could spur MCF7 cells to generate a cell population with CSC properties, suggesting its potential for studying CSC properties and screening new therapeutic agents targeting CSC populations. These efforts highlight the potential of both natural and synthetic polymer scaffolds in creating more physiologically relevant 3D cell culture models for cancer research.
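The Mammosphere Forming Index (MFI) mentioned above is, in its common usage, simply the percentage of seeded single cells that give rise to mammospheres above a size threshold; exact thresholds and seeding densities vary by protocol. A minimal sketch, assuming this standard definition and using hypothetical counts:

```python
def mammosphere_forming_index(n_spheres: int, n_cells_seeded: int) -> float:
    """MFI (%) = mammospheres formed per single cells seeded x 100."""
    return 100.0 * n_spheres / n_cells_seeded

# Hypothetical counts comparing 2D-expanded cells with 3D-PCL-scaffold-expanded cells
for label, spheres in (("2D-derived", 12), ("3D-PCL-derived", 31)):
    print(f"{label}: MFI = {mammosphere_forming_index(spheres, 1000):.1f}%")
```

An increased MFI after 3D culture, as reported by Palomeras et al., is read as enrichment of the stem-like compartment.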
Using such polymer scaffolds can enhance the understanding of cancer cell behavior and potentially lead to the discovery of more effective therapeutic strategies. Similarly, Rijal et al. [ 88 ] utilized modified gas foaming-based synthetic polymer scaffolds from poly(lactic-co-glycolic) acid (PLGA) and PCL for conducting 3D tissue cultures and animal models in breast cancer research. The research group investigated the response of MDA-MB-231 cells to anticancer drugs, their viability, morphology, proliferation, receptor expression, and ability to develop in vivo tumors using the 3D scaffolds. MDA-MB-231 cells were cultured on PLGA-coated 2D microscopic glass slides and in 3D-porous PLGA scaffolds to examine cancer cells’ survival on the polymeric substrata. The number of dead cells detected on the PLGA-coated glass slides and PLGA 3D scaffolds was negligible on day 1. However, a significant increase in the number of dead cells was observed on the PLGA-coated glass slides compared to the 3D scaffolds on day 14. Additionally, the expression of ECM proteins and cell surface receptors on the synthetic polymers was investigated, where strong staining signals of type I collagen and integrin α2 were detected in both culture formats using immunofluorescence (IF) microscopy. It is worth noting that integrin α2β1, which acts as a primary receptor for type I collagen, displayed a basal expression level in the 3D model. Since high levels of this integrin receptor tend to inhibit cancer cell migration, such a basal expression pattern may promote breast cancer cell migration and tumor growth. Notably, integrin α2 receptors showed a prominent colocalization with type I collagen, particularly around the cell edges, suggesting local deposition of type I collagen and subsequent binding of integrin α2 receptors, facilitating cell attachment and migration. Lastly, to evaluate the tumor formation capabilities of the polymeric porous scaffolds in mice, MDA-MB-231 cells were coated onto porous PLGA scaffolds and implanted into the mammary fat pads of NOD/SCID mice. Blank scaffolds without cells served as the negative control. As anticipated, the proliferation biomarker Ki-67 was not detected in the blank scaffold implants, while its expression was significantly high within the tumors derived from the MDA-MB-231 cell-laden PLGA scaffolds. This finding suggested that the cancer cell population within the scaffolds exhibited rapid proliferation when embedded in the native breast tissues. Hydrogel scaffolds Hydrogels are 3D networks of hydrophilic polymers (natural, synthetic, or hybrid) that can absorb large amounts of water or biological fluids while maintaining their structural integrity [ 102 ]. Figure 3 shows common techniques for culturing with hydrogel scaffolds. In the dome technique (see Fig. 3 A), cells are mixed with temperature-sensitive hydrogels and then seeded as droplets within a cell culture vessel. This technique relies on careful temperature control to allow the hydrogel to polymerize and form a dome structure. Once the hydrogel has polymerized and the cell-hydrogel droplet is stabilized, it is delicately covered with cell culture media. This allows for a localized 3D cell culture in a larger vessel and can create multiple individual cell clusters or spheroids in a single plate. However, the maintenance of dome integrity can be challenging over time and might be affected by changes in temperature or physical disturbance.
Also, it may not be suitable for long-term culture or cells requiring complex structural support due to the relatively simple and isolated 3D structure. Figure 3 B illustrates the insert wells technique, which consists of porous inserts to hold the cell-hydrogel mixture while cell culture media is added to the well surrounding the insert. This separation creates a differential environment, allowing for nutrient exchange while maintaining a distinct 3D culture within the insert. Heterogeneous spheroids will eventually form on the insert bottom due to gravitational pull and cell–cell interactions. Such a model can be used to study cell invasion or migration by placing the cell-hydrogel mixture on one side of a permeable membrane and chemo-attractants on the other. The gel-bottom support method (see Fig. 3 C) involves creating a thick layer of hydrogel at the bottom of a culture well, on top of which the cell suspension is placed. For instance, this method can be used for embedding cells within macroporous hydrogel scaffolds, such as AlgiMatrix ® (Thermo Fisher Scientific/Life Technologies, Carlsbad, USA), an ionically gelled and dried scaffold that is conveniently provided in sterile pre-loaded disc format in standard cell culture well plates [ 103 , 104 ]. To initiate the cell culture, a concentrated cell suspension in culture media is seeded on top of the hydrogel, where it is subsequently absorbed, resulting in the entrapment of the cells within the porous structure of the hydrogel. Lastly, in the embedding technique (see Fig. 3 D), the cells are mixed with a hydrogel and directly placed at the bottom of a culture vessel, followed by a layer of culture media, allowing the cells to grow within the matrix of the hydrogel, thereby more accurately mimicking the in vivo 3D environment. This technique is beneficial for studying cell–cell and cell–matrix interactions, invasion, migration, and drug responses. However, it can be technically difficult to embed cells evenly throughout the hydrogel, and retrieving cells from the matrix for downstream analysis can be challenging. Moreover, the permeability of the hydrogel to nutrients, gases, and wastes may need careful optimization to avoid creating a hypoxic environment or nutrient deprivation for cells located in the interior of the gel. Each of these methods must be selected based on the needs of the specific experiment and the type of cells being cultured. Additionally, the hydrogel composition and mechanical properties should be tuned according to the native ECM properties of the cell type of interest. Due to their adjustable properties, synthetic hydrogels offer notable benefits in 3D cell culture. The RADA16-I peptide is a self-assembling peptide derived from a segment of Zuotin, a left-handed Z-DNA-binding protein originally discovered in yeast. This peptide has emerged as a novel nano-biomaterial due to its ability to form nanofiber scaffolds. Consequently, these scaffolds provide a supportive framework that promotes cell growth and fosters a conducive 3D milieu for cell culture. The peptide sequence can be modified to incorporate specific functional groups, thus fine-tuning the mechanical, chemical, and biological attributes of the resultant scaffold. This flexibility enables customization to align with the unique demands of the cultured cells or the intended experimental objectives.
The resulting nanofibers, which are about 10 nm in diameter, self-assemble through complementary ionic interactions between positively and negatively charged residues. When dissolved in water, the RADA16-I peptide forms a stable hydrogel (nanofiber networks with pore sizes of about 5–200 nm) with extremely high water content at concentrations of 1–5 mg/mL, which closely mimics the porosity and gross structure of ECMs, making it suitable for the fabrication of artificial cell niches for applications in tumor biology. Yang and Zhao [ 105 ] prepared a RADA16-I peptide hydrogel that provided an elaborate 3D microenvironment for ovarian cancer cells in response to the surrounding topography. The 3D cell cultures exhibited a two- to five-fold increase in drug resistance (paclitaxel, curcumin, and fluorouracil) compared to the 2D monolayers, providing a good representation of the primary tumor and likely simulating the in vivo biological characteristics of ovarian cancer cells. Similarly, Song et al. [ 106 ] also proved that RADA16-I hydrogels can provide prominent and dynamic nanofiber frameworks to sustain robust cell growth and vitality. HO-8910PM cells, metastatic human ovarian cancer cells, were cultured in three hydrogel biomaterials, namely RADA16-I hydrogel, Matrigel, and collagen I. The specially designed RADA16-I peptide exhibited a well-defined nanofiber network structure within the hydrogel, providing a nanofiber-based cellular microenvironment similar to Matrigel and collagen I. Notably, the HO-8910PM cells exhibited distinctive growth patterns within the three matrices, including cell aggregates, colonies, clusters, strips, and multicellular tumor spheroids (MCTS). Moreover, the molecular expression of integrin β1, E-cadherin, and N-cadherin in 3D-cultured MCTS of HO-8910PM cells was elevated, and their chemosensitivity to cisplatin and paclitaxel was reduced in comparison to the 2D cell culture, as evidenced by IC50 values and inhibition rates. Furthermore, polyvalent hyaluronic acid (HA) hydrogels are considered synthetic, as they are typically created through chemical modification of HA molecules, introducing crosslinking agents or functional groups that enable the formation of a gel-like structure. This modification allows for control over the physical and mechanical properties of the hydrogel, such as its stiffness, degradation rate, and bioactivity. Suo et al. [ 107 ] engineered an ECM-mimicking hydrogel scaffold to replicate the native breast cancer microenvironment and provide an effective in vitro model for studying breast cancer progression. HA hydrogels from polyvalent HA derivatives were prepared through an innovative dual crosslinking process involving hydrazone and photo-crosslinking reactions. Hydrazone crosslinking is a versatile, reversible process that allows for rapid gelation, while photo-crosslinking stabilizes the formed hydrogel. Using this approach, they could efficiently produce HA hydrogels in under 120 s. It was found that the developed HA hydrogels closely resembled the topography and mechanical properties of breast tumors, and their characteristics (i.e., rigidity and porosity) could be fine-tuned by adjusting the amount of aldehyde-HA in the hydrogel formulation. This ability to modulate the mechanical properties of the hydrogels opens up possibilities for modeling different stages of tumor progression or different types of tumors.
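Several of the studies above summarize chemosensitivity shifts between 2D and 3D cultures as IC50 fold-changes derived from dose-response curves. The sketch below shows how such fold-changes are typically obtained by fitting a four-parameter Hill model to viability data; all concentrations and viability values here are invented for illustration and do not come from the cited studies.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, top, bottom, ic50, slope):
    """Four-parameter dose-response: viability (%) as a function of drug concentration."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** slope)

conc = np.logspace(-3, 2, 8)  # drug concentration, uM (hypothetical dilution series)
viability = {
    "2D monolayer": hill(conc, 100, 5, 0.5, 1.2),   # invented data
    "3D culture":   hill(conc, 100, 20, 2.5, 1.0),  # invented data
}

ic50s = {}
for label, v in viability.items():
    popt, _ = curve_fit(hill, conc, v, p0=[100, 0, 1.0, 1.0], maxfev=10000)
    ic50s[label] = popt[2]
    print(f"{label}: IC50 = {popt[2]:.2f} uM")

print(f"fold resistance (3D/2D): {ic50s['3D culture'] / ic50s['2D monolayer']:.1f}x")
```

In practice the fit is performed on replicate, noisy measurements, and the reported fold-shift (here five-fold on invented data) is what statements such as "a two- to five-fold increase in drug resistance" condense.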
Returning to the HA hydrogels developed by Suo et al., a critical feature was their dual-responsive degradation behavior, which was responsive to both glutathione and hyaluronidase. The glutathione responsiveness allows for degradation in response to the redox environment, which is often disturbed in cancer cells. Meanwhile, responsiveness to hyaluronidase makes the hydrogels sensitive to an enzyme that is typically upregulated in invasive cancer cells. Significantly, the HA hydrogel-cultured MCF-7 cells displayed upregulated expression of vascular endothelial growth factor (VEGF), interleukin-8 (IL-8), and basic fibroblast growth factor (bFGF) compared to their 2D cultured counterparts. These molecules are key mediators of angiogenesis and inflammation in cancer, suggesting that the HA hydrogel environment better replicates the conditions that promote these processes in tumors. In addition, the hydrogel-cultured cells exhibited enhanced migration and invasion abilities, which are key hallmarks of aggressive cancer cells. In vivo studies supported these results and confirmed the superior tumorigenic capacity of the MCF-7 cells cultured in HA hydrogels compared to those cultured in 2D. The outcomes of this research are anticipated to have far-reaching implications for both the in vitro study of breast cancer and the development of effective therapeutic strategies. Another investigation by Wang et al. [ 108 ] showed that the level of methacrylation significantly influenced the hydrogel’s microstructure, mechanical characteristics, and capacity for liquid absorption and degradation. The refined hydrogel, synthesized through the photocrosslinking of methacrylated HA, displayed a highly porous structure, a high equilibrium swelling ratio, appropriate mechanical properties, and a degradation process responsive to hyaluronidase. It was found that the HA hydrogel promoted the growth and proliferation of MCF-7 cells, which formed aggregates within the hydrogel. In addition, 3D-cultured MCF-7 cells showed an increased expression of VEGF, bFGF, and IL-8, and enhanced invasion and tumorigenesis capabilities compared to their 2D-cultured counterparts. As such, the HA hydrogel has proven to be a dependable alternative for constructing tumor models. Gelatin methacryloyl (GelMA) is another commonly used natural biomaterial for 3D hydrogel scaffolds in cancer research. GelMA is derived from gelatin, a natural protein obtained from collagen-rich sources. It is modified by adding methacryloyl groups that enable it to undergo photocrosslinking when exposed to ultraviolet (UV) light. This property allows GelMA to form stable hydrogel networks, making it suitable for creating 3D scaffolds that mimic the TME. The tunable mechanical and biochemical properties of GelMA hydrogels, their biocompatibility, and their ability to support cell growth make them valuable tools for studying cancer cell behavior, tumor invasion, drug screening, and other aspects of cancer research. Kim et al. [ 109 ] developed a 3D cell culture model for the bladder by employing a novel acellular matrix and bioreactor. GelMA was utilized as a 3D scaffold for the bladder cancer cell culture, with an optimal scaffold height of 0.08 mm and a crosslinking time of 120 s [ 110 ]. Subsequently, 5637 and T24 cells were cultured in 2D and 3D environments and subjected to rapamycin and Bacillus Calmette-Guérin (BCG) drug treatments.
It was found that the 3D bladder cancer cell culture model exhibited a faster establishment process and greater stability when compared to the 2D model. Moreover, the 3D-cultured cancer cells demonstrated heightened drug resistance and reduced sensitivity compared to the 2D-cultured cells. Additionally, the researchers observed cell-to-cell interaction and basal activity in the 3D model, closely resembling the in vivo environment. Along the same lines, Arya et al. [ 111 ] investigated the suitability of GelMA hydrogels as in vitro 3D culture systems for modeling key characteristics of metastatic progression in breast cancer, specifically invasiveness and chemo-responsiveness. The mechanical and morphological properties of the hydrogels were tuned by varying the percentage of GelMA used. Compression testing revealed that the stiffness of 10% GelMA hydrogels was within the range reported for breast tissue, making them suitable matrices for mimicking breast viscoelasticity in vitro. Cells cultured on 10% GelMA hydrogels also exhibited a higher proliferation rate than those on 15% GelMA in both cell lines tested, making the softer hydrogels robust systems for long-term cell culture. Furthermore, proliferation studies showed that the GelMA hydrogels could sustain breast cancer cells longer than 2D cultures. Overexpression of genes associated with invasiveness was also observed in 3D cultured breast cancer cells, suggesting potential changes important for metastatic progression. The response to chemotherapeutic drugs was evaluated, and it was observed that 3D spheroids of breast cancer cells cultured on GelMA hydrogels exhibited decreased sensitivity to taxane drugs like paclitaxel. The study highlighted the importance of an adequate matrix pore size for cell penetration, migration, and proliferation, and for exchanging oxygen, nutrients, and waste materials in and out of the 3D culture scaffolds. Significantly, these studies emphasized the importance of the 3D cancer cell culture model in establishing patient-like models. Utilizing such models can achieve a more precise evaluation of drug responses, potentially leading to advancements in the treatment of cancer and other diseases. Cells are known to respond to their mechanical environment in a process known as mechano-transduction, whereby they transduce mechanical stimuli into biochemical signals that prompt alterations in cellular behavior and function. Curtis et al. [ 112 ] investigated the influence of mechanical stimuli on the cell proliferation, growth, and protein expression of 4T1 breast cancer cells, serving as a model for cells that metastasize to bone. The researchers used 4T1 breast cancer cells and implanted them in gelatin-mTGase hydrogels that mimicked the mechanical properties of bone marrow. The hydrogels had moduli of either 1 or 2.7 kPa. The cells were cultured under different conditions, including static culture, perfusion of media through the hydrogel, and combined perfusion with cyclic mechanical compression for 1 h per day for 4 days. Control samples were cultured under free-swelling conditions. Immunostaining techniques were used to analyze the protein expression within the cell spheroids formed during the culture. The study found that mechanical stimuli significantly influenced the behavior of the 4T1 breast cancer cells. The cells formed spheroids during the culture period, with larger spheroids observed in statically cultured constructs than in those exposed to perfusion or compression.
In the stiffer gelatin, compressed constructs resulted in smaller spheroids compared to perfusion alone, while compression had no significant effect in the softer gelatin. The immunostaining revealed the expression of proteins associated with bone metastasis within the spheroids, including osteopontin, parathyroid hormone-related protein, and fibronectin. The proliferative marker Ki67 was present in all spheroids on day 4, and the intensity of Ki67 staining varied depending on the culture conditions and gelatin stiffness. The study highlighted the mechanical sensitivity of 4T1 breast cancer cells and demonstrated how mechanical stimuli can impact their proliferation and protein expression within soft materials that mimic the mechanical properties of bone marrow. The findings emphasized the role of the mechanical environment in the bone for both in vivo and in vitro models of cancer metastasis. Understanding the influence of mechanical factors on cancer cell behavior is crucial for developing effective strategies to prevent and treat metastasis to bone, potentially leading to improved clinical outcomes for patients with advanced cancer. Similarly, Cavo et al. [ 113 ] investigated the impact of substrate elasticity on breast adenocarcinoma cell activity using mechanically tuned alginate hydrogels. The study evaluated the viability, proliferation rates, and cluster organization of MCF-7 breast cancer cells in 3D alginate hydrogels compared to standard 2D environments. The elastic moduli of the different alginate hydrogels were measured using atomic force microscopy (AFM). The results demonstrated that substrate stiffness directly influenced cell fate in 2D and 3D environments. In the 3D hydrogels with an elastic modulus of 150–200 kPa, the MCF-7 cells exhibited uninhibited proliferation, forming cell clusters with 100 μm and 300 μm diameters after 1 and 2 weeks, respectively. This unimpeded cell growth observed in the softer hydrogels mimicked the initial stages of solid tumor pre-vascularization and growth. Furthermore, the multicellular, cluster-like conformation observed in the 3D hydrogels closely resembled the in vivo organization of solid tumors, demonstrating the advantage of 3D cancer models for replicating cell–cell and cell–matrix interactions. The study also highlighted the influence of microenvironment dimensionality on cellular morphology, as cells displayed a flat shape in 2D cultures while adopting a round shape in the 3D environment. Cell proliferation in the 3D setting depended highly on substrate stiffness, which impacted nutrient diffusion and intracellular signaling through a mechano-transduction mechanism. The findings underscore the importance of considering substrate stiffness in the design of 3D cancer models, as it directly affects cell viability, proliferation, and organization. By understanding the relationship between substrate stiffness and cellular behavior, researchers can develop more realistic in vitro models that better mimic the microenvironment of solid tumors. These models can advance our understanding of cancer development and aid in the development of targeted therapies by allowing for the investigation of cell–cell and cell–matrix interactions in a more accurate setting. Decellularized tissue scaffolds Decellularized tissues have had their cellular components removed, leaving behind the ECM. Decellularized tissues can be used as scaffolds for 3D cell culture, providing a natural environment for cells to grow and interact [ 114 ].
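Before turning to decellularized scaffolds in detail, it is worth noting how stiffness values such as the AFM-derived moduli reported by Cavo et al. are typically obtained: force-indentation curves are converted to a Young's modulus via a Hertz contact model, for a spherical tip F = (4/3)·E/(1−ν²)·√R·δ^(3/2). The sketch below inverts this relation; the probe radius, Poisson ratio, and force/indentation values are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def hertz_modulus(force_nN: float, indentation_nm: float,
                  tip_radius_um: float = 2.5, poisson: float = 0.5) -> float:
    """Young's modulus (Pa) from one AFM force-indentation point, spherical-tip Hertz model."""
    F = force_nN * 1e-9          # N
    d = indentation_nm * 1e-9    # m
    R = tip_radius_um * 1e-6     # m
    return 3.0 * F * (1.0 - poisson ** 2) / (4.0 * np.sqrt(R) * d ** 1.5)

# Hypothetical measurement on a soft hydrogel
E = hertz_modulus(force_nN=5.0, indentation_nm=400.0)
print(f"estimated Young's modulus: {E / 1e3:.1f} kPa")
```

In practice the whole approach curve is fitted rather than a single point, and the incompressibility assumption (ν = 0.5) is itself a convention for soft, hydrated gels.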
The use of decellularized tissues as 3D cell culture scaffolds offers several advantages. Firstly, they retain the intricate ECM composition, including structural proteins, growth factors, and signaling molecules, which play critical roles in cell behavior and tissue organization. This enables cancer cells to interact with the ECM in a manner more akin to in vivo conditions, influencing their adhesion, migration, invasion, and differentiation. Moreover, decellularized tissues offer spatial organization and architectural cues that guide cellular behavior. Preserving tissue-specific topography, such as vasculature, allows for studying angiogenesis and vascularization processes in cancer progression. These scaffolds also provide mechanical support and stiffness that influence cellular mechanotransduction, impacting cell morphology, proliferation, and gene expression patterns. They can be derived from various sources, including solid organs, such as the liver or lung, or specific tissue compartments, such as the ECM-rich decellularized basement membrane (see Fig. 4 ). Landberg et al. [ 115 ] hypothesized that a pre-clinical platform based on decellularized patient-derived scaffolds used as growth substrates could capture hidden, clinically relevant information, aid in modeling the individualized properties of tumor microenvironments, and be optimized for personalized treatment planning. Different decellularization techniques, such as chemical, physical, or enzymatic methods, remove cellular components while preserving the ECM integrity (see Table 6 ) [ 116 ]. The choice of decellularization method depends on the tissue type, desired scaffold characteristics, and the specific requirements of the study. Combinations of different techniques may also be employed to achieve optimal decellularization outcomes. However, challenges remain in the field. The immunogenicity and biocompatibility of decellularized tissues must be carefully considered to prevent adverse reactions when introducing foreign matrices into cell culture systems. Standardization and reproducibility of decellularization protocols are also crucial to ensure consistency across studies and facilitate comparison of results. Integration with advanced technologies, such as microfluidics or organ-on-a-chip systems, can further enhance the functionality and relevance of decellularized tissue models. D’Angelo et al. [ 117 ] developed a more representative 3D model of colorectal cancer (CRC) liver metastasis using patient-derived scaffolds. These scaffolds, created by decellularizing tissue-specific ECM, retain the metastatic microenvironment’s biological properties and structural characteristics. The HT-29 CRC cell line was cultured within these scaffolds, obtained explicitly from cancer patients. The study observed increased cell proliferation and migration in the cancer-derived scaffolds, highlighting their ability to provide a more conducive environment for tumor cell growth and spreading. Furthermore, the 3D culture system demonstrated a reduced response to chemotherapy: HT-29 cells cultured in the cancer-specific 3D microenvironments showed decreased sensitivity to treatment with 5-fluorouracil and a combination of 5-fluorouracil with irinotecan when used at standard IC50 concentrations. The use of patient-derived scaffolds allows for the study of colorectal cancer metastasis progression and the assessment of its response to chemotherapy agents, supporting the development of new therapeutic strategies and personalized treatments.
Additionally, it provides an opportunity to identify potential prognostic biomarkers and therapeutic targets specific to peritoneal metastasis. Varinelli et al. [ 118 ] conducted a study that employed a tissue-engineered model for investigating peritoneal metastases (PM) in vitro, yielding similar conclusions. The model involved seeding PM-derived organoids onto decellularized extracellular matrices (dECMs) sourced from the peritoneum, enabling the exploration of intricate interactions between neoplastic cells and the ECM in the PM system. Both neoplastic peritoneum and corresponding normal peritoneum tissues were utilized to generate 3D-dECMs. Utilizing confocal reflection and polarized light microscopy techniques, the study observed disparities in tissue texture and the distribution and integrity of individual collagen fibers between normal and neoplastic-derived tissues obtained from three distinct PM patients. The results demonstrated that 3D-dECMs derived from neoplastic peritoneum exhibited a notably higher proportion of Ki-67-positive cells after 5 and 12 days of culture. Furthermore, expression levels of specific genes critical for tissue architecture, stiffness, ECM remodeling, fibril generation, epithelial cell differentiation, resistance to compression, and regulation of angiogenesis were found to be elevated in 3D-dECMs generated from neoplastic tissue compared to those from normal tissue or Matrigel-based models. In summary, by utilizing patient-derived scaffolds and cutting-edge techniques, the researchers successfully developed more physiologically relevant models that significantly contribute to our comprehension of colorectal cancer and PM biology. These models, alongside others [ 119 – 122 ], offer valuable insights into the intricate interplay between tumor cells and the ECM, paving the way for the potential discovery of novel therapeutic targets and the development of personalized treatment strategies for peritoneal metastases. Furthermore, decellularized tissue scaffolds provide an efficient platform to study the interactions between different cellular components abundantly found in the TME, like macrophages and endothelial cells. Macrophages and endothelial cells are known for their involvement in cancer progression in the context of the ECM within solid tumors, where they are often found in large numbers [ 123 ]. Macrophages within the tumor (often referred to as tumor-associated macrophages or TAMs) can be “hijacked” by cancer cells and reprogrammed to support tumor growth and progression. For example, they can promote cancer cell proliferation, enhance blood vessel formation (angiogenesis), assist in tissue remodeling, and suppress the immune response against the tumor. Pinto et al. [ 123 ] investigated how human colorectal tumor matrices influence macrophage polarization and their subsequent role in cancer cell invasion. To facilitate this, a novel 3D-organotypic model was utilized using decellularized tissues from surgical resections of colorectal cancer patients. This model preserved native tissue characteristics, including major ECM components, architecture, and mechanical properties, while removing DNA and other cellular components. The study found that macrophages within tumor matrices displayed an M2-like anti-inflammatory phenotype, characterized by higher expression of IL-10, TGF-β, and CCL18, and lower expression of CCR7 and TNF.
Furthermore, it was observed that tumor ECM-educated macrophages effectively promoted cancer cell invasion through a mechanism involving CCL18, as demonstrated by Matrigel invasion assays. The high expression of CCL18 at the invasive front of human colorectal tumors correlates with advanced tumor staging, underscoring its clinical significance. The findings highlight the potential of using tumor-decellularized matrices as exceptional scaffolds for recreating complex microenvironments, thereby enabling a more comprehensive understanding of cancer progression mechanisms and therapeutic resistance. Besides TAMs, endothelial cells express various adhesion molecules and chemokines, such as selectins, integrins, and members of the immunoglobulin superfamily, which can interact with ligands on cancer cells, facilitating their adhesion to the endothelial cell layer. This adhesion is a critical step in the extravasation process, where cancer cells exit the bloodstream and invade surrounding tissues to form metastases. Moreover, endothelial cells can signal and recruit macrophages and other immune cells to the tumor site. Once there, macrophages can be “educated” by the tumor to adopt a pro-tumor phenotype, suppressing the immune response and promoting tumor growth. Therefore, decellularized matrices are suitable for studying such interactions as they closely resemble the natural tumor environment, including native adhesion sites, signaling molecules, and mechanical cues. Helal-Neto et al. [ 124 ] examined the influence of dECM produced by a highly metastatic human melanoma cell line (MV3) on the activation of endothelial cells and their intracellular cell differentiation signaling pathways. The researchers studied the differences in the ultrastructural organization and composition of melanocyte-derived ECM (NGM-ECM) and melanoma-derived ECM (MV3-ECM). Higher levels of tenascin-C and laminin and lower fibronectin expression were detected in the MV3-ECM. Moreover, endothelial cells cultured in the MV3-ECM underwent morphological transformations and exhibited increased adhesion, mobility, growth, and tubulogenesis. The interaction between the endothelial cells and the decellularized matrix induced integrin signaling activation, resulting in focal adhesion kinase (FAK) phosphorylation and its association with Src (a non-receptor tyrosine kinase). Src activation, in turn, stimulated the activation of vascular endothelial growth factor receptor 2 (VEGFR2), enhancing the receptor’s response to VEGF. VEGFR2 activation and the FAK–Src association were inhibited by blocking the αvβ3 integrin, which reduced tubulogenesis. In conclusion, the findings suggested that the interaction of endothelial cells with melanoma ECM triggered integrin-dependent signaling, which led to the activation of the Src pathway that sequentially potentiated VEGFR2 activation and enhanced angiogenesis. Thus, progress in cancer biology relies on understanding the specific cellular responses influenced by matrix signals within the ECM, as the matrix inherently imposes spatial variations in cellular signaling, composition, topography, and biochemical factors. Table 7 summarizes some studies using hydrogel and decellularized tissue scaffolds for 3D cell cultures. Hybrid scaffolds Integrating multiple scaffold types offers the potential to create 3D cell culture systems that closely mimic the physiological conditions of living tissues.
This approach enables researchers to develop more accurate and biologically relevant models for studying cellular behavior, disease progression, and therapeutic responses. By combining different scaffold materials, such as natural and synthetic polymers or hydrogels, researchers can replicate the complexity and heterogeneity of the native tissue microenvironment. These hybrid scaffolds can provide a range of physical, chemical, and mechanical cues that influence cell behavior, including cell adhesion, migration, proliferation, and differentiation. Additionally, the combination of scaffolds can enhance the functionality of the 3D cell culture systems by incorporating specific features, such as the controlled release of growth factors or the inclusion of microvascular networks. Utilizing diverse scaffold types in 3D cell culture offers an innovative and promising approach for advancing our understanding of tissue biology and disease mechanisms and for developing more effective therapies. Bassi et al. [ 98 ] addressed the limitations of conventional therapies for osteosarcoma, a type of bone cancer, by introducing two innovative approaches in tumor engineering that aim to improve therapy outcomes. The study utilized hydroxyapatite-based scaffolds that mimic the in vivo TME, specifically emphasizing the CSC niche. Two types of scaffolds were employed: a biomimetic hybrid composite scaffold obtained through biomineralization, involving the direct nucleation of magnesium-doped hydroxyapatite (MgHA) on self-assembling collagen fibers (MgHA/Coll), and porous hydroxyapatite scaffolds (HA) produced by a direct foaming process. These scaffolds provided a framework for the subsequent investigation of the biological performance of human osteosarcoma cell lines (MG63 and SAOS-2) and enriched CSCs within these complex 3D cell culture models. Immunofluorescence and other characterization techniques were employed to evaluate the response of the osteosarcoma cell lines and CSCs to the biomimetic scaffolds. The results demonstrated the successful formation of sarcospheres, which are stable spheroids enriched with CSCs, with a minimum diameter of 50 μm. Comparing the advanced 3D cell culture models with conventional 2D culture systems, the study revealed the former’s superiority in mimicking the osteosarcoma stem cell niche and enhancing the predictivity of preclinical studies. The findings underscore the significance of the TME and emphasize the potential of combining CSCs with biomimetic scaffolds as a promising approach to developing novel therapeutic strategies for osteosarcoma. Further efforts can be focused on developing more sophisticated 3D models that accurately replicate the heterogeneity of the osteosarcoma microenvironment, incorporating patient-derived cells and elements such as immune cells and vasculature. Additionally, the advanced 3D cell culture models can serve as valuable tools for drug screening and personalized medicine approaches, further contributing to advancing osteosarcoma research and treatment strategies. A unique cell culture technique known as “sequential culture” was used to establish a biomimetic bone microenvironment that facilitated the mesenchymal-to-epithelial transition (MET) of metastasized prostate cancer cells [ 141 ]. The approach involved incorporating bioactive factors from the osteogenic induction of human mesenchymal stem cells (MSCs) within porous 3D scaffolds, specifically polymer–clay composite (PCN) scaffolds, created by incorporating hydroxyapatite (HAP) clay into PCL.
The researchers modified sodium montmorillonite (Na-MMT) clay using 5-aminovaleric acid to create HAPclay through in situ hydroxyapatite biomineralization within the intercalated nanoclay. They performed RNA extraction and quantitative real-time polymerase chain reaction (qRT-PCR) analysis to investigate gene expression changes. Additionally, they conducted a comparative analysis of bone metastasis between the low and high metastatic cell lines, providing insights into their differential responses to the bone microenvironment. It was shown that both the highly metastatic prostate cancer cell line PC-3 and the non-metastatic cell line MDA-PCa-2b underwent MET when exposed to the biomimetic bone microenvironment in the 3D scaffold model. However, notable differences were observed in their morphological characteristics and cell–cell adhesion, suggesting distinct responses to the microenvironment. Additionally, quantitative variations in gene expression were observed between tumors generated using the two cell lines in the bone microenvironment. These findings are essential for developing targeted therapeutic strategies against prostate cancer bone metastasis. Bai et al. [ 142 ] conducted a study in which they incorporated graphene oxide (GO) onto a copolymer of polyacrylic acid-g-polylactic acid (PAA-g-PLLA) to create a stimuli-responsive scaffold. This scaffold, combined with PCL and gambogic acid (GA), exhibited a selective response towards tumors and demonstrated a significant accumulation of GO/GA in in vitro breast tumor cells (MCF-7 cells) under acidic conditions (pH 6.8), while showing minimal impact on normal cells (MCF-10A cells) at physiological pH (pH 7.4). The study further revealed that the synergistic use of pH-responsive photo-thermal conversion was more effective in inhibiting tumor growth than independent treatments. In vivo experiments showed remarkable tumor suppression (99% reduction within 21 days) through tumor tissue disintegration and degeneration when treated with GO-GA scaffolds combined with photo-thermal therapy, in comparison to control groups or those treated with either GO-GA scaffolds or near-infrared (NIR) irradiation alone. Microfluidics provides a versatile platform for 3D cell culture, offering both scaffold-based and scaffold-free approaches. Researchers can tailor the platform to suit the specific requirements of their experiments, whether involving cell-laden scaffolds or the aggregation of cells to form spheroids or organoids. The microfluidic setup allows for precise control over the microenvironment, including the flow of nutrients and oxygen, as well as the ability to introduce gradients of specific molecules. Lee et al. [ 143 ] utilized soft lithography to fabricate a 7-channel microchannel plate using poly-dimethylsiloxane (PDMS). Within separate channels, PANC-1 pancreatic cancer cells and pancreatic stellate cells (PSCs) were cultured within a collagen I matrix. The study observed the formation of 3D tumor spheroids by PANC-1 cells within five days. Intriguingly, the presence of co-cultured PSCs resulted in an increased number of spheroids, suggesting a potential influence of PSCs on tumor growth. In the co-culture setup, PSCs exhibited heightened expression of α-smooth muscle actin (α-SMA), a marker associated with fibroblast activation, as well as various EMT-related markers, including vimentin, transforming growth factor-beta (TGF-β), TIMP1, and IL-8.
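As an aside, the qRT-PCR gene expression comparisons used in the bone-metastasis study above (and in many of the marker analyses cited throughout this section) are conventionally reduced to fold-changes with the 2^-ΔΔCt (Livak) method. A minimal sketch, using hypothetical Ct values for an EMT marker normalized to a housekeeping gene:

```python
def fold_change_ddct(ct_target_sample: float, ct_ref_sample: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression by the 2^-ddCt (Livak) method.

    Normalizes the target gene to a reference (housekeeping) gene in each
    condition, then compares sample (e.g., 3D co-culture) to control (e.g., 2D).
    """
    ddct = (ct_target_sample - ct_ref_sample) - (ct_target_control - ct_ref_control)
    return 2.0 ** (-ddct)

# Hypothetical Ct values: vimentin in 3D co-culture vs 2D control, GAPDH as reference
print(f"fold change: {fold_change_ddct(24.1, 17.9, 26.4, 18.0):.1f}x")
```

The method assumes near-100% amplification efficiency for both genes; where that does not hold, efficiency-corrected variants are used instead.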
Returning to Lee et al.’s co-culture model, these findings indicated that PSCs may induce an EMT-like phenotype in PANC-1 cells, potentially promoting tumor invasiveness, chemoresistance, and metastasis. Upon treating the co-culture with gemcitabine, the survival of the spheroids did not exhibit significant changes. However, when gemcitabine was combined with paclitaxel, spheroid growth was notably inhibited. The model revealed a complex interplay between PANC-1 cells and PSCs within the TME, and the combination of gemcitabine and paclitaxel showed promise in overcoming resistance and inhibiting tumor growth. These findings are significant for understanding the interplay between tumor cells and the surrounding stromal cells, as tumor-stroma interactions play a critical role in cancer progression and therapy response. Using microfluidic-based 3D co-culture models allows researchers to better recapitulate the in vivo conditions, providing a more accurate representation of tumor behavior and therapeutic responses. Likewise, Chen et al. [ 144 ] developed a microchannel plate-based co-culture model to recreate the in vivo TME by combining Hepa1-6 tumor spheroids with JS-1 stellate cells as a liver cancer model. The model aimed to mimic key aspects of EMT and chemoresistance observed in tumors. The integration of these cell types in 3D concave microwells allowed for the formation of 3D tumor spheroids in 3 days. The experimental setup was optimized to ensure optimal culture proliferation conditions and appropriate interactions between Hepa1-6 and JS-1 cells. Co-cultured JS-1 cells displayed noticeable changes in cellular morphology, including an increase in the expression of α-SMA. In contrast, the co-cultured Hepa1-6 spheroids exhibited higher expression levels of TGF-β1 than those cultured alone. These findings suggested that JS-1 stellate cells induced an EMT-like phenotype in the Hepa1-6 cells, potentially contributing to increased invasiveness and resistance to chemotherapy. Jeong et al. [ 145 ] conducted a similar study involving the formation of 3D spheroids composed of human colorectal carcinoma cells (HT-29) using a microfluidic chip. They reported a notable enhancement in HT-29 growth when co-cultured with fibroblasts (see Fig. 5 ), demonstrated by a 1.5-fold increase in the percentage change in spheroid diameter over 5 days. Furthermore, after 6 days of culture, the co-cultured spheroids exhibited reduced expression of Ki-67, a marker associated with proliferation, while showing increased fibronectin expression. These findings indicated altered cellular behavior compared to the spheroid monocultures. The presence of fibroblasts in the co-culture environment also led to their activation, as evidenced by an upregulation in the expression of α-SMA and an increase in migratory activity. This reciprocal interaction between the spheroids and fibroblasts within the microfluidic chip established a dynamic relationship. Additionally, when exposed to paclitaxel, the co-culture displayed a survival advantage over 2D monoculture, suggesting a potential role of fibroblasts in conferring drug resistance. Integrating the 3D tumor spheroids and cancer-associated fibroblasts (CAFs) within a collagen matrix-incorporated microfluidic chip provided a valuable tool for studying the TME and evaluating drug screening and efficacy.
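Growth readouts like Jeong et al.'s percentage change in spheroid diameter understate the underlying change in cell mass, because spheroid volume scales with the cube of the diameter. A short illustration with hypothetical diameters (not values from the study):

```python
d_day0, d_day5 = 180.0, 234.0  # spheroid diameters in micrometers (hypothetical)

diameter_change_pct = 100.0 * (d_day5 - d_day0) / d_day0
volume_fold = (d_day5 / d_day0) ** 3  # volume scales with diameter cubed

print(f"diameter change: {diameter_change_pct:.0f}%")   # 30%
print(f"volume fold-change: {volume_fold:.1f}x")        # ~2.2x
```

This is worth keeping in mind when comparing diameter-based growth metrics across studies that instead report volume or cell counts.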
This approach allowed for the replication of essential interactions between tumor cells and stromal components, which are known to influence cancer progression and therapeutic response. By utilizing the proposed microfluidic chip-based model, researchers can delve into the intricate dynamics of the TME and explore novel therapeutic approaches. The ability to control and better mimic in vivo conditions within the chip provides a valuable platform for investigating drug responses and evaluating the effectiveness of anticancer treatments. Further exploration and refinement of this model could lead to significant advancements in our understanding of tumor biology and the development of targeted therapies for improved patient outcomes. Table 8 summarizes some studies using microfluidic-based systems to develop 3D cell cultures.

Challenges and future perspectives

While 3D cell culture offers many advantages over traditional 2D culture, it also presents unique challenges that must be addressed to fully realize its potential for advancing research. One significant challenge is maintaining a stable and reproducible culture system. 3D cell culture systems often require specialized equipment, such as bioreactors and microfluidic devices, which can be expensive and difficult to use. These systems can be harder to reproduce than 2D systems because of the increased complexity and heterogeneity of the culture environment: cells are often embedded in matrices or scaffolds, making it difficult to control factors such as temperature, pH, and the presence of growth factors and/or other signaling molecules [149]. In addition, there is often a high degree of variability between different batches of cells and between experiments, making it difficult to draw statistically supported conclusions.

For 3D cell cultures, adherence to Good Manufacturing Practice (GMP) principles is essential for translating these advanced models from research to clinical and commercial applications. However, several challenges arise when implementing GMP standards, including standardization of culture conditions, scalability, quality control, sourcing of raw materials and biologics, regulatory compliance, data integrity, and documentation. GMP-compliant manufacturing processes require high reproducibility and control over critical parameters such as cell sourcing, culture media, culture supplements, and environmental conditions [150, 151]. As mentioned above, achieving this consistency can be challenging, given the inherent biological variability of primary cells and the sensitivity of 3D cultures to slight changes in culture conditions. Furthermore, meeting regulatory requirements is a paramount challenge in translating 3D spheroid cultures to clinical applications. Regulatory bodies, such as the Food and Drug Administration (FDA) in the United States and the European Medicines Agency (EMA) in Europe, have specific guidelines for the use of cell-based therapies and products [152]. GMP compliance is necessary to navigate these regulatory pathways and obtain approval for clinical trials and commercialization. Moreover, oxygen accessibility is a critical consideration in 3D cell culture methods, and its heterogeneity within these environments poses a significant challenge in replicating physiological conditions and obtaining accurate experimental results.
Cells located in the interior of 3D structures, such as spheroids, often encounter limited oxygen availability due to microenvironmental factors (e.g., tumor spheroids naturally develop hypoxic regions, mirroring the irregular vascularization of tumors) and diffusion barriers (e.g., densely packed cells, ECM, scaffolding matrices) [153]. As cells proliferate and the 3D structure grows, oxygen demand rises while the distance oxygen must traverse from the surrounding culture medium increases, progressively hindering diffusion. The result is an oxygen gradient: cells near the periphery have sufficient oxygen, while those in the core encounter oxygen deficiency, leading to hypoxia. Hypoxic core cells often exhibit altered gene expression, reduced proliferation, and changes in metabolic pathways as they enter a dormant state and cease cycling when deprived of oxygen and nutrients. This reduced activity renders them relatively resistant to cytostatic drugs that predominantly target actively dividing cells, leading to increased drug resistance, as is often observed in solid tumors [154, 155]. Confocal microscopy can be used to visualize dormant cells by labeling them with a nucleoside analog, allowing their quantification. The analog is diluted in actively dividing cells but retained in quiescent, non-dividing cancer cells, providing a valuable tool for distinguishing them from the surrounding actively proliferating cells [156]. By leveraging this characteristic, 3D spheroids offer potential avenues for developing novel therapeutics targeting cancer cells resistant to cytostatic anticancer drugs. Wenzel et al. [157] cultivated T47D breast cancer cells in 3D cultures and used confocal imaging to differentiate cells within the inner core from those in the surrounding outer regions. Cells in the inner core, experiencing limited access to oxygen and nutrients, exhibited reduced metabolic activity compared with their counterparts in the outer regions. By screening small molecule libraries against these 3D cultures, the authors identified nine compounds that selectively killed the inner-core cancer cells while sparing the more actively proliferating outer cells. The identified drugs primarily affected the respiratory chain pathway, consistent with the altered metabolic activity of oxygen-deprived cells transitioning from aerobic to anaerobic metabolism. Hence, compounds selectively targeting dormant cancer cells significantly improved the effectiveness of commonly employed cytostatic anticancer drugs. Alternatively, microfluidic devices that create controlled oxygen gradients within cultures, oxygen-permeable materials, and oxygen-releasing compounds can be used to provide a more uniform oxygen distribution in vitro. However, it is important to acknowledge that these strategies may not fully replicate the complexity of oxygen gradients in real tissues [158].
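The depth of the hypoxic core described above can be estimated with a classic steady-state reaction-diffusion balance. The following is a minimal illustrative sketch in Python, assuming zeroth-order oxygen consumption; the radius, diffusivity, consumption rate, and surface concentration below are placeholder assumptions for illustration, not measurements from the cited studies:

```python
import numpy as np

# Steady-state O2 in a sphere with zeroth-order uptake Q (placeholder values):
#   C(r) = C_s - (Q / (6 * D)) * (R^2 - r^2), clipped at zero.
R = 250e-6    # spheroid radius (m) - assumed
D = 2.0e-9    # effective O2 diffusivity in tissue (m^2/s) - assumed
Q = 5.0e-2    # volumetric O2 consumption rate (mol m^-3 s^-1) - assumed
C_s = 0.2     # O2 concentration at the spheroid surface (mol m^-3) - assumed

r = np.linspace(0.0, R, 200)
profile = np.clip(C_s - (Q / (6.0 * D)) * (R**2 - r**2), 0.0, None)

# Radius below which the model predicts complete O2 depletion (anoxic core).
core = R**2 - 6.0 * D * C_s / Q
r_anoxic = np.sqrt(core) if core > 0 else 0.0

print(f"O2 at the center: {profile[0]:.3f} mol/m^3")
print(f"Predicted anoxic core radius: {r_anoxic * 1e6:.0f} um of {R * 1e6:.0f} um")
```

With these placeholder values, the model predicts an anoxic core of roughly 120 µm within a 250 µm spheroid, illustrating why interior cells become dormant while peripheral cells continue cycling.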
Boyce et al. [159] presented the design and characterization of a modular device that capitalized on the gas-permeable properties of silicone to create oxygen gradients within cell-containing regions. The microfabricated device was constructed by stacking laser-cut acrylic and silicone rubber sheets; the silicone not only facilitated oxygen gradient formation but also served as a barrier, separating the flowing gases from the cell culture medium to prevent evaporation or bubble formation during extended incubation periods. The acrylic components provided structural stability, ensuring a sterile culture environment. Measurements with oxygen-sensing films showed that gradients with varying ranges and steepness could be achieved in the microdevice by adjusting the composition of the gases flowing through the silicone elements. Furthermore, a cell-based reporter assay illustrated that cellular responses to hypoxia were directly proportional to the oxygen tension established within the system, demonstrating the device's efficacy.

Another practical challenge in 3D cultures arises from the intricacy of extracting cells from biomaterial-based 3D constructs. Typically, the construction of degradable hydrogel scaffolds involves integrating breakable crosslinks and/or cleavable components into the polymer structure or incorporating naturally biodegradable ECM constituents such as hyaluronic acid, laminin, fibronectin, and collagen [160]. Yet traditional dissociation techniques are notably inefficient and are influenced by the inherent structural complexities of the culture system. Enzymatic degradation, for example by collagenase, is a widely employed method for retrieving cells from collagen-based 3D cell culture scaffolds. The enzyme is selected to match the specific collagen type in the scaffold. During incubation, collagenase enzymatically cleaves the collagen fibers, releasing cells that were embedded in or adhered to these fibers. Once the collagen has been broken down, the cells are collected as a suspension in the culture medium [161]. Cell viability and functionality assessments are typically performed afterward to confirm the cells' health and functionality. While enzymatic degradation of 3D cell culture scaffolds is common, it remains an intricate approach with several limitations. The impact of collagenase or other enzymes on cell viability and functionality should not be underestimated: careful optimization of digestion time and enzyme concentration is essential to balance efficient scaffold degradation against preserving cell quality [162]. Additionally, potential changes in cell phenotype during digestion are a significant concern, necessitating diligent monitoring of digestion parameters. In complex 3D scaffolds, particularly those with intricate structures, enzymatic digestion may be less effective, prompting researchers to explore alternative retrieval methods or adapt the digestion process. Ethical considerations also come into play, especially when working with human or animal-derived cells, raising concerns about the use of enzymes like collagenase. Adherence to ethical guidelines and institutional regulations is crucial for maintaining responsible and ethical research practices. Hence, extensive research efforts have been directed toward developing improved techniques for cell retrieval from scaffold-based 3D cell cultures without compromising the cells' integrity. For instance, Kyykallio et al. [163] developed an innovative pipeline for extracting extracellular vesicles (EVs) from 3D cancer spheroids using nanofibrillar cellulose (NFC) scaffolds as a cell culture matrix.
This pipeline encompassed two distinct approaches: a batch method optimized for maximal EV yield at the conclusion of the culture period, and a harvesting method designed to facilitate time-dependent EV collection, allowing integration of EV profiling with spheroid development. Both approaches offered convenient setup and quick execution, and reliably produced substantial quantities of EVs. Compared with scaffold-free 3D spheroid cultures on ultra-low attachment plates, the NFC-based approach demonstrated similar EV production per cell while offering scalability, preserved cell phenotype and integrity, and greater operational simplicity, ultimately leading to higher EV yields. Another approach is based on cell-mediated degradation of hydrogel scaffolds, where living cells actively break down the hydrogel structure [164]. This degradation mechanism is particularly relevant in tissue engineering and regenerative medicine. When cells are encapsulated within a hydrogel scaffold, they can secrete enzymes and other molecules that interact with its components, leading to its gradual breakdown. As cells proliferate and remodel their microenvironment, they may alter the scaffold's properties and eventually facilitate its degradation. This dynamic process allows for the controlled release of cells, growth factors, and other bioactive substances within the hydrogel, making it a valuable technique for drug delivery applications.

While synthetic degradable polymer scaffolds are significant for developing 3D cell culture models, a concern regarding their in vitro and in vivo biocompatibility is the presence of potentially toxic elements and chemicals used during the polymerization of synthetic hydrogels or the crosslinking of natural polymer hydrogel precursors, especially when the reaction conversion is less than 100%. Such scaffolds can release unreacted monomers, stabilizers, initiators, organic solvents, and emulsifiers; these components are integral to the hydrogel preparation process but may cause harm if they seep into the seeded cells or tissues [165, 166]. For instance, widely employed free radical photo-initiators (e.g., Irgacure) have been observed to diminish cell viability even at minimal concentrations [167, 168]. Consequently, hydrogel scaffolds intended for embedding cells in 3D cultures typically require purification (e.g., by dialysis or solvent washing) to eliminate any residual hazardous chemicals before seeding. However, in certain scenarios, purification of hydrogel scaffolds is more challenging or unfeasible, particularly when dealing with hydrogels generated through in situ gelation. In such cases, cells are introduced to the reactants necessary for hydrogel synthesis while still in a pre-polymer solution. As a result, when employing in situ gelation techniques, utmost caution must be exercised to ensure that all components are non-toxic and safe.

Furthermore, another challenge associated with 3D cell culture is the difficulty of characterizing the cellular response to drugs and other therapeutic agents. In 2D cell culture, cells are typically analyzed using a range of standard assays that are well established and easy to interpret. In 3D cell culture, however, there is often a lack of such standardized assays and protocols. Fang and Eglen [169] highlighted that the cultures' complex morphology, functionality, and architecture hamper the application of some well-developed biochemical assays to 3D systems.
Cells tend to aggregate into dense and/or large clusters over time, even in macroporous scaffolds, creating diffusional limitations for in situ characterization assays: the diffusion and confinement of gases, nutrients, waste, and reagents within the system are impeded, compounded by challenges in quantifying and normalizing data between different biomimetic cultures [170-172]. For instance, Totti et al. [173] demonstrated that assessing a culture of pancreatic cancer cells in macroporous polyurethane foam-type scaffolds with the 3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium (MTS) assay showed minimal differences between various scaffold conditions (e.g., ECM coatings on the scaffolds), whereas sectioning, immunostaining, and imaging revealed clearer distinctions in cell proliferation, morphology, and growth between the conditions. Likewise, the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) assay failed to capture differences in the viability of pancreatic cells cultured in polyurethane scaffolds after drug and irradiation screening, differences that were detected using advanced microscopy and imaging [174]. Hence, it is crucial for researchers to carefully consider the analytical approach that aligns with their study objectives before commencing the analysis of any 3D culture. They must also be aware that some classical gold-standard approaches used in 2D cultures may not be directly applicable in 3D settings; Hamdi et al. [175] showed that it is unfeasible to extract cells from spheroids for the colony formation assays used to develop post-treatment survival curves. Consequently, the researchers suggested in situ characterization readouts that are novel and/or different from existing 2D culture protocols.

Using stem cell and differentiation markers is crucial for characterizing and monitoring the cellular composition and differentiation status within 3D spheroids. These markers can help researchers achieve specific goals, such as assessing the differentiation potential of stem cells, tracking the progression of differentiation, and studying the dynamics of cell populations in the spheroids [176, 177]. However, using such markers in 3D spheroid cultures presents certain challenges that need to be addressed for accurate and meaningful results. One primary challenge is the heterogeneity of stem cells within spheroids. Spheroids often comprise a mixture of stem cells and differentiated cells, so stem cell markers may not exclusively identify and isolate the stem cell population, making it difficult to study the specific behavior of stem cells within the spheroid. Another challenge is variability in the expression of stem cell markers: expression can fluctuate spatially and temporally within the spheroid, making it complex to track and interpret changes in marker expression over time. Additionally, in larger spheroids, stem cell markers may not effectively penetrate the core, limiting the ability to assess the stem cell population in the inner regions [176, 177]. Researchers can employ several strategies to overcome these challenges and effectively use stem cell markers in 3D spheroid cultures [178, 179]. One method involves combining stem cell markers with other cellular markers to better characterize the cellular composition within the spheroid.
This multi-marker approach can help mitigate the issues related to marker heterogeneity. Moreover, live imaging techniques, such as confocal microscopy, can provide real-time insights into the dynamics of marker expression within spheroids. Controlling the size of spheroids is another strategy to enhance marker penetration and access to the innermost cells; microfluidic techniques allow accurate regulation of spheroid size, helping markers penetrate all regions of the spheroid [178, 179]. Additionally, single-cell analysis methods, such as single-cell RNA sequencing and proteomic analysis, enable the characterization of individual cells within spheroids. This approach can identify unique gene or protein expression patterns and shed light on the behavior of stem cell populations. Another valuable strategy is creating spheroids with genetically encoded stem cell reporters, which produce fluorescent or luminescent signals in stem cells, making them more visible and trackable. Lastly, mimicking the stem cell niche or microenvironment within 3D culture conditions can help maintain stemness and marker expression in spheroids [179].

Although imaging provides valuable information about cell distribution and binding, quantitative measurements using image analysis in 3D cultures are often lacking because they require cell count consistency across samples [180]. The challenge lies in the inability to visualize the whole cell population, making it difficult to obtain accurate and reliable data from the entire culture. This stems from the hampered diffusion of fluorescent markers, primarily because of their large size, compounded by the inherent heterogeneity of 3D cultures. One potential solution is to measure cell number from imageable cross-sections; however, Sirenko et al. [181] noted that light interference and dye diffusion limitations produced unreliable results, as the number of cells counted substantially differed from the number of cells seeded.
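To make the cross-section counting approach concrete, the following is a minimal sketch, assuming a single 2D grayscale image of a nuclear stain; the threshold choice and size filter are illustrative assumptions, not the protocol of the cited study, and touching nuclei would additionally require watershed splitting:

```python
import numpy as np
from skimage import filters, measure, morphology

def count_cells_in_section(img: np.ndarray, min_area: int = 30) -> int:
    """Count nuclei in a 2D grayscale cross-section (illustrative pipeline)."""
    # Otsu's threshold separates stained nuclei from background.
    mask = img > filters.threshold_otsu(img)
    # Drop debris smaller than a plausible nucleus (min_area is a placeholder).
    mask = morphology.remove_small_objects(mask, min_size=min_area)
    # Label connected components; the largest label equals the object count.
    return int(measure.label(mask).max())

# Synthetic example: dim background with three bright square "nuclei".
rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.02, (128, 128))
for y, x in [(30, 30), (64, 90), (100, 50)]:
    img[y - 4:y + 4, x - 4:x + 4] += 1.0
print(count_cells_in_section(img))  # -> 3
```

Even such a pipeline samples only one plane, however; the light interference and dye penetration issues noted above mean that single-section counts must be interpreted with caution.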
In addition, technical limitations such as prohibitive costs and limited scalability must be considered [149]. Implementing 3D culture systems may incur higher costs than 2D culture systems, attributable to the requirement for specialized equipment, materials, and expertise [182, 183]. Similarly, scaling up 3D culture systems for industrial or clinical applications can be challenging due to the increased complexity of the culture environment and the need for specialized equipment [184]. This can limit the widespread adoption of 3D culture techniques in these settings.

Significant strides have been made in creating dynamic scaffolds that can respond to or guide resident cells [185]. For example, thermoresponsive hydrogels such as poly-N-isopropylacrylamide (pNIPAm) have proven effective for harvesting cell populations [186, 187]. Moreover, the fusion of microscale cell culture technologies with adaptable hydrogel designs has facilitated various investigations, including studies of cell migration within microfluidic hydrogels and high-throughput screening platforms for exploring interactions between cells and materials [188]. Notably, the mechanobiology field has embraced mechanically dynamic hydrogels that can stiffen, soften, or reversibly transition between these states to examine cellular responses. These dynamic substrates offer a means to scrutinize how mechanical cues influence cell behavior, much as soluble factors have been studied over past decades [189]. Techniques for introducing heterogeneity and multiple cell types within 3D constructs are also advancing, including innovative methods in which hydrogels serve as bio-inks to print cells, either layer-by-layer from a 2D base or directly within a 3D space enclosed by another hydrogel. As these platforms progress, they are expected to become more widely accessible [190, 191]. In the interim, it remains crucial to maintain an open and collaborative dialogue between cell biologists, materials scientists, and engineers. This collaborative effort will ensure that the next generation of scaffold-based 3D cell culturing systems is well equipped to address the significant challenges posed by increasing biological and technical complexity.
Acknowledgements The authors would like to acknowledge the financial support of the American University of Sharjah Faculty Research Grants, the Al-Jalila Foundation [AJF 2015555], the Al Qasimi Foundation, the Patient's Friends Committee-Sharjah, the Biosciences and Bioengineering Research Institute [BBRI18-CEN-11], GCC Co-Fund Program [IRF17-003], the Takamul Program [POC-00028-18], the Technology Innovation Pioneer (TIP) Healthcare Awards, Sheikh Hamdan Award for Medical Sciences [MRG-57-2019-2020], and the Dana Gas Endowed Chair for Chemical Engineering. We also would like to acknowledge student funding from the Material Science and Engineering Ph.D. program at AUS. Author contributions WHA drafted the manuscript. GAH and WGP reviewed and edited the manuscript. All authors read and approved the final version. Data availability Not applicable. Declarations Competing interests The authors declare no competing interest.
Background The increase of antibiotic resistance in the era of modern medicine represents one of the most important challenges facing the global health community. At the center of this problem is Haemophilus influenzae (H. influenzae), a bacterium that historically has been the leading cause of bacterial meningitis and other invasive conditions in pediatrics [1]. This Gram-negative coccobacillus causes a variety of pathologies, ranging from relatively benign otitis media to severe diseases such as septicemia [2]. Before the advent of the H. influenzae type b (Hib) vaccine, the global burden of invasive Hib disease was substantial. Although the vaccine has markedly reduced the burden of Hib disease, non-typeable H. influenzae (NTHi) strains have emerged and been implicated, particularly in respiratory pathologies [3, 4]. Several factors can accelerate the development of antibiotic resistance in this bacterium, and the pace and extent of resistance, especially in species such as H. influenzae, are disconcertingly high [1, 5]. This intensification of resistance can be attributed to a combination of factors, including excessive use of antibiotics, self-administration of drugs, short treatment courses, and unrestricted antibiotic procurement in certain areas [6-8]. The implications are profound: the ineffectiveness of monotherapy leads to prolonged disease duration, increased healthcare costs, and increased mortality [9]. According to published reports, antibiotic resistance in H. influenzae spans various pharmaceutical agents, from traditional drugs such as ampicillin and chloramphenicol to newer compounds such as fluoroquinolones [10, 11]. The genetic basis of such resistance lies mainly in the acquisition of resistance-conferring genetic elements through mechanisms such as conjugation and transformation. This evolving genetic landscape poses significant challenges to existing treatment strategies and complicates the management of what was once a straightforward bacterial infection [12]. In the present meta-analysis, we conducted a comprehensive study of the global antibiotic resistance of this bacterium by geographical distribution, focusing on multidrug-resistant (MDR) H. influenzae strains, to inform antibiotic stewardship strategies against this pathogen in healthcare settings.
Methods Search strategy In the present study, we conducted a comprehensive systematic review and meta-analysis of the prevalence of multidrug-resistant H. influenzae worldwide, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist [13]. Major electronic databases, namely Medline, ISI Web of Science, Scopus, EMBASE, Google Scholar, and ProQuest, were searched. Key terms aligned with the Medical Subject Headings (MeSH), such as "Haemophilus influenzae", "H. influenzae", "Antibiotic resistance", "Multi-drug resistance", and "MDR", were combined in the search. The search had no restrictions on language or date of publication. To ensure completeness, article citations were manually checked to identify any potentially overlooked studies. Study selection To assess eligibility, the title, abstract, and full text of relevant studies were evaluated. Inclusion criteria were: (1) original studies that investigated the prevalence of MDR H. influenzae in clinical samples; (2) articles on H. influenzae infection in human subjects; (3) retrospective and cross-sectional studies; and (4) articles that evaluated antimicrobial/antibiotic susceptibility testing (AST) according to the Clinical and Laboratory Standards Institute (CLSI) guideline. Exclusion criteria were: (1) duplicate studies; (2) non-original article types (e.g., letters to the editor, case reports, reviews, and congress abstracts); (3) animal studies; and (4) studies with insufficient information. Two independent authors participated in this step, and discrepancies were resolved through discussion. Quality appraisal and data extraction The Joanna Briggs Institute (JBI) checklist was used to assess the quality of relevant studies [14]; studies were included if they scored at least 6. The required information was then extracted from eligible studies, including: (I) first author; (II) publication year; (III) country; (IV) infection type; (V) number of H. influenzae isolates; (VI) prevalence of antibiotic resistance to ampicillin, amoxicillin, tetracycline, chloramphenicol, cefotaxime, ciprofloxacin, rifampin, sulfamethoxazole, cefuroxime, azithromycin, ceftriaxone, levofloxacin, and meropenem; (VII) prevalence of beta-lactamase-producing strains; (VIII) prevalence of MDR H. influenzae; and (IX) diagnostic method. Two independent authors were involved in the process, and discordance was resolved by a third author. Statistical analysis Data were synthesized using the Comprehensive Meta-Analysis (CMA) software, version 2.2 (Biostat, Englewood, NJ). The Cochrane Q-test (p < 0.05) and the I-squared (I²) index were used to measure the heterogeneity of studies. In the case of significant heterogeneity, a random-effects model based on the DerSimonian and Laird approach was applied. In addition, meta-regression techniques were used to investigate the impact of potential moderators. Publication bias was evaluated through Egger's test, Begg's test, and funnel plots. If significant publication bias was detected, the trim-and-fill method was used to estimate potential missing studies.
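To make the pooling procedure concrete, below is a minimal sketch of DerSimonian-Laird random-effects pooling of prevalences on the logit scale, with Cochran's Q and the I² index. The per-study counts are hypothetical illustrations, not data from the included articles, and the original analysis was performed in the CMA package rather than in code like this:

```python
import numpy as np

# Hypothetical per-study data: MDR isolates (events) out of isolates tested (n).
events = np.array([12, 40, 9, 75, 22])
n = np.array([60, 150, 80, 300, 90])

# Logit transform stabilizes proportions near 0 or 1.
p = events / n
theta = np.log(p / (1 - p))
var = 1 / events + 1 / (n - events)   # approximate variance of a logit proportion

# Fixed-effect step yields Cochran's Q and the I^2 heterogeneity index.
w = 1 / var
theta_fe = np.sum(w * theta) / np.sum(w)
Q = np.sum(w * (theta - theta_fe) ** 2)
df = len(theta) - 1
I2 = max(0.0, (Q - df) / Q) * 100

# DerSimonian-Laird between-study variance, then random-effects pooling.
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_re = 1 / (var + tau2)
theta_re = np.sum(w_re * theta) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))

# Back-transform the pooled logit and its 95% CI to the prevalence scale.
def expit(x):
    return 1 / (1 + np.exp(-x))

lo, hi = theta_re - 1.96 * se_re, theta_re + 1.96 * se_re
print(f"Pooled prevalence: {expit(theta_re):.1%} (95% CI {expit(lo):.1%}-{expit(hi):.1%})")
print(f"Q = {Q:.2f}, I^2 = {I2:.1f}%")
```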
Results Literature search Overall, 375 pertinent documents were retrieved from the database searches (Fig. 1). After initial evaluation of titles and abstracts, 209 articles were excluded; the main reasons for exclusion were duplication, non-original research, animal studies, and the absence of reports on MDR H. influenzae. The full text of 83 papers was then comprehensively evaluated for potential inclusion. Upon further scrutiny, supplemented by manual bibliographic searches, a total of 16 studies met the criteria for inclusion in our systematic review and meta-analysis. The data of these studies are summarized in Table 1 [1, 10, 11, 15-27]. Characteristics of included studies These investigations assessed the prevalence of MDR H. influenzae in different regions: Spain (n = 3), Portugal (n = 1), China (n = 5), Taiwan (n = 2), Bangladesh (n = 1), Iran (n = 1), Japan (n = 1), Ethiopia (n = 1), and Australia (n = 1). The studies span the period from 2003 to 2023. Antibiotic susceptibility of clinical H. influenzae isolates was evaluated using techniques such as disc diffusion, E-test, and broth dilution. The clinical isolates came from a wide range of disorders, including invasive infections, meningitis, acute otitis media (AOM), and acute respiratory infections. Cumulatively, our analysis incorporated data from 19,787 H. influenzae clinical isolates, encompassing both Hib and non-Hib serotypes. Notably, two studies exclusively assessed the AST of Hib serotype infections [16, 19]. Of the total H. influenzae isolates, approximately 46.8 ± 8.9% were identified as the Hib strain. Characteristics of H. influenzae antibiotic resistance In this meta-analysis, the pooled antibiotic resistance rates of H. influenzae were as follows: amoxicillin, 6.3% (95% CI: 2.5–15; I²: 84.33; p = 0.01; Egger's p = 0.01; Begg's p = 0.02); ampicillin, 36% (95% CI: 25.6–48; I²: 94.38; p = 0.01; Egger's p = 0.01; Begg's p = 0.01); azithromycin, 15.3% (95% CI: 6.7–31.1; I²: 89.28; p = 0.01; Egger's p = 0.03; Begg's p = 0.05); ceftriaxone, 1.4% (95% CI: 0.2–10.2; I²: 73.91; p = 0.01; Egger's p = 0.1; Begg's p = 0.4); cefotaxime, 3.6% (95% CI: 1.3–9.5; I²: 80.17; p = 0.01; Egger's p = 0.03; Begg's p = 0.1); cefuroxime, 19.1% (95% CI: 9.7–34.0; I²: 91.14; p = 0.01; Egger's p = 0.03; Begg's p = 0.01); chloramphenicol, 17.2% (95% CI: 10.3–27.1; I²: 90.59; p = 0.01; Egger's p = 0.1; Begg's p = 0.01); ciprofloxacin, 1.7% (95% CI: 0.3–8.8; I²: 88.27; p = 0.01; Egger's p = 0.01; Begg's p = 0.5); levofloxacin, 7.5% (95% CI: 1.9–25.5; I²: 90.78; p = 0.01; Egger's p = 0.09; Begg's p = 0.3); meropenem, 4.3% (95% CI: 0.6–26.0; I²: 92.77; p = 0.01; Egger's p = 0.07; Begg's p = 0.5); rifampin, 8.9% (95% CI: 2.5–27.2; I²: 92.31; p = 0.01; Egger's p = 0.01; Begg's p = 0.3); sulfamethoxazole, 45.6% (95% CI: 34.9–56.7; I²: 92.39; p = 0.01; Egger's p = 0.2; Begg's p = 0.3); and tetracycline, 19.9% (95% CI: 8.3–40.4; I²: 95.3; p = 0.01; Egger's p = 0.08; Begg's p = 0.1). Characteristics of MDR H. influenzae The global prevalence of beta-lactamase-producing H. influenzae and MDR H. influenzae was established at 34.9% (95% CI: 24.0–47.7; I²: 93.56; p = 0.01; Egger's p = 0.01; Begg's p = 0.01) and 23.1% (95% CI: 14.7–34.4; I²: 93.9; p = 0.01; Egger's p = 0.04; Begg's p = 0.01), respectively (Fig. 2).
Furthermore, our data revealed an increasing trend in the prevalence of beta-lactamase-producing H. influenzae, from 22.1% (95% CI: 10.4–40.9) during 2003–2007 to 48.1% (95% CI: 35.5–61.0) during 2019–2023, a more than twofold increase. In contrast, the trend for MDR H. influenzae has remained stable over the past two decades: the pooled prevalence rates were 22.8% (95% CI: 13.0–36.8), 20.8% (95% CI: 16.5–25.9), and 27.8% (95% CI: 11.8–52.5) during 2003–2007, 2008–2012, and 2019–2023, respectively. Analysis of the geographical distribution showed that the prevalence of MDR H. influenzae in Asian countries was significantly higher than in Western regions, at 24.6% (95% CI: 12.9–41.8) versus 15.7% (95% CI: 6.7–32.6). When studies were classified by type of infection, MDR H. influenzae was most pronounced in cases of meningitis, at 46.9% (95% CI: 40.1–53.9), and least prevalent in cases of AOM, at 0.5% (95% CI: 0.0–7.4). The overall prevalence of MDR H. influenzae was 24.1% (95% CI: 12.0–42.5) for invasive infections and 18.2% (95% CI: 6.6–41.1) for acute respiratory infections. In addition, a meta-regression analysis was performed to examine the potential effects of several moderating factors, including publication year, methodology, geographical latitude, and type of infection, on the pooled estimates. The results showed that the year of publication had a discernible impact on the aggregated estimates for MDR H. influenzae infection, as described in Table 2. Publication bias Potential publication bias was assessed using both Begg's and Egger's tests, and asymmetry in the funnel plot was interpreted as indicating significant publication bias. Collectively, these methods confirmed the presence of publication bias in the included studies. The trim-and-fill method was therefore applied to adjust the overall effect estimates, and the adjusted results reinforced the robustness of the pooled estimates (Fig. 3).
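For readers unfamiliar with the asymmetry tests reported above, the following is a minimal sketch of Egger's regression test; the logit-scale effects and standard errors are hypothetical placeholders, not the study-level data of this meta-analysis. A non-zero intercept suggests small-study (funnel-plot) asymmetry:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study logit-prevalence estimates and standard errors.
theta = np.array([-1.20, -0.85, -1.60, -0.95, -1.35, -0.70])
se = np.array([0.35, 0.18, 0.45, 0.22, 0.30, 0.15])

# Egger's test: regress the standardized effect on precision and
# test whether the intercept differs from zero.
std_effect = theta / se
precision = 1 / se
fit = sm.OLS(std_effect, sm.add_constant(precision)).fit()

print(f"Egger intercept = {fit.params[0]:.2f}, p = {fit.pvalues[0]:.3f}")
```

The trim-and-fill step then imputes the studies implied to be missing on the sparse side of the funnel and re-estimates the pooled effect, which is what the adjusted estimates in Fig. 3 report.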
Discussion Our meta-analysis sheds light on the complex global patterns of MDR H. influenzae. To the best of our knowledge, this is the first global meta-analysis of the prevalence of MDR H. influenzae. One notable observation is the higher prevalence in Asian countries (24.6%) compared with Western regions (15.7%). This difference may arise from factors both internal and external to the healthcare system. Previous research on H. influenzae infection in Taiwan identified extensively drug-resistant (XDR) H. influenzae strains as early as 2007 and documented the consistent drug resistance maintained by these strains [1]. Another study, focused on antibiotic susceptibility in Africa from 1978 to 2011, was consistent with our findings, emphasizing the non-susceptibility of H. influenzae isolates to several useful antibiotics; according to their statistics, the rates of non-susceptibility to erythromycin, trimethoprim/sulfamethoxazole, tetracycline, and ampicillin (or penicillin) were 69.8%, 48.1%, 37.5%, and 34.7%, respectively [28]. In addition, the African resistance rates for ampicillin (34.7%), ceftriaxone (0.9%), cefotaxime (2.6%), and trimethoprim/sulfamethoxazole (48.1%) were broadly similar to our results in the present meta-analysis. The number of isolates and the geographical region studied are the two main reasons for differences between studies. Some antibiotic-resistant bacteria, such as sulfonamide-resistant Streptococcus pyogenes and penicillin-resistant Staphylococcus aureus, have primarily been linked to hospitals, settings with high antibiotic consumption [29]. Such environments may serve as centers for the development and spread of drug resistance. Several factors, such as antibiotic prescribing habits, access to health care, and population density, influence resistance patterns [30]. Notably, there is significant variation in the prevalence of MDR H. influenzae among different infections: the occurrence in meningitis (46.9%) versus AOM (0.5%) may be attributed to differences in bacterial pathogenicity, antibiotic utilization, laboratory diagnostic techniques, and host response, which collectively contribute to the observed gap [31]. Adding to the problem, certain regions such as Asia and Central/Southern Europe have reported significantly lower incidence rates compared with other global regions. Interestingly, a stepwise logistic regression analysis from a study of 2091 H. influenzae isolates with disc diffusion-based AST elucidated specific demographic patterns, showing that male patients were less likely to harbor MDR H. influenzae strains [1]. Despite our focus on MDR H. influenzae, antimicrobial prescriptions are, in many cases, made without knowledge of the causative organism. Vancomycin plus cefotaxime or ceftriaxone is the standard empirical antibiotic therapy for bacterial meningitis in children and newborns, while azithromycin and clarithromycin are alternative treatments for patients with AOM who have a penicillin allergy [12, 32]. In our study, the prevalence of azithromycin-resistant H. influenzae was 15.3%. Given our initial concern about multidrug resistance in H. influenzae, it is important to monitor changes in resistance patterns against a broader range of antibiotics.
The similar prevalence of MDR H. influenzae in invasive infections (24.1%) and acute respiratory infections (18.2%) indicates a comparable degree of antibiotic resistance spread in these categories, which requires equal attention. After meningitis, childhood pneumonia and bacteremia are the most common diseases caused by Hib strains, and pneumonia is particularly dominant in developing countries [33]. Seasonality may also play a role in the prevalence of antimicrobial resistance. In a systematic review and meta-analysis, Martinez et al. observed stable antimicrobial resistance rates for S. pneumoniae in colder months [34]. This observation aligns with our findings and suggests that respiratory infections, which are more common in colder seasons, maintain consistent antibiotic resistance. In our meta-analysis, the global prevalence of beta-lactamase-producing H. influenzae was established at 34.9%, which should be considered a serious concern. This enzyme confers resistance against a variety of penicillin-based drugs by hydrolyzing their beta-lactam ring. Our findings are in line with a systematic review and meta-analysis conducted by Mather et al., in which resistance in Gram-negative bacteria, including H. influenzae, was frequently reported in terms of beta-lactamase production [35]. A particularly alarming observation from another study, by Ginsburg et al., is the increasing trend of beta-lactamase production among Hib isolates, indicating a new concern for the African continent [28]. In addition, we showed that the prevalence of beta-lactamase-producing H. influenzae is increasing, from 22.1% in 2003–2007 to 48.1% in 2019–2023. This surge underscores the heightened clinical reliance on beta-lactam antibiotics for treating H. influenzae-mediated infections. Ceftriaxone, cefotaxime, or cefuroxime are suggested for the treatment of pneumonia and bacteremia caused by beta-lactamase-producing H. influenzae strains, whereas ampicillin is suggested for beta-lactamase-negative strains [36]. Based on our results, the prevalence of resistance to ceftriaxone, cefotaxime, cefuroxime, and ampicillin was 1.4%, 4.1%, 19.1%, and 36%, respectively. In another meta-analysis, conducted by Vaez et al., the prevalence of H. influenzae strains resistant to these antibiotics was estimated at 33.1%, 22.3%, 13.7%, and 54.8%, respectively [37]. Despite the modest increase in the pooled prevalence of both beta-lactamase-producing H. influenzae and MDR H. influenzae, it is clear that we stand on precarious ground. It seems necessary to formulate and implement an antibiotic stewardship strategy to forestall the emergence of XDR H. influenzae strains. Although the present study is comprehensive, it is not without limitations: the observed publication bias, despite adjustments, may still affect the final pooled estimates. Furthermore, meta-analysis relies on published data, which may not represent unpublished studies or gray literature, potentially leading to over- or underestimation of the true prevalence. In addition, variation among the included studies in methodology, sample size, and demographic distribution can introduce heterogeneity into the results. On the other hand, the strengths of this study lie in its expansive scope, detailed methodological approach, and incorporation of a broad range of geographies and infection types. We believe this study offers a robust overview of the global landscape of MDR H. influenzae, serving as a pivotal resource for clinicians, researchers, and policymakers.
Finally, while this meta-analysis offers important insights into the prevalence of MDR H. influenzae across geographies and infection types, continued vigilance and updated research are essential to track, understand, and mitigate the spread of antibiotic resistance globally.
Conclusions The global health community faces a daunting challenge in antibiotic resistance, with H. influenzae at the forefront. Our comprehensive meta-analysis shows an alarming increase in resistance, especially for beta-lactamase-producing strains, whose prevalence more than doubled between 2003 and 2023. Although the rate of MDR H. influenzae has remained relatively stable over the past two decades, its continued prevalence is particularly concerning in cases of meningitis. Our results also indicate a higher prevalence of MDR H. influenzae in Asian countries compared with Western countries. Countermeasures include implementing antibiotic stewardship programs, using antibiotics appropriately, running public awareness campaigns, and pursuing research into new treatments.
Background In recent decades, the prevalence of antibiotic resistance in Haemophilus influenzae (H. influenzae) has been increasing, posing important challenges to global health. This research offers a comprehensive meta-analysis of the global epidemiology of multidrug-resistant (MDR) H. influenzae. Methods We conducted a meta-analysis following the PRISMA checklist. Electronic databases, including PubMed, ISI Web of Science, Scopus, EMBASE, and Google Scholar, were searched using keywords related to H. influenzae and antibiotic resistance. Eligible studies were selected based on stringent inclusion and exclusion criteria, and data from these studies were analyzed using the Comprehensive Meta-Analysis (CMA) software. Results Of 375 retrieved articles, 16 met the inclusion criteria. These studies were conducted from 2003 to 2023 and analyzed data from 19,787 clinical isolates of H. influenzae. The results showed varying levels of resistance of H. influenzae to different antibiotics: ampicillin (36%), azithromycin (15.3%), ceftriaxone (1.4%), etc. The global prevalence of beta-lactamase-producing H. influenzae and MDR H. influenzae was measured at 34.9% and 23.1%, respectively. The prevalence of MDR H. influenzae was higher in Asian countries (24.6%) than in Western regions (15.7%). MDR H. influenzae had the highest prevalence in meningitis cases (46.9%) and the lowest in acute otitis media (0.5%). Conclusions The prevalence of beta-lactamase-producing H. influenzae has been increasing worldwide, and MDR H. influenzae remains highly prevalent, especially in Asian regions. This highlights the urgent need for monitoring and implementation of effective antibiotic stewardship programs globally. Supplementary Information The online version contains supplementary material available at 10.1186/s12879-023-08930-5.
Abbreviations H. influenzae: Haemophilus influenzae; MDR: multidrug-resistant; MDR H. influenzae: multidrug-resistant H. influenzae; PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses; JBI: Joanna Briggs Institute; CMA: Comprehensive Meta-Analysis; CI: confidence interval; OR: odds ratio. Acknowledgements We appreciate the support of both Jiroft University of Medical Sciences and Iranshahr University of Medical Sciences. Authors' contributions MA contributed to the design of the work and the analysis of data. MA and MK1 drafted the work and substantively revised it. MA and MK2 reviewed and revised the draft manuscript. All authors read and approved the final manuscript. We confirm that all authors contributed to all stages of the manuscript, including literature search, data extraction, and quality assessment. Funding We received a grant from Jiroft University of Medical Sciences. Grant ID: IR.JMU.REC.1402.064. Availability of data and materials All data generated or analyzed during this study are included in this published article. Declarations Ethics approval and consent to participate Not applicable (this paper was based on research in global databases). Consent for publication Not applicable. Competing interests The authors declare no competing interests.
Background The role of educational institutions in developing human capital is crucial for the progress of any nation [1]. The academic staff within a university play a vital role, and the number, quality, and effectiveness of faculty members greatly impact the quality of education provided [2]. It is widely recognized that the success of any organization is closely tied to the abilities and contributions of its employees [3]. The employment landscape in the education sector has become increasingly competitive, with institutions striving to maintain their reputation and gain a strategic advantage [4]. With the rise in job opportunities in higher education, retaining competent faculty members has become essential, and failing to retain employees can have severe repercussions for an organization [5]. Employee turnover has become a chronic issue across organizations of different types and sizes, and numerous studies emphasize the importance of retaining talented individuals [6]. In the field of education, replacing human capital, particularly in universities, is an expensive endeavor; therefore, universities and governments must promptly and earnestly address talent turnover [7]. While satisfaction is a well-studied concept, colleges continue to face the challenge of motivating and satisfying their faculty members [8, 9]. Career motivation is considered one of the key individual factors affecting the quality of work life. It is therefore important to improve the quality of work life by fostering an environment that respects employees, encourages their active participation in decision-making processes, addresses their needs, and seeks to build trust with officials [10]. Few studies have focused on faculty members in the medical sciences, who often encounter issues such as overcrowded classrooms, time pressure, and increased workload [11, 12]. Academic faculty members are national assets, and understanding their intention to leave their positions is of utmost importance. The departure of experienced faculty members poses serious problems for universities, particularly regarding the quality of the educational and research services they provide [13]. In today's academic landscape, faculty members bear significant responsibilities in education, research, therapeutic services, executive activities, and personal development [14]. Balancing multiple roles within a university, alongside external pressures from both the organization and the community, can significantly influence their perceived work-life balance satisfaction; this, in turn, affects their job satisfaction and their intention to leave the organization [15]. On one hand, faculty satisfaction relies on the levels of satisfaction experienced by students, colleagues, and administrators [16]. On the other hand, Weale et al. (2019) suggested that faculty satisfaction is influenced by both work and non-work aspects of their lives [17]. Job satisfaction is a critical factor in motivating faculty members, as it reflects their personal contentment and fulfillment within their roles [12]. Building on the work of Kalkins et al. (2019), satisfaction levels strongly predict the intention of faculty members to leave academia, and the intent to leave one's position is a significant predictor of the intent to leave one's institution [18]. Turnover intention refers to an employee's intent to voluntarily leave their job or organization [19, 20].
Voluntary turnover is associated with decreased individual performance and increased costs for organizations [11]. Johnsrud (1996) proposed that faculty work-life can be influenced by professional priorities, perceived institutional support, and erosion of quality of life over the course of a career. Addressing these factors can improve the overall climate and culture within academic institutions [21], ultimately affecting faculty morale and the likelihood of faculty leaving their positions or careers [22]. The multiple workplace roles undertaken by university academics, coupled with pressures from the organization and the community, are often considered significant factors affecting perceived work-life balance satisfaction; this, in turn, influences overall occupational attitudes, including job satisfaction, organizational commitment, and intention to leave the organization [23]. Relatively few studies have examined the intent or inclination of faculty members in the medical sciences to leave their current positions [24, 25]. Some studies address this issue indirectly by exploring faculty intent to stay, either at their current institution or within public colleges in general [25, 26]. Despite the fundamental importance of faculty retention, there is limited understanding of the factors related to satisfaction, professional work-life, and institutional work-life that can explain faculty members' intentions to leave at a national level. This study therefore investigated the intention to leave among academics and their work-life quality and satisfaction, addressing the following questions: (1) What are academic members' perceptions of work-life quality and satisfaction? (2) What is the role of various individual, social, and occupational characteristics in academic members' perceptions of work-life quality and satisfaction? (3) What is the role of individual, social, and occupational characteristics, and of work-life quality and satisfaction, in shaping faculty members' intentions to leave?
Methods Design and participants The current study is a cross-sectional descriptive study. All faculties affiliated with Urmia University of Medical Sciences, including Nursing and Midwifery, Health, Medicine, Dentistry, Pharmacy, and Health Management & Medical Information, were included in the study. From these faculties, faculty members who had at least one year of teaching experience at the university, held at least a master's degree, and were willing to complete the study instruments were enrolled. Those who worked part-time or hourly were not included. From March to June 2022, eligible faculty members were approached to participate in the study. The sample size (n = 115) was calculated in G*Power based on the point-biserial correlation between the main outcomes, satisfaction and intention to leave, with power = 0.80 and α = 0.05, using the correlation reported in a similar study [27]. Allowing for a 20% attrition rate, the sample size was increased to at least 138 participants. Participants were selected through stratified random sampling, with the sample size in each stratum proportionate to the number of faculty members in the colleges. Data collection Survey instruments The data collection instruments included three main parts. The first part covered socio-demographic characteristics, including age, sex, degree, academic rank, full-time status, discipline, and teaching hours at the undergraduate, master's, and PhD levels. Questionnaires that were incomplete by 10% or more were excluded from the study. Of the 145 questionnaires distributed, 120 were valid after discarding distorted or incomplete ones, yielding a response rate of 82.75%. Work-life quality and satisfaction scale The work-life quality and satisfaction scale used in this study was developed by Rosser in an institutional climate study [27, 28]. Rosser used the National Study of Postsecondary Faculty (NSOPF) database, from a survey sponsored by the National Center for Educational Statistics and the National Science Foundation, to measure the various issues and topics concerning the quality of faculty members' professional and institutional work-life in higher education institutions, and conceptualized how faculty members' individual-level perceptions of work-life quality and satisfaction shape their intent to leave [27]. The scale consists of 23 items in two sections and measures two related constructs: work-life quality and satisfaction. The items are measured on a 6-point scale (1 – Strongly Disagree, 2 – Disagree, 3 – Slightly Disagree, 4 – Slightly Agree, 5 – Agree, and 6 – Strongly Agree). The work-life quality section comprises three dimensions: professional development (alpha = 0.87), administrative support (alpha = 0.91), and technology support (alpha = 0.88). Respondents rated statements regarding the quality of their professional and institutional work-life on a scale of 1–6, indicating poor to excellent [27]. The satisfaction section is measured by three interrelated dimensions: advising and course workload, benefits and security, and overall satisfaction. The first dimension, satisfaction with advising and course workload, has five statements (alpha = 0.97). The second dimension, with six items, focuses on benefits and security (alpha = 0.76).
Faculty members were also asked to self-report their overall level of satisfaction on a scale of 1–6, with 1 indicating very dissatisfied and 6 indicating very satisfied [27]. Intention to leave The intention to leave scale used in this study was designed by Rosser and Johnsrud in a study that conceptualized the effects of work environment variables and morale on the intention to leave [22]. The scale consists of four items, which ask faculty how likely they are to leave their current position, their current institution, the teaching profession, and higher education. Items were measured on a 6-point scale (1 – Highly Unlikely, 2 – Unlikely, 3 – Somewhat Unlikely, 4 – Somewhat Likely, 5 – Likely, and 6 – Highly Likely), where higher scores reflect a greater intent to leave. Statistical analyses IBM SPSS Statistics software (version 20) (IBM SPSS Statistics, IBM Corp, Armonk, USA) was used to analyze the data at an alpha level of 0.05. Socio-demographic characteristics were summarized using frequency (percentage) and mean (standard deviation) for categorical and numeric variables, respectively. Independent t-tests, one-way ANOVA followed by Tukey post hoc tests, and Pearson's correlation were used to investigate differences in intention to leave scores across demographic characteristics. Uni- and multivariable linear regression analyses were employed to determine predictors of the intention to leave, with variables found to be significant in the univariable models (P-values < 0.05) included as independent variables in the multivariable model. Categorical variables were coded into dummy variables prior to regression analysis. Assessment of skewness (within ± 1.5) and kurtosis (within ± 2) indicated that the intention to leave data adhered to a normal distribution. The validity of the regression analysis was ensured by verifying assumptions, including the normality of residuals, homoscedasticity, and linearity of the variable relationships, which were confirmed [29]. Ethical considerations The present study was approved by the National Agency for Strategic Research in Medical Education, Tehran, Iran (code: 990,295). The study followed accepted ethical standards, as outlined in the Declaration of Helsinki. Participants received a detailed explanation of the study purpose and the voluntary nature of participation, and signed a written informed consent form before completing the self-report questionnaires.
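As an illustration of the modeling workflow described above (univariable screening followed by a multivariable model with dummy-coded discipline), the following is a minimal sketch in Python with hypothetical data; the variable names and values are placeholders, not the study dataset, and the original analysis was run in SPSS:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame; columns and values are illustrative placeholders.
df = pd.DataFrame({
    "intent":      [2.5, 3.8, 1.9, 4.2, 2.2, 3.1, 2.8, 3.5],
    "work_exp":    [15, 4, 22, 3, 18, 9, 12, 6],
    "quality_sat": [3.9, 3.2, 4.1, 3.0, 3.8, 3.5, 3.6, 3.3],
    "discipline":  ["Nursing", "Midwifery", "Medicine", "Nursing",
                    "Medicine", "Midwifery", "Nursing", "Medicine"],
})

# C(..., Treatment(reference="Nursing")) dummy-codes discipline with
# nursing as the reference category, mirroring the reported contrasts.
terms = ["work_exp", "quality_sat",
         "C(discipline, Treatment(reference='Nursing'))"]

# Univariable screen: keep terms with any non-intercept coefficient at p < 0.05.
kept = []
for term in terms:
    uni = smf.ols(f"intent ~ {term}", data=df).fit()
    if (uni.pvalues.drop("Intercept") < 0.05).any():
        kept.append(term)

# Multivariable model on the retained predictors.
if kept:
    multi = smf.ols("intent ~ " + " + ".join(kept), data=df).fit()
    print(multi.summary())
```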
Results Table 1 presents the categorical socio-demographic characteristics and their associations with work-life quality and satisfaction and with intention to leave. The results indicated that married faculty members (M = 3.70, SD = 0.50) had slightly higher work-life quality and satisfaction than single faculty members (M = 3.47, SD = 0.43). There was no significant difference between subjects according to gender, degree, or involvement in education. However, faculty members who were more involved in clinical teaching (M = 3.56, SD = 0.42) were significantly less satisfied with work-life quality than those who were less involved (M = 3.83, SD = 0.58). There was no significant relationship between full-time status or holding an administrative position and the work-life quality and satisfaction of faculty members. Regarding discipline, there was no significant difference between disciplines in terms of work-life quality and satisfaction; descriptively, however, faculty members from the Midwifery (M = 3.59, SD = 0.21), Medicine (M = 3.66, SD = 0.61), and Nursing (M = 3.66, SD = 0.36) disciplines had lower work-life quality and satisfaction than those from the Allied Health Professions (M = 3.67, SD = 0.55) and Health Management and Medical Information (M = 3.71, SD = 0.46) disciplines. Table 1 also shows that nursing faculty members (M = 3.44, SD = 1.55) had a relatively higher intention to leave than faculty members of other disciplines. The remaining categorical socio-demographic variables did not show a significant relationship with the intention to leave. In addition, the results indicated an inverse relationship of research hours per week (r = -0.21, p < 0.05) and undergraduate teaching hours (r = -0.19, p < 0.05) with work-life quality and satisfaction. The variables of age (r = -0.31, p < 0.05), full-time duration (r = -0.20, p < 0.05), and work experience (r = -0.31, p < 0.05) correlated inversely with intention to leave, whereas hours spent on research per week (r = 0.18, p < 0.05) showed a direct and significant correlation with intention to leave. As shown in Table 2, the mean scores of all dimensions of work-life quality and satisfaction and of intention to leave were higher than 2.73 (out of 6). At the dimension level, faculty members were most satisfied with "technology support" on the work-life quality scale (M = 3.99, SD = 0.86), and the average score for overall satisfaction (M = 4.90, SD = 0.89) was higher than the scores for "advising and course workload" and "benefits and security". Satisfaction was lowest for the "benefits and security" dimension (3.27 ± 0.54). On the intention to leave scale, the highest score was for the item "How likely are you to leave your current position?" (2.86 ± 1.46). There was a statistically significant inverse relationship between the mean scores of all work-life quality and satisfaction dimensions and intention to leave, except for the "advising and course workload" dimension (p < 0.05). There was also a statistically significant direct relationship between the total scores of work-life quality and satisfaction (p < 0.001) (Table 3). The results of the univariable linear regression analysis indicated that all Work-Life Quality subscale scores, as well as the total score, were negatively associated with the intention to leave (all P-values < 0.05).
Likewise, within the satisfaction subscales, benefits and security, overall satisfaction, and the total score displayed an inverse relationship with the intention to leave (all P-values < 0.05). Among demographic and background variables, age, work experience, and full-time years demonstrated a negative association with the intention to leave (all P-values < 0.05). Conversely, the number of hours faculty members spent on research exhibited a positive correlation with the intention to leave (P-value < 0.05). Additionally, compared with nursing faculty members as the reference category, Medicine, Midwifery, and Allied Health Professions faculty exhibited a lower intention to leave (all P-values < 0.05) (Table 4). In the multivariable analysis, the relationships of the work-life quality and satisfaction subscales and their total scores with the intention to leave were statistically non-significant (all P-values > 0.05). However, work experience and discipline emerged as independent predictors of the intention to leave (both P-values < 0.05). Specifically, work experience was negatively associated with the intention to leave, each year of experience being linked to a 10% decrease in the intention to leave score. Furthermore, faculty members in the midwifery discipline displayed intention to leave scores approximately 1.3 points lower than those of nursing faculty members (Table 4).
Discussion This study was conducted to investigate the work-life quality and satisfaction of faculty members of Urmia University of Medical Sciences and their relationship with intention to leave. The results revealed that the overall quality of work life was at an average level, with the highest scores observed in the "Technology Support" dimension. It must be acknowledged that, in today's digital era, the Internet has emerged as a crucial tool for research development and for enhancing the efficiency of academic staff members in universities [30]. Without the Internet for tasks such as education, research, and consultation responsibilities, academic staff members encounter numerous challenges. Moreover, with the outbreak of the COVID-19 pandemic and the consequent shift to virtual classes, the necessary technological support has been provided to professors, ensuring their active participation in online teaching [31]. Previous studies have also reported similar findings, indicating an average level of work-life quality [32–35]. Furthermore, a meta-analysis of domestic articles conducted by Shakibaei (2015) showed that the average quality of work life score among academic staff members in higher education institutions is at a medium level [36]. However, studies conducted by Farrukhnejad (2012), Mirkamali and Thani (2011), and Noorshahi and Samiei (2023) reported an unfavorable level of quality of work life [37–39]. Other studies highlighted that faculty members experienced a lower quality of work life than other university employees, often citing unfavorable working conditions, lack of control and participation in decision-making, and low organizational commitment as contributing factors [40, 41]. Additionally, Bakhshi et al. found a direct relationship between academic staff members' educational level and their perception of quality of work life [33]. It is important to note that the inconsistencies observed across studies could potentially be attributed to variations in the populations studied [40, 42] and the instruments used for assessment [43–45]. Based on the findings of this study, the average job satisfaction scores fell within the moderate range. Notably, the highest satisfaction scores were reported in the 'Overall satisfaction' dimension, while the lowest were observed in the 'Benefits and security' dimension. In terms of marital status and involvement in clinical teaching, married faculty members without clinical education responsibilities exhibited higher levels of satisfaction than their unmarried peers involved in clinical education. These findings align with previous studies, which have also reported medium levels of job satisfaction [32, 34, 37, 38]. Noorshahi and Farastkhah's (2012) study identified various factors that contribute to faculty members' job satisfaction, including satisfaction with salaries and wages, the work environment, job security, job prestige and dignity, and facilities and resources. That study revealed that faculty members reported moderate to high satisfaction with job prestige and dignity, whereas satisfaction with salaries and wages, the work environment, and job security was moderate to low [30].
A positive work environment characterized by independence, role clarity, and community impact fosters higher job satisfaction, whereas dissatisfaction with salaries, weak leadership, and excessive pressure to produce scientific articles can decrease it. Job satisfaction is significantly linked to traditional academic values such as a focus on quality, inclusion in decision-making processes, unwavering commitment to work, and recognition of faculty members [40]. Additionally, faculty members' perception of organizational support enhances job satisfaction [46]. In Moloantoa's study, salary did not significantly impact job satisfaction; rather, dissatisfaction stemmed from insufficient benefits, inadequate support for teaching, learning, and research, lack of resources, and subpar university management [47]. A study conducted by Ferron (2017) found that when nursing department managers actively supported academic professionals, recognized their efforts, and ensured fair work procedures, nurses' job satisfaction increased [48]. There is a statistically significant relationship between job satisfaction and the quality of work life [34, 49]. One crucial aspect of the quality of work life is work-life balance [50]. Numerous studies worldwide have reported a positive association between job satisfaction, the quality of work life, and work-life balance [51–58]. A study by Kim (2023) revealed that the high stress experienced by faculty members in Thailand mediates the relationship between high workload and job satisfaction [59]. The average score for the intention to leave was in the medium range. The item 'How likely are you to leave your current position?' scored the highest, and nursing faculty members exhibited a relatively higher tendency to consider leaving. The intention to leave correlated directly with the number of research hours and inversely with work experience, full-time employment, and age. Aboudahab's study of private universities in Egypt revealed that common factors contributing to the intention to leave included poor talent management, high workload and anxiety, poor communication between faculty members and managers, lack of recognition and appreciation, and work-family imbalance [60]. Ferron (2017) found that intention to leave increased with age among nursing faculty, as well as with part-time employment; conversely, more work experience decreased the intention to leave and increased the desire to remain in nursing schools [48]. The low level of job satisfaction among faculty members is a warning sign, as it increases the likelihood of leaving a job if greater satisfaction is found elsewhere [60]. There was a statistically significant inverse relationship between the mean scores of all dimensions of work-life quality and satisfaction and the intention to leave, except for the "Advising and course workload" dimension. In the same vein, other studies have demonstrated that decreases in the quality of work life and job satisfaction lead to an intention to leave [24, 46, 57, 58, 60–66]. Job satisfaction plays a pivotal role in the retention of faculty members in universities [67]. In Rezaee's study (2019) among Iranian doctors, a significant inverse relationship was found between the quality of work life and the intention to leave; when the quality of work life improves, the intention to leave decreases and employee satisfaction increases [68].
Therefore, organizations can provide personal and social support to make employees feel valued and proud [66].
Conclusion, implications, and recommendations The findings of this study highlighted that faculty members' work-life quality and satisfaction and their intention to leave were at an average level. There was a negative correlation between the work-life quality and satisfaction subscales, along with demographic factors, and the intention to leave, while work experience and discipline were significant independent predictors of intention to leave. These results emphasize the need to prioritize and improve the conditions that foster job satisfaction in academia, as it plays a vital role in training the next generation and advancing education in universities. Of particular concern is the high intention to leave among nursing lecturers, which signifies the immense work pressure they face. Without proper support from nursing schools in terms of human resources, there is a risk of a decline in the nursing workforce as increasing numbers of faculty members leave their positions, which could ultimately reduce the quality of undergraduate nursing education in the long run. These findings offer valuable insights for academic institutions, highlighting the importance of fostering a supportive work environment and retaining faculty members. By addressing the factors influencing job satisfaction and intention to leave, institutions can enhance the overall satisfaction of their faculty members and promote longevity in their academic careers. Considering these results, future research should examine additional variables and interventions to further augment faculty satisfaction and mitigate the intention to leave within the academic setting. Such efforts can contribute to the overall improvement of the academic environment, ensuring quality education and sustained academic excellence in the years to come. Given the high rate of intention to leave among nursing faculty members, qualitative studies are advisable to explore the nature of the quality of work life, job satisfaction, and the underlying causes of the intention to leave within this group. Additionally, experimental studies could investigate the effects of organization-oriented interventions aimed at enhancing the quality of work life and job satisfaction and reducing the intention to leave. Limitations This study was conducted among the faculty members of a single institution, Urmia University of Medical Sciences, with a relatively small sample, so its results cannot be generalized to other universities. Similar studies in other medical sciences universities, with larger and more diverse populations, are therefore recommended.
Background Despite the importance of faculty retention, there is little understanding of how demographic variables, professional and institutional work-life issues, and satisfaction interact to explain faculty intentions to leave. This study aimed to investigate the intention to leave among academics and its relationship with their work-life quality and satisfaction. Methods This descriptive cross-sectional study was conducted in eight faculties affiliated with Urmia University of Medical Sciences, located in Urmia, West Azarbaijan province, Iran. The participants were 120 faculty members from the Nursing and Midwifery, Medicine, Allied Health Professions, and Health Management and Medical Information faculties. The Work-Life Quality and Satisfaction scale and the intention to leave scale were used for data collection. Uni- and multivariable linear regression analyses were employed to determine predictors of the intention to leave (P-values < 0.05). Results The mean scores of all dimensions of the Work-Life Quality and Satisfaction scale and of intention to leave were at an average level. There was a negative correlation between the Work-Life Quality and Satisfaction subscales, along with demographic factors, and the intention to leave (P < 0.05), while multivariable analysis showed that work experience and discipline were significant independent predictors of intention to leave (P < 0.05). Conclusions To improve education in universities, attention must be paid to the conditions that create job satisfaction among academics. Given the high intention to leave among nursing lecturers, without sufficient support for nursing schools in terms of human resources, these schools may suffer from a lack of academic staff, and the quality of undergraduate nursing education will eventually decline in the long term.
Abbreviations SD: Standard deviation; M: Mean; SPSS: Statistical Package for the Social Sciences Acknowledgements This project was funded by the National Agency for Strategic Research in Medical Education, Tehran, Iran (Grant No. 990295). We also thank all the faculty members who participated in the study and the managers of Urmia University of Medical Sciences. Authors’ contributions AG, AH, AR, and AN conceived the idea and designed the study. AR, AN, AH, and AG performed the pilot study to validate the data gathering tools. AG and MAJ performed the statistical analysis and interpretation. PA performed data collection. AG, FB, and PA drafted the manuscript. MAJ, PA, FB, and AG reviewed the manuscript. MAJ critically revised the manuscript. AG reviewed and revised the idea and study design and received the grant. All authors have read and approved the final manuscript. Funding This project was funded by the National Agency for Strategic Research in Medical Education, Tehran, Iran (Grant No. 990295). We thank the funder and all faculty members who participated in the study. Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author upon reasonable request. Declarations Ethics approval and consent to participate This study was approved by the National Agency for Strategic Research in Medical Education, Tehran, Iran (ethics code: 990295). The study followed accepted ethical standards, as outlined in the Declaration of Helsinki. From March to June 2022, eligible faculty members were approached to participate in the study. Prior to their involvement, the purpose of the study and instructions for completing the questionnaire were explained to them, and informed consent was obtained from all participants. To ensure confidentiality, the survey was conducted anonymously, safeguarding the privacy of the respondents. Consent to participate Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Nurs. 2024 Jan 15; 23:43
oa_package/b9/31/PMC10789055.tar.gz
PMC10789056
0
Introduction The relationship between salt and blood pressure (BP) has been characterized as a nuanced equilibrium, and the precise nature of the dose–response association remains a subject of debate [1–3]. Extensive evidence indicates that a diet high in salt leads to increased BP, a prominent risk factor for cardiovascular disease (CVD) [4]. Global mortality data on CVD reveal that exceeding the recommended salt intake accounts for approximately 1.65 million deaths each year [5]. Nevertheless, a minority of researchers have posited that the advantages of sodium restriction for individuals with normal BP are modest and that restriction may potentially raise blood lipid levels and mortality risk [6]. Notably, China's daily salt consumption surpasses 12 g [7], exceeding the limit of 5 g per day recommended by the World Health Organization (WHO) [8]. In China, salt restriction spoons (SRS) are recommended as part of interventions aimed at reducing salt consumption among hypertensive individuals [9]. SRS are available in various sizes and shapes (e.g., 2, 3, and 6 g), enabling users to regulate their salt intake by estimating the required amount [10]. Nevertheless, there is a dearth of epidemiological data concerning the correlations between salt consumption, the use of SRS, and hypertension status in patients with poorly controlled hypertension in China. The most precise method for assessing salt intake is the collection of urine over a 24-h period, despite the inconvenience and impracticality of this approach in population-based epidemiological surveys [11]. The burden placed on participants has led to the exploration of alternative, more manageable methods for estimating 24-h urinary sodium excretion from spot urine samples, of which the Kawasaki [12], INTERSALT [13], and Tanaka [14] methods are commonly employed. Moreover, the WHO advocates spot urine techniques as a viable approach to estimating salt consumption in developing countries [8]. Accordingly, our community-based epidemiological field study utilized three spot urine methods, including the Tanaka method, which has been endorsed by the Japanese Society of Hypertension in its clinical practice guidelines despite known deviations from measured salt intake [15]. The primary objectives of this research were to assess salt intake levels and explore potential correlations between sodium consumption and various factors, with a particular focus on hypertension status. We conducted additional analysis to examine the non-linear association between salt intake and BP. It should be noted that our restricted cubic spline regression model may not yield accurate estimates when spot urine methods are utilized. This information can be used to estimate the salt intake of individuals with uncontrolled hypertension, thereby facilitating efforts to promote salt reduction in China.
Materials and methods Participants A cross-sectional study was carried out in twenty communities/villages spanning ten streets/towns. A total of 1215 patients aged 35–75 years with primary, poorly controlled hypertension, based on two or more consecutive measurements, were recruited for follow-up from January to August 2021. These patients were referred by their primary care physicians for the management of uncontrolled hypertension and had been taking antihypertensive medications for at least one year. To avoid the influence of specific comorbid conditions, individuals with additional organic cardiopulmonary vascular diseases, secondary hypertension caused by systemic diseases, psychiatric disorders, or intellectual disability were excluded from the study. Participants were asked to complete a standardized questionnaire, undergo a physical examination, and undergo laboratory testing. The research protocol received approval from the Ethical Review Committee of the Huzhou Center for Disease Control and Prevention. Standardized questionnaire Trained data collectors administered a questionnaire to patients, gathering information on their (1) socio-demographic characteristics; (2) smoking and alcohol habits and physical activity levels; (3) current use of antihypertensive medications; and (4) knowledge, attitudes, and behaviors regarding salt consumption and eating habits. Some of the questions were selected based on prior research [16]. Individuals who had smoked more than 100 cigarettes in the past but no longer smoked were considered former smokers, while current smokers were those who had smoked at least one cigarette daily for six consecutive months. Alcohol consumption was defined as drinking at least once per week in the previous year; individuals who did not meet this criterion were considered nondrinkers. Physical activity was defined as ≥ 150 min per week of moderate-intensity exercise or a combination of moderate- and high-intensity exercise, or ≥ 75 min per week of high-intensity exercise [16]. Physical measurements Height, weight, and BP were measured following standardized protocols. To ensure consistency, height and weight were measured for every participant with the Huachao Hi-Tech comprehensive height scale, with a precision of 0.1 cm for height and 0.1 kg for weight. BP was measured using the Omron HBP-1300 electronic sphygmomanometer, with an accuracy of 1 mmHg, following the BP measurement method recommended in the Chinese Guidelines for the Prevention and Treatment of Hypertension (2018 edition) [9]. Patients were asked to sit for at least 5 min in a quiet room before BP measurement and to keep the upper arm at heart level. After computation of the body mass index (BMI), the resulting values were categorized according to Chinese guidelines as underweight (BMI < 18.5), normal weight (18.5 ≤ BMI < 24), overweight (24 ≤ BMI < 28), or obese (BMI ≥ 28) [17]. Additionally, hypertension was classified into three distinct groups based on the BP levels recorded during this period, following the global guidelines established by the International Society of Hypertension in 2020 [18]. The first is the normal blood pressure group (Normal BP), defined by a BP reading below 140/90 mmHg.
Additionally, there is the grade 1 hypertension group, characterized by a systolic blood pressure (SBP) of 140–159 mmHg and/or a diastolic blood pressure (DBP) of 90–99 mmHg. Lastly, there is the grade 2 hypertension group, characterized by an SBP of 160 mmHg or higher and/or a DBP of 100 mmHg or higher. Collection of urine and laboratory analysis Respondents were given a 10-mL standard urine collection container, and fasting spot urine samples were obtained in the early morning after the first void; all participants gave their informed consent by signing the necessary document. The urine samples were deposited in 4 °C cooler containers and sent within 24 h to the central laboratory, where immediate analyses were conducted. A C501 automatic biochemical analyzer (Roche) was used to measure urinary creatinine concentration by the enzymatic method. Estimation of 24-h sodium excretion from spot urine samples The Kawasaki formula [12] is as follows: Pr24hNa = 16.3 × √[SuNa / (SuCr × 10) × Pr24hCr], where Pr24hCr for men = 15.12 × W + 7.39 × H − 12.63 × Y − 79.9 and for women = 8.58 × W + 5.09 × H − 4.72 × Y − 74.95. The INTERSALT formula [13] is given as: for men, Pr24hNa = 25.46 + 0.46 × SuNa − 2.75 × SuCr′ − 0.13 × SuK + 4.10 × BMI + 0.26 × Y; for women, Pr24hNa = 5.07 + 0.34 × SuNa − 2.16 × SuCr′ − 0.09 × SuK + 2.39 × BMI + 2.35 × Y − 0.03 × Y², where SuCr′ is the spot urine creatinine expressed in mmol/L. The Tanaka formula [14] is given as: Pr24hNa = 21.98 × [SuNa / (SuCr × 10) × Pr24hCr]^0.392, where Pr24hCr = 14.89 × W + 16.14 × H − 2.04 × Y − 2244.45. In the above formulas: Pr24hNa is the predicted 24-h urinary sodium excretion (mmol/day); Pr24hCr is the predicted 24-h urinary creatinine excretion (mg/day); SuNa is the spot urine sodium (mmol/L); SuK is the spot urine potassium (mmol/L); SuCr is the spot urine creatinine (mg/dL); W is weight (kg); H is height (cm); Y is age (years); BMI is body mass index (kg/m2). Urinary sodium excretion values (mmol/day) were converted into salt intake values (g/day) by dividing by 17.1, since 1 g of salt contains approximately 17.1 mmol of sodium [19]. Statistical analysis Statistical analyses were performed using SPSS software version 21.0 (IBM, Armonk, New York, United States), GraphPad Prism 8, and R version 4.2.3. Normally distributed data were presented as mean (standard deviation) and compared using one-way analysis of variance (ANOVA). Spot urine sodium, potassium, and creatinine concentrations, which were non-normally distributed, were reported as median (M) and interquartile range (IQR) and analyzed using the Kruskal–Wallis test. The chi-square test was employed to compare categorical variables among participants. A univariate ordinal logistic regression model was employed to examine the risk factors associated with hypertension among the participants. Following the univariate analysis, variables with a significance level of p < 0.1, along with sex, were incorporated as independent variables in the model, with hypertension status as the dependent variable. The model passed the parallel-lines test, and ordinal logistic regression was used for the subsequent multivariate analysis. To assess the shape of the relationship between sodium and BP, we created restricted cubic spline plots adjusted for age, sex, region, BMI, alcohol consumption, and number of antihypertensive medications. Data were fitted with a linear regression model with 4 knots at the 5th, 35th, 65th, and 95th percentiles of estimated sodium excretion (reference is the 5th percentile). Statistical significance was determined at a threshold of p < 0.05.
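As a concrete illustration of the estimation step, the following Python sketch implements the three formulas and the salt conversion as reconstructed above. It is illustrative only; units follow the definitions in the text, and the mg/dL-to-mmol/L creatinine conversion used in the INTERSALT function (1 mg/dL ≈ 0.0884 mmol/L) is our assumption, since the published INTERSALT coefficients expect creatinine in mmol/L.

```python
# Sketch of the three spot-urine estimators of 24-h sodium excretion.
# su_na, su_k in mmol/L; su_cr in mg/dL; weight kg; height cm; age years.
import math

def kawasaki_na(su_na, su_cr, weight, height, age, male):
    """Predicted 24-h urinary Na (mmol/day), Kawasaki method."""
    pr_cr = (15.12 * weight + 7.39 * height - 12.63 * age - 79.9) if male \
        else (8.58 * weight + 5.09 * height - 4.72 * age - 74.95)
    return 16.3 * math.sqrt(su_na / (su_cr * 10) * pr_cr)

def intersalt_na(su_na, su_k, su_cr, bmi, age, male):
    """Predicted 24-h urinary Na (mmol/day), INTERSALT (with potassium).
    Creatinine enters in mmol/L; 1 mg/dL = 0.0884 mmol/L (assumption here)."""
    cr = su_cr * 0.0884
    if male:
        return (25.46 + 0.46 * su_na - 2.75 * cr - 0.13 * su_k
                + 4.10 * bmi + 0.26 * age)
    return (5.07 + 0.34 * su_na - 2.16 * cr - 0.09 * su_k
            + 2.39 * bmi + 2.35 * age - 0.03 * age ** 2)

def tanaka_na(su_na, su_cr, weight, height, age):
    """Predicted 24-h urinary Na (mmol/day), Tanaka method."""
    pr_cr = 14.89 * weight + 16.14 * height - 2.04 * age - 2244.45
    return 21.98 * (su_na / (su_cr * 10) * pr_cr) ** 0.392

def na_to_salt(na_mmol_per_day):
    """Convert 24-h urinary Na (mmol/day) to salt intake (g/day)."""
    return na_mmol_per_day / 17.1  # ~17.1 mmol Na per gram of NaCl

# e.g. 208.70 mmol/day corresponds to ~12.2 g/day of salt, as in the Results.
print(round(na_to_salt(208.70), 2))
```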
Results General demographic characteristics of participants Table 1 presents the descriptive analysis of the principal characteristics and laboratory parameters of the study population. A total of 1215 patients with poorly controlled hypertension were eligible for inclusion, of whom 53.66% were men, 54.24% had a low education level (primary or below), and 60.33% were rural dwellers; the mean (SD) age of the participants was 60.83 (7.76) years and the mean (SD) BMI was 25.96 (3.75) kg/m2. Regarding hypertension status, the rates of normal BP, grade 1, and grade 2 hypertension were 26.67%, 47.33%, and 26.01%, respectively. Of all participants, 37.78% had previously used or were currently using salt-restriction utensils, 37.86% were physically active, 34.65% consumed alcohol, and 62.72% were non-smokers. Participants who had taken multiple antihypertensive medications were more likely to have uncontrolled hypertension (p < 0.001). Mean SBP and DBP were 146.67 (16.33) mmHg and 87.77 (9.85) mmHg, respectively. The median (IQR) spot urine concentrations of sodium, potassium, and creatinine were 124.00 (70.00) mmol/L, 26.78 (20.82) mmol/L, and 10.01 (8.07) mg/dL, respectively. Comparison of estimated 24-h urinary sodium excretion and salt intake The estimated mean (SD) 24-h sodium excretion values by the three methods (the Kawasaki, INTERSALT, and Tanaka formulas) were 208.70 (65.65), 154.78 (33.91), and 162.61 (40.87) mmol/day, equal to salt intakes of 12.21 (3.85), 9.05 (1.99), and 9.51 (2.39) g/day, respectively. Figure 1 depicts the distribution of estimated 24-h urinary sodium excretion by hypertension status; those with grade 2 hypertension had the highest 24-h urinary sodium excretion of all hypertension grade groups (p < 0.001). The average salt consumption among males tended to be higher than that among females (INTERSALT method, 10.08 g/day vs 7.87 g/day, p < 0.001; Tanaka method, 9.68 g/day vs 9.36 g/day, p < 0.05; no difference with the Kawasaki method). Individuals with higher levels of education exhibited lower intake than those with primary education (Kawasaki method, 11.14 g/day vs 12.43 g/day, p < 0.001; INTERSALT method, 8.90 g/day vs 9.34 g/day, p < 0.001; Tanaka method, 8.73 g/day vs 9.71 g/day, p < 0.001). Obese participants had a higher daily intake than underweight or normal-weight participants (Kawasaki method, 13.05 g/day vs 11.58 g/day, p < 0.001; INTERSALT method, 10.22 g/day vs 8.17 g/day, p < 0.001; Tanaka method, 10.07 g/day vs 9.10 g/day, p < 0.001), and individuals with grade 2 hypertension consumed more than those with normal blood pressure (Kawasaki method, 12.97 g/day vs 11.69 g/day, p < 0.001; INTERSALT method, 9.40 g/day vs 8.74 g/day, p < 0.001; Tanaka method, 9.97 g/day vs 9.21 g/day, p < 0.001). Furthermore, participants who had previously used or were currently using SRS demonstrated a lower average daily salt intake than those who had not (Kawasaki method, 12.03 g/day vs 12.94 g/day, p < 0.05; INTERSALT method, 8.86 g/day vs 9.42 g/day, p < 0.001; Tanaka method, 9.42 g/day vs 9.89 g/day, p < 0.05). Overall, the three methods showed similar patterns of salt intake across subgroups (Table 2).
Associations of estimated salt intake and SRS use with hypertension status in patients with poorly controlled hypertension The univariate analysis indicated that lower salt intake (Q1 vs Q4 of salt intake: Kawasaki method, crude OR = 0.52, p < 0.001; INTERSALT method, crude OR = 0.55, p < 0.001; Tanaka method, crude OR = 0.59, p < 0.001), urban origin (crude OR = 0.56, p < 0.001), and current or previous use of an SRS utensil (crude OR = 0.77, p < 0.05) were protective factors for BP control. The analysis also showed that body mass index (crude OR = 1.05, p < 0.05), alcohol consumption (crude OR = 1.55, p < 0.001), and an increasing number of medications (crude OR = 1.45, p < 0.001 for 2 vs 1; crude OR = 2.64, p < 0.002 for ≥ 3 vs 1) were risk factors for poor BP control (Table 3). After controlling for confounding variables including age, sex, region, BMI, alcohol consumption, and number of antihypertensive medications, multiple logistic regression analysis was employed to examine the statistical significance of differences in salt intake and SRS usage in relation to hypertension status. Figure 2 illustrates that individuals with a lower estimated salt intake (specifically, those in the first quartile compared with the fourth quartile) according to each of the three measurement methods were more likely to exhibit a lower hypertension grade (Kawasaki adjusted OR = 0.58, 95% CI = 0.43–0.79, p < 0.001; INTERSALT adjusted OR = 0.62, 95% CI = 0.41–0.92, p < 0.05; Tanaka adjusted OR = 0.61, 95% CI = 0.45–0.92, p < 0.05). Our findings suggest that using an SRS for cooking was associated with reduced salt consumption and with better BP control (adjusted OR = 0.79, 95% CI = 0.64–0.99, p < 0.05). Relationship between estimated sodium excretion and BP Figure 3 illustrates the dose–response correlation between estimated sodium excretion and BP from restricted cubic splines with 4 knots at the 5th, 35th, 65th, and 95th percentiles of sodium (reference is the 5th percentile). The β for SBP and DBP exhibited an upward trend as sodium levels increased (P-overall association < 0.05; P-non-linear association > 0.05). Across the Kawasaki, INTERSALT, and Tanaka urinary sodium estimates, the spline analysis showed comparable patterns in the correlation between sodium intake and BP, without any discernible flattening of the curve at extreme levels of sodium exposure. However, the magnitude of this association varied across the different methods of urinary sodium measurement under the same statistical model. Questionnaire-based survey of knowledge, attitudes, and behaviors about salt intake by hypertension status The findings indicate that a significant proportion of respondents (79.18%) were aware of the adverse consequences associated with excessive salt consumption. Notably, individuals who had used SRS reported significantly higher levels of knowledge in this regard than those who had not (p < 0.001). It is worth mentioning that a considerable portion of respondents (37.78%) were unaware of the recommended daily intake limit of less than 6 g of salt per day. Furthermore, a substantial majority of patients (88.48%) expressed agreement with the ongoing promotion of low-salt diets.
One exception concerned the misperception that low sodium intake leads to decreased strength, which was endorsed by 33.50% of respondents; even so, SRS users performed significantly better on this attitude question (p < 0.001). In terms of dietary behaviors, a significant majority of patients (78.35%) expressed an intention to decrease their salt intake, while most participants (67.90%) reported regular consumption of salty condiments (more than 3–5 days per week). Consistently, SRS users exhibited significantly higher scores in the knowledge, attitude, and behavior assessments than non-users (Table 4).
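The ordinal logistic model behind Table 3 and Fig. 2 can be sketched along the following lines in Python; OrderedModel from statsmodels stands in for the SPSS procedure the authors used, predictors are assumed to be numerically coded, and all column names are hypothetical.

```python
# Hedged sketch of the ordinal (proportional-odds) logistic regression for
# hypertension status; dataset and column names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("salt_survey.csv")  # hypothetical data on the 1215 patients
df["htn_status"] = pd.Categorical(df["htn_status"],
                                  categories=["normal_bp", "grade1", "grade2"],
                                  ordered=True)

# Numerically coded covariates, as in the adjusted model described above.
exog = df[["salt_intake_quartile", "srs_use", "age", "sex", "region",
           "bmi", "alcohol", "n_antihypertensives"]]

fit = OrderedModel(df["htn_status"], exog, distr="logit").fit(
    method="bfgs", disp=False)
print(fit.summary())

# Slope coefficients exponentiate to adjusted odds ratios (threshold
# parameters, which come last, are excluded).
print(np.exp(fit.params[:len(exog.columns)]))
```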
Discussion China is currently facing an escalating burden of hypertension, necessitating lifestyle modifications and antihypertensive medications as the fundamental components of an effective hypertension management approach [20, 21]. Among the various lifestyle measures targeting the reduction of noncommunicable diseases, reducing dietary salt consumption is of great significance [4]. To achieve favorable outcomes, diverse state and community salt reduction strategies have been devised, such as The National Essential Public Health Services Package introduced by the Chinese government in 2009 [22]. This package offers a range of services, including health records management, screening, and follow-up, thereby contributing to the overall objective of salt reduction. In China, the primary source of sodium consumption is salt used in household cooking, whereas in Western nations processed food is the major contributor to dietary salt [23]. Consequently, there is a need for a public campaign aimed at reducing salt usage in cooking. It is also crucial to develop a suitable assessment methodology for measuring sodium intake and to provide a reference for the formulation of precise policies for the prevention of hypertension. This approach is essential for effectively controlling BP in individuals with hypertension. Multiple studies have demonstrated that the spot urine method is a suitable approach for estimating 24-h urinary sodium excretion, enabling investigation of the correlation between salt intake and hypertension and other diseases in the general population. Groenland et al. [24] employed the Kawasaki formula to assess the relationship between estimated salt intake and BP; the findings revealed that, for each 1 g/day increase in urinary sodium excretion, average SBP and DBP increased by 1.28 mmHg (95% CI: 0.95–1.62) and 0.46 mmHg (95% CI: 0.28–0.65), respectively. In a study by Goto et al. [25], the risk of developing stomach cancer was assessed against estimated salt consumption obtained from spot urine using the Tanaka method. Similarly, Du et al. [26] employed three spot urine methods to estimate 24-h urinary sodium excretion among residents of Zhejiang Province and compared the estimates with measured values (167.10 (74.70) mmol/day): the Kawasaki method yielded an overestimate of 184.61 (57.10) mmol/day, whereas the INTERSALT and Tanaka methods yielded underestimates of 134.62 (39.21) and 143.20 (35.66) mmol/day, respectively. In our study, the estimated 24-h sodium excretion values using the Kawasaki, INTERSALT, and Tanaka formulas were 208.70 (65.65), 154.78 (33.91), and 162.61 (40.87) mmol/day, respectively. These values exceed the average sodium intake of Zhejiang Province residents [26]. Despite slight variations between estimated salt intake and precise individual values, our study indicates that individuals with uncontrolled hypertension belong to a population with elevated sodium consumption. The habitual salt intake in China is approximately 12 g per day [7], and in our study the difference between the highest estimate (12.21 g/day, Kawasaki method) and the lowest (9.05 g/day, INTERSALT method) was about 3 g per day.
Considering previous research, the Kawasaki method may overestimate the actual 24-h urinary sodium excretion, while the INTERSALT and Tanaka methods tend to underestimate it [26]. Despite these inherent inaccuracies, we believe that the findings of our study are applicable to real-life conditions. By reducing salt consumption by 7 g (equivalent to approximately one more large-sized SRS), individuals would approach the World Health Organization's recommended population level of 5 g per day [8]. Significantly, estimated salt consumption was notably elevated among obese individuals compared with those categorized as normal weight or overweight. Likewise, upon stratifying responses by hypertension status, participants diagnosed with grade 2 hypertension exhibited the highest sodium intake. Previous studies have also documented augmented sodium consumption in individuals with a higher body mass index, as well as inadequate BP regulation among hypertensive outpatients [19, 27]. The potential mechanistic pathways through which insulin resistance and overweight may contribute to the development of isolated systolic hypertension involve an increase in salt sensitivity, leading to endothelial dysfunction, arterial rigidity, and elevated BP [28]. Furthermore, these pathways may be influenced by suboptimal dietary habits, including a preference for high-fat foods and the use of sodium as a flavor enhancer. Prior studies have shown that adding salty condiments to meals can enhance their taste, but this practice may carry the risk of excessive caloric intake and subsequent weight gain [29]. To mitigate salt intake across the entire population of China, a multi-faceted initiative including the adoption of SRS has been widely employed [30]. In the present study, the prevalence of SRS utilization was 30.21%, surpassing the 12.0% reported in a 2017 survey of 7512 individuals residing in China's Zhejiang Province (of whom 35.3% were identified as hypertensive) [16]. Furthermore, the findings indicate that SRS use and hypertension status were associated with reduced sodium intake rather than with a lower sodium-to-potassium ratio. This disparity could potentially be ascribed to the fact that the participants were individuals with poorly controlled hypertension, who were more likely to receive SRS from the Chinese Center for Disease Control and Prevention (CDC). This initiative was implemented at the national, provincial, municipal, and county levels of the CDC, advocating the use of an SRS during cooking as a precautionary measure [9, 31]. Additionally, the presence of hypertension is a significant determinant of the BP change induced by sodium restriction. A comprehensive analysis of multiple studies revealed that hypertensive individuals experienced a substantially greater reduction in SBP than a mixed group of hypertensive and normotensive participants within the same dietary salt reduction study [30]. These findings strongly indicate that, when implementing dietary sodium restriction interventions, priority should be given to hypertensive adults. The restricted cubic spline plots provided visual evidence of a consistent upward-trending correlation between sodium and BP.
A meta-analysis also confirmed this relationship, highlighting that the effect of sodium reduction on BP was more pronounced among individuals with higher baseline BP [3]. Our study, using various methods of urinary sodium estimation, observed similar patterns in the relationship between sodium intake and BP within the same population. However, the magnitude of this association varied depending on the specific method of urinary sodium measurement under the same statistical model. Previous studies have suggested that using spot urine samples to estimate sodium (Na) intake provides only a rough average approximation of Na intake [32], and relying on these methods to establish the connection between Na intake and BP is likely to yield biased estimates. Despite this limitation, we proceeded with this model because our aim was to compare the nature and extent of the relationship between sodium intake and BP rather than to obtain precise estimates. In the context of the high salt intake of the Chinese population, our results support preventing rises in BP by reducing salt intake in patients with poorly controlled hypertension. Additional findings of the study indicate a higher prevalence of correct knowledge, attitudes, and behaviors among SRS users, suggesting that positive and accurate beliefs and attitudes serve as a fundamental basis for modifying health-related behaviors. Notably, a considerable proportion (79.18%) of the participants demonstrated awareness of the detrimental consequences of excessive salt consumption. However, this knowledge did not seem to translate into effective practices for reducing salt intake, as evidenced by the frequent consumption of salty condiments by a significant portion (67.90%) of the respondents (more than 3–5 days per week). Merely relying on educational initiatives and raising awareness is unlikely to be adequate; it is preferable to actively translate this awareness into tangible actions, such as reducing the amount of salt used during cooking through the use of SRS or opting for salt alternatives. The study revealed a notably high percentage of patients (73.33%) with uncontrolled BP, similar to the findings of a survey of 2198 patients in sub-Saharan countries (77.4%) [21]. Additionally, individuals using multiple antihypertensive drugs had a significantly higher prevalence of uncontrolled hypertension (p < 0.001). Previous studies have indicated that the occurrence of medication errors, drug interactions, and the use of high-risk pharmaceuticals tends to increase with the number of medications administered [33]. The combination of improper drug utilization and excessive salt intake can hinder hypertensive individuals in effectively managing their BP, presenting a significant opportunity for intervention in China. Our findings demonstrate that using SRS during cooking is linked to less severe hypertension, thereby endorsing the SRS-based approach for individuals whose primary source of sodium consumption is domestic cooking. There are limitations to this study. Firstly, it has a cross-sectional design, which inherently limits the capacity to establish causality.
Consequently, we cannot ascertain the causal relationship between sodium intake and BP, nor the underlying mechanism through which sodium intake may affect blood pressure. Secondly, the considerable day-to-day variability in individual salt consumption introduces potential measurement error when converting spot urine sodium measurements to estimated 24-h urinary excretion. To avoid possible bias and enhance the accuracy of sodium excretion estimates, spot or 24-h urine samples should ideally be collected randomly and repeatedly. The scope of our study was restricted to individuals with poorly controlled hypertension residing in Eastern China, so the findings may not be generalizable to populations from other regions within the country or elsewhere, or to individuals without hypertension. It is also important to acknowledge that, in this population, estimated 24-h urinary excretion may not adequately reflect actual salt intake, and complete 24-h collection carries a significant participant burden; it is advisable for each population to develop and validate its own formula for assessing sodium intake from spot urine samples. Thirdly, the questionnaire-based assessments of smoking habits and physical activity failed to demonstrate any statistically significant correlation with BP status. Knowledge, attitudes, and behaviors were somewhat linked to the use of SRS, although this association may be influenced by social desirability. This study possesses several notable strengths, primarily its extensive sample of community-based patients stratified by hypertension status. Previous research has demonstrated the efficacy of salt reduction in cooking for lowering BP, yet only a limited number of studies have taken into consideration the participants' hypertension status or their use of antihypertensive medication when summarizing the outcomes. Furthermore, our study was bolstered by the support of an organized multidisciplinary collaborative network, which facilitated the involvement of Chinese cardiologists and ultimately benefited patients with inadequately controlled hypertension.
Conclusions Hypertension remains poorly controlled in China, with its status being linked to excessive sodium consumption. The implementation of SRS strategies could potentially lead to a reduction in daily salt intake for blood pressure control among hypertensive patients. SRS users were observed to possess a moderate comprehension of the risks associated with excessive salt consumption, although their attitudes and behaviors towards salt reduction were inconsistent. To effectively decrease the prevalence of hypertension, a comprehensive campaign should encompass the promotion of knowledge, attitudes, and behaviors aimed at reducing dietary salt intake, in conjunction with the utilization of sodium reduction strategies.
Background As the prevalence of hypertension increases in China, the use of salt-restriction spoons (SRS) is advised as a lifestyle modification. This study aimed to examine the associations between estimated salt consumption, SRS usage, and hypertension status in individuals with poorly controlled hypertension. Methods Data were collected in Huzhou City, Zhejiang Province, in 2021 using convenience sampling. The analysis involved ordinal logistic regression and restricted cubic splines to assess the relevant factors. Results The study found that 73.34% of the 1215 patients had uncontrolled blood pressure (BP). Urinary excretion was assessed using the Kawasaki, INTERSALT, and Tanaka formulas, which yielded average daily sodium excretion values of 208.70 (65.65), 154.78 (33.91), and 162.61 (40.87) mmol, respectively. The prevalence of SRS use was 37.78% in this study. Although SRS users acknowledged the potential hazards of excessive salt consumption, their attitudes and behaviors concerning salt reduction were contradictory. Comparing levels of estimated salt intake (quartiles 1–4, Q1 vs Q4), lower salt intake was associated with better hypertension status after controlling for other variables (Kawasaki adjusted OR = 0.58, 95% CI = 0.43–0.79; INTERSALT adjusted OR = 0.62, 95% CI = 0.41–0.92; Tanaka adjusted OR = 0.61, 95% CI = 0.45–0.92, p < 0.05). Our research also revealed that current or previous SRS use was a protective factor for BP control (adjusted OR = 0.79, 95% CI = 0.64–0.99, P < 0.05). The restricted cubic spline plots illustrated a monotonic upward relationship between estimated 24-h urinary Na and BP (P-overall association < 0.05; P-non-linear association > 0.05). Conclusions The use of dietary SRS could result in a decrease in daily salt intake for BP control in patients with poorly controlled hypertension. To reduce the impact of high BP in China, additional studies are required to create interventions that can improve outcomes for patients.
Abbreviations SRS: Salt-restriction spoon; WHO: World Health Organization; CVD: Cardiovascular diseases; CDC: Center for Disease Control and Prevention; BP: Blood pressure; SBP: Systolic blood pressure; DBP: Diastolic blood pressure; BMI: Body mass index; SD: Standard deviation; CI: Confidence interval; OR: Odds ratio Acknowledgements The authors thank the participants who made this study possible and thank Hongwei Shen, Lijie Shi, Liying Yu, Linyan Wang, and Jiasheng Qin for initial feedback on the survey, aiding in its development. Authors’ contributions Conceptualization was done by Z.Q and S.Y.M; methodology was done by Z.Q; validation was carried out by Z.Q; formal analysis was done by Z.Q; investigation was carried out by Y.M.H, Y.Z.R, H.Z, Z.X.F, and D.J.Y; resources were provided by S.Y.M; data curation was done by Z.Q and S.Y.M; Z.Q and S.Y.M were responsible for original draft preparation; review & editing was done by Z.Q; visualization was carried out by Z.Q and Y.Z.R; supervision was done by Y.M.H; project administration was carried out by S.Y.M; and funding acquisition was done by S.Y.M. All authors have read and agreed to the published version of the manuscript. Funding This study was funded by the Huzhou Science and Technology Bureau (2021GYB67), Huzhou Medical Key Supporting Discipline (Epidemiology), and the Key Laboratory of Emergency Detection for Public Health of Huzhou. Institutional Review Board Statement: This study was conducted in accordance with the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of the Huzhou Center for Disease Control and Prevention (protocol code HZ2021003). Availability of data and materials The datasets generated and analyzed during the current study are available from the corresponding authors upon reasonable request. Declarations Consent for publication Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
Nutr J. 2024 Jan 15; 23:9
oa_package/ca/c7/PMC10789056.tar.gz
PMC10789057
0
Introduction Nutritional status is often compromised in the elderly. Physiological and social changes resulting from advanced age, comorbidities, high consumption of drugs, degenerative loss of mobility, psychological and mental distress, and loss of appetite are just some of the factors that affect the nutritional status of this age group [1, 2]. Hospitalized elderly patients have the highest risk of being at nutritional risk or becoming malnourished. During hospitalization, multiple factors such as underlying acute or chronic diseases, inflammatory states, and infections increase patients' energy expenditure while reducing their normal nutrient intake [3]. The consequences of malnutrition in the hospitalized elderly include multiple adverse outcomes such as an increased prevalence of Healthcare-Associated Infections (HAIs), decreased functional status, decreased quality of life, longer hospital Length of Stay (LOS), increased healthcare costs, higher hospital readmission rates, and hospital mortality [4]. Malnutrition and nutritional risk are common in the hospitalized elderly but, unfortunately, are not easily recognized or distinguished from the changes of the aging process, which means that a significant percentage of patients go undiagnosed [5]. The prevalence of malnutrition among the elderly in hospital settings ranges from 11% to 55% internationally [6]. A hospital-based cross-sectional study was carried out in the medical Intensive Care Unit (ICU) of the internal medicine ward of AL-Zahra University Hospital, Cairo, Egypt; on nutritional assessment, 50% of patients were malnourished, either mildly/moderately (35.3%) or severely (14.7%) [7]. Another study, carried out at Zagazig University Hospitals, Egypt, reported that 51.5% of the studied elderly were at risk of malnutrition [8]. Underdiagnosis of malnutrition can be prevented, potentially reducing the prevalence of malnourished hospitalized elderly patients. This requires nutritional screening tools, which are an essential step in identifying patients at nutritional risk among the hundreds attending tertiary care hospitals, especially in developing countries like Egypt, so that appropriate nutritional care plans can be developed promptly to improve prognosis [9]. There are many tools for nutritional screening and for identifying nutritional risks in the elderly population. Among the validated measures are the Malnutrition Inflammation Score (MIS) and the Subjective Global Assessment (SGA); both are based on medical history and clinical findings, and both require subjective assessment and judgment by a highly trained examiner to ensure consistent results among different examiners and at different times [10]. Other nutritional screening tools include the Mini Nutritional Assessment–Short Form (MNA-SF) [11], the Malnutrition Universal Screening Tool (MUST) [12], the Malnutrition Screening Tool (MST) [13], and Nutritional Risk Screening 2002 (NRS-2002) [14]. Although the Mini Nutritional Assessment (MNA) is the method recommended by the European Society of Parenteral and Enteral Nutrition (ESPEN) for assessing the nutritional status of older people [15], it is not applicable to patients diagnosed with dementia or other communication problems [16].
Subjective data about the history of weight loss and calculation of the weight loss percentage in MUST, NRS-2002, and MST may be a barrier, as they rely on memory and take more time for busy healthcare staff on the wards [17]. The Geriatric Nutritional Risk Index (GNRI) is a simple and objective screening index designed specifically for the hospitalized elderly to assess nutritional risk and predict nutrition-related complications [18]. It allows clinicians to assess patients easily based on two main parameters: serum albumin and the ratio between the patient's current and ideal weight. It was developed in response to the fact that elderly patients are often unable to participate in questionnaire-based assessments such as the MNA. It also does not depend on a caregiver or on memory; it is therefore practical and provides reliable assessment in most healthcare settings, especially among elderly patients with cognitive impairment, delirium, or dementia [9]. A cross-sectional study conducted in the Geriatrics and Gerontology Department at Ain Shams University Hospital in Cairo, Egypt compared the performance and accuracy of different nutritional screening tools and reported that, among the assessment tools studied, NRS-2002 had the highest sensitivity while GNRI had the highest specificity [19]. In another study, carried out at Alexandria Main University Hospital, the prevalence of risk of malnutrition among a sample of elderly patients aged ≥ 65 years, as assessed by GNRI, was 33.3% [20]. Although the GNRI has been validated by more than one study, only a few studies have been conducted in Egypt, and none has studied the role of the GNRI in predicting nutrition-related complications and mortality after discharge in the elderly population. Thus, this study aimed primarily to investigate whether nutritional risk, as assessed by the GNRI, is associated with multiple adverse outcomes in elderly patients admitted to the Geriatric Hospital of Ain Shams University, and secondarily to study the capability of the GNRI to predict adverse outcomes and mortality during hospitalization and up to 90 days after discharge.
Subjects & methods Study design and population This hospital-based prospective cohort study was conducted in the Geriatric Hospital at Ain Shams University, Cairo, Egypt from August 2021 to June 2022. Eligible patients were aged ≥ 60 years and had an anticipated length of stay of at least 48 hours. Exclusion criteria were: (i) presence of known liver, renal, or neoplastic disorders; (ii) haemodialysis; (iii) severe swelling affecting body weight (such as ascites, decompensated heart failure, generalized edema, and elephantiasis); (iv) amputation of the lower limb, hemiplegia, or paraplegia; and (v) terminal illness (ICU patients). Sample size and technique Using the Epi Info program version 7 for sample size calculation, setting the confidence interval at 95% and the margin of error at 5%, a sample size of 334 patients was estimated to be sufficient to detect an expected prevalence of nutritional risk of 68% [ 18 ]. All eligible elderly patients admitted to the internal ward of the Geriatric Hospital of Ain Shams University were consecutively enrolled until the sample size was reached. Data collection Data extraction sheet All patients were assessed within 48 hours of admission. The demographic characteristics collected included age, gender, level of education, marital status, income, and presence of a caregiver. Patient clinical information and associated comorbidities were also collected. Nutritional assessment Anthropometric measurements The following anthropometric nutritional parameters were obtained: actual (present) weight, height, Body Mass Index (BMI) (in kg/m2), triceps skinfold thickness, Mid-Arm Circumference (MAC), and Calf Circumference (CC). Weight was determined on a calibrated scale placed on a hard-floor surface. Participants had to be in light clothing and without shoes, and measurements were recorded to the nearest 0.5 kg. Standing height was measured using a tape measure; the patients stood up straight with heels together, and height was recorded to the nearest 0.5 cm. For bedridden patients, Estimated Height (EH) was extrapolated from Knee-Heel (KH) length according to published equations [ 21 ]. BMI was calculated as weight (in kg) divided by height squared (in m2). MAC was measured by asking the patient to bend the non-dominant arm at the elbow at a right angle with the palm up; the distance between the acromion process of the scapula and the olecranon process of the elbow was then measured, and the tape was tightened snugly at the mid-point of the upper arm. MAC was recorded to the nearest 0.1 cm. Triceps skinfold thickness was measured with a skinfold caliper. CC was measured by asking the patient to sit with the left leg hanging loosely, wrapping the tape around the calf at the widest part, and noting the measurement. CC was recorded to the nearest 0.1 cm [ 22 ]. Blood biomarkers levels Laboratory assessments comprised serum levels of albumin (g/dL), total protein (g/dL), hemoglobin (g/dL), C-reactive protein (mg/L), and ferritin (ng/mL). All these investigations were performed within 48 hours after hospital admission. Geriatric Nutrition Risk Index (GNRI) The nutrition-related risk was evaluated using the GNRI within 48 hours of admission. It was calculated as follows [ 23 ]: GNRI = [1.489 × serum albumin (g/L)] + [41.7 × (actual body weight / ideal body weight)], with the weight ratio set to 1 when actual weight exceeded ideal weight. Ideal body weight was derived using the following equations of Lorentz (WLo) [ 23 ]: for men, WLo = height − 100 − [(height − 150) / 4]; for women, WLo = height − 100 − [(height − 150) / 2.5], with height in cm. Study participants were categorized into the following three categories: no nutritional risk (GNRI > 98), low nutritional risk (GNRI 92–98), and high nutritional risk (GNRI < 92).
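To make the index arithmetic above concrete, the following is a minimal Python sketch of the GNRI calculation (the Bouillanne formula with Lorentz ideal body weight) and the risk categorization used in this study; the function names and example values are illustrative and not taken from the study data.

def lorentz_ideal_weight(height_cm, sex):
    """Ideal body weight (kg) by the Lorentz (WLo) equations."""
    if sex == "male":
        return height_cm - 100 - (height_cm - 150) / 4
    return height_cm - 100 - (height_cm - 150) / 2.5

def gnri(albumin_g_per_l, weight_kg, ideal_weight_kg):
    """GNRI = 1.489 x albumin (g/L) + 41.7 x (weight / ideal weight).
    Albumin is in g/L, so a lab value of 3.2 g/dL equals 32 g/L.
    Per the original GNRI definition, the weight ratio is capped at 1
    when actual weight exceeds ideal weight."""
    ratio = min(weight_kg / ideal_weight_kg, 1.0)
    return 1.489 * albumin_g_per_l + 41.7 * ratio

def gnri_category(score):
    """Risk categories used in this study."""
    if score < 92:
        return "high nutritional risk"
    if score <= 98:
        return "low nutritional risk"
    return "no nutritional risk"

# Illustrative patient: 165 cm male weighing 55 kg with albumin 32 g/L
ibw = lorentz_ideal_weight(165, "male")
score = gnri(32.0, 55.0, ibw)
print(round(score, 1), gnri_category(score))  # 85.1, high nutritional risk

# Sanity check of the reported sample size (95% confidence, 5% margin, p = 0.68):
n = (1.96 ** 2 * 0.68 * (1 - 0.68)) / 0.05 ** 2
print(round(n))  # ~334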
In total, 356 hospitalized elderly patients admitted to the Geriatric Hospital of Ain Shams University were assessed, of whom 22 were excluded because they met exclusion criteria. Outcomes Patients were followed from the date of assessment, during the hospital stay, and for three months after discharge for the occurrence of selected clinical complications. The primary adverse outcomes that could occur in the hospital were bed sores, HAIs, hospital-acquired Coronavirus disease 2019 (COVID-19) infection, prolonged hospital LOS, and hospital mortality (primary endpoint). HAIs are infections acquired during the process of receiving health care that were not present at the time of admission, such as urinary tract infection, pneumonia, surgical site infection, and bloodstream infection [ 24 ]. Hospital LOS is defined as the actual number of days in the hospital from the day of admission to the day of discharge or death (if death occurred in the hospital) [ 25 ]; it was obtained from hospital charts. The secondary outcomes, assessed after discharge, were non-improvement in medical status, appearance of new medical conditions, hospital readmission, and 90-day mortality (secondary endpoint). Data management and statistical analysis The collected data were checked for completeness, coded, and entered into a personal computer. All data manipulation and statistical analyses were performed using IBM SPSS (Statistical Package for Social Science) software version 24.0. Qualitative categorical variables were expressed as frequencies and percentages. Quantitative variables were expressed as means with the Standard Deviation (SD). One-way Analysis of Variance (ANOVA), Kruskal–Wallis, and Chi-square tests were used. Multivariable logistic regression analyses were performed with GNRI as the independent variable (with GNRI > 98, normal nutritional status, as the reference group). Bed sores, HAIs, hospital mortality, post-discharge health complications, and hospital readmission were the dependent variables. Overall Survival (OS) curves were plotted using the Kaplan–Meier method and compared using the generalized log-rank test. A Cox proportional hazards model was fitted to determine the independent predictors of overall mortality in the study participants. Adjusted Hazard Ratios (AHRs) and 95% confidence intervals (CIs) were reported. P ≤ 0.05 was considered statistically significant.
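As an illustration of the survival analysis workflow described above, here is a minimal Python sketch using the lifelines library; the data file and column names (days, died, gnri_group, and the covariates passed to the Cox model) are hypothetical placeholders, not the study's actual variables.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

# Hypothetical per-patient data: follow-up time in days, death indicator
# (1 = died, 0 = censored), GNRI risk group, and candidate predictors.
df = pd.read_csv("cohort.csv")

# Kaplan-Meier overall survival curve per GNRI group
kmf = KaplanMeierFitter()
for group, sub in df.groupby("gnri_group"):
    kmf.fit(sub["days"], event_observed=sub["died"], label=group)
    kmf.plot_survival_function()

# Generalized log-rank test across the three risk groups
result = multivariate_logrank_test(df["days"], df["gnri_group"], df["died"])
print(result.p_value)

# Cox proportional hazards model for independent predictors of mortality;
# exponentiated coefficients are the adjusted hazard ratios (AHRs)
cph = CoxPHFitter()
cph.fit(df[["days", "died", "high_risk", "los_days", "age"]],
        duration_col="days", event_col="died")
cph.print_summary()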
Results The total number of elderly hospitalized patients included in this study was 334. The baseline demographic and clinical characteristics of the patients according to GNRI are provided in Table 1 . The mean age of these patients was 72.35 ± 8.1 years, and 55.7% were female. Regarding preadmission status, about half of the patients (51.5%) had no prior admission and came from home, and 44% had been in the geriatric hospital ICU and were then transferred to the hospital wards. Patients with lower GNRI levels had a significantly greater mean age. However, there were no statistically significant differences in gender, education, marital status, presence of a caregiver, or income among the nutritional risk categories. Lower GNRI levels were significantly associated with lower serum albumin, total protein, haemoglobin, BMI, triceps skinfold thickness, MAC, and CC. On the other hand, the levels of CRP and ferritin were significantly higher in the high-risk group than in the no-risk group (Table 1 ). The GNRI score of all patients ranged from 63.00 to 147.90, with a mean value of 95.07 ± 13.63. The prevalence of high, low, and no nutritional risk as measured by GNRI was 45.5% (95% CI, 40%–51%), 18% (95% CI, 13.9%–22.5%), and 36.5% (95% CI, 31.3%–41.9%), respectively. There was a statistically significant difference in the development of bed sores, HAIs, hospital-acquired pneumonia, and urinary tract infection among the different nutritional risk groups (p < 0.05), with incidence rates worsening as the nutritional risk increased. Patients in the high-risk group had a significantly longer hospital LOS, as the median hospital stay increased significantly in patients with no, low, and high risk from 8 to 10 and 12 days, respectively. Additionally, hospital mortality increased significantly with nutritional risk: the incidence of hospital deaths among patients in the high-risk group was 15.1% (95% CI, 9.8%–21.8%) compared with a 3.3% (95% CI, 0.9%–8.1%) mortality rate in the no-risk group. Similarly, the incidence of deterioration in the medical condition and the rate of transfer to the ICU were significantly higher in the high-risk group (18.4%; 95% CI, 12.6%–25.5%) than in the low- and no-risk groups (10.0% and 4.1%, respectively). Also, patients at high nutritional risk were less frequently discharged to home than patients at no risk (61.2% vs 86.1%, respectively) (Table 2 ). During the three-month follow-up period, 54 patients were lost to follow-up. In the high-risk group, 53.5% of patients reported no improvement in their medical condition compared with 23.7% in the no-risk group. The appearance of new medical conditions was reported significantly more frequently in the high-risk group than in the no-risk group (74.3% vs 29.1%, respectively). These differences were statistically significant. Patients in the high-nutritional-risk group had higher 90-day hospital readmission and 90-day mortality rates compared with those in the no-risk group; however, the difference was statistically insignificant ( p > 0.05) (Table 2 ). Patients with nutritional risk had an increased risk of ICU transfer (Relative Risk (RR): 3.91; 95% CI, 1.57–9.74), hospital mortality (RR: 3.74; 95% CI, 1.33–10.46), and overall mortality (RR: 2.18; 95% CI, 1.29–3.69) (Table 3 ). In a linear regression adjusted for age, body mass index, and presence of comorbidities, nutritional risk was significantly associated with prolonged hospital LOS.
On average, patients with a high nutritional risk stayed in the hospital 3.6 days longer than those with no nutritional risk (Table 4 ). In multivariable logistic regression, after controlling for confounding variables, high nutritional risk was an independent predictor of bed sores developed at the hospital (AOR: 4.89; 95% CI, 1.37–17.45), HAIs (AOR: 3.18; 95% CI, 1.48–6.83), non-improvement in medical status after discharge (AOR: 3.55; 95% CI, 1.69–7.47), and appearance of new medical problems during follow-up (AOR: 4.99; 95% CI, 2.59–9.61) (Table 5 ). In survival analysis, Kaplan-Meier curves for all-cause death showed that the overall survival rate was significantly worse in the high-risk group than in the no-risk group, with a lower mean survival observed in the high-risk group compared to the no-risk group (103 vs 117 days, respectively). The difference in survival rates among the nutritional risk groups was tested by the log-rank test and was statistically significant ( P = 0.004) (Fig. 1 ). On Cox hazard regression analysis, patients in the high nutritional risk group had a higher risk of overall mortality compared with those in the no-risk group (AHR: 2.06; 95% CI: 1.10–3.85, P = 0.024). Patients with prolonged hospital LOS had an increased risk of overall mortality (AHR: 1.03; 95% CI: 1.01–1.06, P = 0.004) (Table 6 ).
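For readers who want to reproduce this kind of adjusted analysis, a minimal statsmodels sketch is shown below; the outcome and covariate names are hypothetical, and the categorical GNRI group is coded with the no-risk stratum as the reference, as in the study.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort.csv")  # hypothetical one-row-per-patient file

# Multivariable logistic regression for bed sores, with GNRI group as the
# exposure (reference = "no risk") and illustrative confounders
model = smf.logit(
    "bed_sores ~ C(gnri_group, Treatment(reference='no risk')) + age + sex",
    data=df,
).fit()

# Exponentiated coefficients are the adjusted odds ratios with 95% CIs
aor = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.concat([aor.rename("AOR"), ci], axis=1))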
Discussion Malnutrition is a major geriatric condition that is prevalent among elderly hospitalized patients. It remains underreported, often underdiagnosed, and is considered one of the contributing factors to worse health outcomes and increased morbidity and mortality [ 26 ]. The GNRI's benefits include being a quick and objective nutrition screening tool that requires little involvement from patients, and its reliance on current body weight, which eliminates bias related to recalling past unintentional weight loss [ 23 ]. This study directly assessed the capability of the GNRI score as a prognostic index for the prediction of nutrition-related morbidity and mortality in an acute care setting in Cairo, Egypt. In this study, the prevalence of high nutritional risk was 45.5%, which is higher than that reported by a cohort study conducted in the same hospital over a decade ago, in which the prevalence of high nutritional risk as assessed by GNRI was 41.2% [ 9 ]. The present higher rate of high nutritional risk indicates that almost half of admitted patients are at risk of nutrition-related complications, including mortality. It also implies that malnutrition is on the rise among elderly patients admitted to hospitals in Egypt. Previous studies are broadly in agreement with the current study, with prevalences of high risk of 49.7% and 48.4%, respectively [ 27 , 28 ]. This observation strengthens public health concerns regarding the nutritional risk of health complications in the elderly population. The present study showed that nutritional risk increased significantly with advancing patient age. This coincides with a prospective multicenter cohort study in an acute hospital setting conducted in Italy [ 29 ]. This relation between age and nutritional risk is expected, given that malnutrition and ageing are linked in the elderly and that many age-related changes, such as anorexia, decreased taste and smell, and a decrease in gastric acid secretion affecting the absorption of multiple nutrients, can cause malnutrition. There was a statistically significant difference between preadmission status and nutritional risk: in the high-risk group, more than half (52.6%) had been in the ICU prior to ward admission. The metabolic reaction to serious illness may explain this finding. The body shifts to a hypercatabolic state during critical illness, as the patient suffers from a high degree of stress and inflammation, which causes the body to catabolize more proteins and other substrates to meet increased energy demands and maintain physiological functions [ 30 ]. Regarding the anthropometric parameters, the present study revealed that increasing nutritional risk was associated with more depleted nutritional parameters. Significant differences were detected in skinfold thickness, MAC, and CC across the GNRI groups. In addition, mean BMI in the high, low, and no nutritional risk groups was 23.5, 26.0, and 29.9 kg/m2, respectively. This result agrees with other studies that found that the high nutritional risk group had a lower BMI and serum albumin than the other groups [ 29 , 31 ]. These results suggest that simple and low-cost parameters such as anthropometric measures are probably valid for estimating nutritional status in elderly hospitalized inpatients.
The utilization of both albumin and weight in the index minimizes different confounding variables such as inflammation and hydration status. According to a Japanese study, the GNRI was more accurate at predicting morbidity and mortality than either BMI or albumin alone [ 32 ]. Regarding adverse clinical outcomes, as the level of nutritional risk increased, the incidence of complications increased. In the present study, the incidence of HAIs in the high, low, and no nutritional risk groups was 42.1%, 30%, and 16.4%, respectively. A similar incidence was reported in a previous study, in which the incidence of HAIs in the high, low, and no nutritional risk groups was 41.7%, 25.5%, and 20.6%, respectively [ 28 ]. This is also in accordance with another study reporting that severe malnutrition defined by GNRI is associated with a higher risk of complications [ 18 ]. Thus, GNRI quantifies the severity of malnutrition and its impact on individual complications. The present study also found that high and low nutritional risk were significant independent predictors of HAI complications. This result agrees with a study that found high nutritional risk to be an independent risk factor for postoperative pneumonia, surgical site infection, sepsis, and urinary tract infection [ 33 ]. In the same context, the present study illustrated that bed sores developed at the hospital were significantly associated with high nutritional risk. This finding is supported by a study in which GNRI was identified as a significant independent predictor of bed sore complications [ 23 ]. The association between malnutrition and hospital LOS is well established. One previous study suggested that the risk of malnutrition, as assessed using the GNRI, contributed to prolonged LOS in elderly patients [ 29 ]. The results of the present study are consistent with that finding, as they showed a significant association between prolonged LOS and nutritional risk; the median hospital stay increased significantly in patients with no, low, and high risk from 8 to 10 and 12 days, respectively. This issue is of special interest, as clinical decision-making concerning nutritional screening and therapeutic interventions is often driven by economic factors [ 34 ]. In this study, the incidence of hospital mortality among patients in the high-risk group was 15.1%. This observation agrees with a study conducted on elderly inpatients admitted to a teaching hospital in Seoul, Korea, which reported that 21.7% of high nutritional risk patients died in the hospital within 28 days [ 35 ]. The difference in hospital readmission rate between GNRI groups, as assessed in this study, did not reach statistical significance. One potential reason is that the cause of rehospitalization is multifactorial and is related not only to the severity of malnutrition but also to patient self-care and socioenvironmental factors. In this study, most patients who were readmitted to the hospital were readmitted for factors unrelated to malnutrition, such as undergoing an endoscopy previously scheduled at discharge. There was a much lower overall survival rate in cases with high nutritional risk compared to the normal group, and the difference was highly statistically significant ( P = 0.004).
Consistent with this result, a study conducted on elderly patients admitted to critical care units in Boston, USA found that 90-day survival was significantly lower in the group with nutritional risk (GNRI ≤ 98) compared with the no-risk group (GNRI > 98) [ 36 ]. Although a cohort study conducted in the same hospital a decade ago reported the validity and simplicity of the GNRI tool for predicting nutrition-related morbidity and mortality in elderly hospitalized patients [ 9 ], this nutritional screening tool has not been adopted in the geriatric hospital or used as a routine screening tool. The findings of the present study indicate the need for a reliable and simple index for the early detection of the risk of malnutrition in elderly hospitalized patients all over Egypt. With fast detection comes the need for close and thorough follow-up by dietitians in this high-risk group to lower mortality among these patients. There is therefore a pressing need for the application of this geriatric nutritional screening tool in Egyptian hospitals. Limitations of this study A single time-point measurement of the GNRI at admission was used for the analyses. This single measurement may have failed to detect intraindividual variability in albumin level over time and may have resulted in the misclassification of patients into different GNRI categories. It is not always easy to measure the current weight of acutely ill bedridden patients. Another limitation was the COVID-19 pandemic, which forced the geriatric hospital to close and become an isolation facility for confirmed COVID-19 cases; this made it difficult to collect data for a while. Finally, as this was a single-center study, the results may not be generalizable to different clinical settings.
Conclusions In conclusion, GNRI is a simple and objective nutritional screening method that could be used to warn of short-term and long-term risks of morbidity and mortality. Nutritional risk, as defined by GNRI, is an independent predictor of multiple adverse health outcomes such as bed sores developed during hospitalization, HAIs, and prolonged hospital LOS. Therefore, using GNRI to assess elderly patients' nutritional status may help to identify patients at high risk of adverse outcomes more quickly and allow early intervention with appropriate and timely nutritional care management to mitigate the risk of morbidity, improve clinical outcomes, and reduce healthcare costs.
Background The elderly are one of the most heterogeneous and vulnerable groups, with a higher risk of nutritional problems. Malnutrition is prevalent among the hospitalized elderly but is underdiagnosed and almost indistinguishable from the changes of the aging process. The Geriatric Nutritional Risk Index (GNRI) is a tool created to predict nutrition-related complications in hospitalized patients. This study aims to measure the prevalence of nutritional risk using the GNRI among hospitalized elderly Egyptian inpatients and to determine the association between the GNRI and selected adverse clinical outcomes. Methods A hospital-based prospective cohort study was conducted among 334 elderly patients admitted to a tertiary specialized geriatric university hospital in Cairo, Egypt from August 2021 to June 2022. Within 48 hours after hospital admission, socio-demographic characteristics, blood biomarkers, anthropometric measurements, and nutritional risk assessment by the GNRI score were obtained. Patients were divided into three groups based on their GNRI: high, low, and no nutritional risk (GNRI < 92, 92–98, and > 98, respectively). Patients were followed up for the occurrence of adverse outcomes during the hospital stay (bed sores, Healthcare-Associated Infections (HAIs), hospital Length of Stay (LOS), and hospital mortality) and three months after discharge (non-improvement in medical status, appearance of new medical conditions, hospital readmission, and 90-day mortality). Multivariable regression and survival analyses were conducted. Results The prevalence of high nutritional risk was 45.5% (95% CI, 40%–51%). Patients with high risk had significantly longer LOS than those with no risk. High nutritional risk was significantly associated with the development of bed sores (Adjusted Odds Ratio (AOR) 4.89; 95% CI, 1.37–17.45), HAIs (AOR: 3.18; 95% CI, 1.48–6.83), and hospital mortality (AOR: 4.41; 95% CI, 1.04–18.59). The overall survival rate was significantly lower among patients with high nutritional risk compared to those with no risk. Conclusion GNRI is a simple and easily applicable objective nutritional screening tool with high prognostic value in this Egyptian sample of patients. The findings of this study support initiating the application of this tool in geriatric hospitals across Egypt. Keywords Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB).
Abbreviations ASU: Ain Shams University; GNRI: Geriatric Nutritional Risk Index; HAIs: Healthcare-Associated Infections; LOS: Length of Stay; CI: Confidence Interval; AOR: Adjusted Odds Ratio; ICU: Intensive Care Unit; MIS: Malnutrition Inflammation Score; SGA: Subjective Global Assessment; MNA-SF: Mini Nutritional Assessment–Short Form; MUST: Malnutrition Universal Screening Tool; MST: Malnutrition Screening Tool; NRS-2002: Nutritional Risk Screening 2002; ESPEN: European Society of Clinical Nutrition and Metabolism; MNA: Mini Nutritional Assessment; BMI: Body Mass Index; MAC: Mid-Arm Circumference; CC: Calf Circumference; EH: Estimated Height; KH: Knee-Heel; SD: Standard Deviation; IQR: Inter Quartile Range; ANOVA: Analysis of Variance; RR: Relative Risk; OS: Overall Survival; AHR: Adjusted Hazard Ratio. The author wishes to thank the staff of the Department of Geriatrics Medicine, Ain Shams University, all the study participants for their great contribution, and the hospital administration. Authors’ contributions Study concept and design: Hebatullah O Mohammed and Aisha Aboelfotoh. Investigation and writing the original main manuscript: Hebatullah O Mohammed and Khaled M. Abd Elaziz. Statistical analysis, data curation, and interpretation: Hebatullah O Mohammed, Khaled M. Abd Elaziz and Azza M. Hassan. Revision of the manuscript and editing: Aya Mostafa and Mohamed S. Khater. All authors reviewed and approved the final manuscript. Funding Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB). This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. Availability of data and materials The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request. Declarations Ethics approval and consent to participate This study was performed in accordance with the ethical standards of the Declaration of Helsinki, 1964 and its later amendments. All methods were performed in accordance with the relevant guidelines and regulations. This study was approved by the Research Ethical Committee (REC) at the faculty of medicine, Ain Shams University (under the number code FAMSU MD 255/2019 (FWA 000017585) 28/8/2019). Informed consent was taken from each participant. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Geriatr. 2024 Jan 15; 24:62
oa_package/fa/7d/PMC10789057.tar.gz
PMC10789058
0
Introduction Therapeutic hypothermia (TH) for infants with moderate to severe hypoxic-ischemic encephalopathy (HIE) reduces mortality (relative risk 0.75 (95% CI, 0.64–0.88)) and long-term disability (relative risk 0.77 (95% CI, 0.63–0.94)) [ 1 ], and has been standard of care in high income countries (HICs) since 2010 [ 2 ]. Although controversy remains over whether TH should be provided in low- and middle-income countries (LMICs) [ 3 – 6 ], the latest international guidelines on Neonatal Life Support recommend implementation of TH in such settings if certain intensive care facilities like intravenous therapy, respiratory support, pulse oximetry, antibiotics, and anticonvulsants are available [ 7 ]. A recent systematic review and meta-analysis including 2926 infants reported a reduction in disability and cerebral palsy (CP) in infancy with TH independent of setting, but a reduction in mortality at 18–24 months was only reported in HICs [ 8 ]. The largest randomized controlled trial (RCT) on TH to date, the Hypothermia for Encephalopathy in Low- and Middle-Income Countries (HELIX) trial, including 408 infants from India, Bangladesh and Sri Lanka [ 9 ], found increased mortality among cooled infants. Cooled infants had more complications such as persistent hypotension and metabolic acidosis, prolonged blood coagulation, gastric bleeding, and severe thrombocytopenia, suggesting that cooling in those settings had adverse effects potentially contributing to the increased mortality. Such effects of a moderate decrease in core temperature are consistent with what has been found in studies of accidental hypothermia [ 10 – 12 ]. Moderate hypothermia may affect cardiac, liver and renal function, but such organ complications are difficult to distinguish from the effects of the hypoxic-ischemic insult itself [ 13 , 14 ]. Studies on TH for HIE from HICs have not reported any difference in the occurrence of hypotension, hemorrhage, or coagulopathies [ 15 – 19 ]. There are few studies with a normothermic control group after the implementation of TH as standard of care for infants with moderate to severe HIE. Detailed biochemical and clinical markers of organ dysfunction during TH are important for understanding the conflicting results on adverse outcomes, safety, and efficacy from different settings and for optimizing the treatment across settings. Although guidelines recommend that TH be provided in facilities with a certain level of intensive care [ 7 , 20 , 21 ], the level of monitoring and supportive treatment necessary for TH to be both effective and safe is still unclear. The Therapeutic Hypothermia in India (THIN) study was a prospective RCT in which infants with moderate or severe HIE were randomized to TH or standard care with normothermia [ 22 ]. Both the primary outcome of early brain MRI biomarkers and the secondary outcome of neurodevelopment at 18 months showed a beneficial effect of TH [ 22 , 23 ]. The main purpose of this post-hoc analysis is to compare the early biochemical profile and organ complications in cooled and non-cooled infants included in the THIN study. We also explore associations between organ dysfunction in the neonatal period and outcomes at 18 months.
Methods This is a secondary analysis of the THIN study, a single-center RCT of infants admitted with moderate to severe HIE to the neonatal intensive care unit (NICU) at the Christian Medical College Vellore, a tertiary care teaching hospital in rural south India. Infants at or near term (> 35 weeks of gestation) admitted before 5 h after birth with signs of perinatal asphyxia (5-min Apgar score < 6, pH < 7.0, base deficit ≥ 12, need for positive pressure ventilation > 10 min, or, for outborn infants, no cry at birth) and moderate to severe HIE (identical to the NICHD trial [ 17 ]) were recruited between September 2013 and October 2015. Included infants were randomly assigned to hypothermia with a target core temperature of 33.5 °C ± 0.5 °C for 72 h, induced by a phase changing material-based cooling device (MiraCradle Neonate Cooler, Pluss Advanced Technologies, India), or to standard care (SC) with normothermia. A sample size of 25 infants in each arm of the RCT was calculated to detect a 10% difference in mean fractional anisotropy (FA) values in the posterior limb of the internal capsule (PLIC) on neonatal MRI, accounting for 20% mortality before MRI. A full description of the trial is published elsewhere [ 22 ]. For this study, we included biochemical data, clinical indicators of organ dysfunction, treatment data, seizures, adverse events during the intervention, and neurodevelopmental outcome at 18 months. Blood gas, renal and coagulation parameters and full blood counts were monitored per study protocol, and investigation for infection was done as clinically indicated. Blood gases from the cord or the infant within the first hour of life were only available for inborn infants. Persistent metabolic acidosis was defined as pH < 7.15 for more than 12 h. Liver enzyme analysis was only performed once, prior to starting TH, and was thus not included in the analysis. Abnormal international normalized ratio (INR) and activated partial thromboplastin time (APTT) were defined as > 1.8 and > 43 s, respectively [ 24 ]. Thrombocytopenia was defined as platelet count < 100 000 per μl and severe thrombocytopenia as platelet count < 25 000 per μl or < 50 000 per μl with active bleeding. Anuria was defined as urine output < 0.5 ml/kg/h, and oliguria as urine output < 1 ml/kg/h. All treatments including medications were given as per existing treatment protocols. Mechanical ventilation was provided for infants with respiratory failure. The first-line anticonvulsant was phenobarbitone, the second phenytoin, and the third and fourth levetiracetam and a benzodiazepine (midazolam or clonazepam). Infants were followed up at 18 months with a complete neurological examination and the Bayley Scales of Infant and Toddler Development, third edition (Bayley-III) [ 25 ]. Adverse outcome was defined as death, CP with GMFCS (Gross Motor Function Classification System) level 3–5, or a Bayley-III cognitive and/or motor composite score (CS) < 85 at 18 months of age [ 23 ]. Statistical analysis All statistical analyses were performed using IBM SPSS Statistics 27 and 29. Data are presented as counts with proportions for dichotomous variables and medians with IQR for continuous variables, given the small sample size and the expectation that the variables would not be normally distributed. Group differences by randomization and outcome were analyzed using the Chi 2 -test, Fisher’s exact test, and linear-by-linear association as appropriate for dichotomous variables, and the Mann-Whitney U-test for continuous variables.
A p-value < 0.05 was considered statistically significant for all analyses. Unadjusted odds ratios for an adverse outcome were calculated for significant exposures.
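As a hedged illustration of the group comparisons and unadjusted odds ratios described in this protocol, the Python sketch below uses scipy; the 2x2 counts and pH values are invented for demonstration and do not come from the trial.

import numpy as np
from scipy import stats

# Hypothetical 2x2 table for a dichotomous exposure vs adverse outcome:
# rows = exposed / unexposed, columns = event / no event
table = np.array([[10, 4],
                  [8, 24]])

# Fisher's exact test, as used for small cell counts
fisher_or, p = stats.fisher_exact(table)

# Unadjusted odds ratio with a Wald 95% CI on the log scale
a, b, c, d = table.ravel()
or_ = (a * d) / (b * c)
se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f}), p = {p:.3f}")

# Continuous variables compared with the Mann-Whitney U test
th_ph = [7.28, 7.20, 7.32, 7.25]  # illustrative pH values, TH group
sc_ph = [7.36, 7.31, 7.40, 7.38]  # illustrative pH values, SC group
u, p_mw = stats.mannwhitneyu(th_ph, sc_ph)
print(u, p_mw)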
Results Fifty infants were included in the THIN study, 25 receiving TH and 25 receiving SC. Demographics, neonatal characteristics and outcomes are shown in eTable 1 in the Supplement. Longitudinal biochemical data on pH, platelet count, INR, APTT, creatinine, urea and troponin T in the TH- and SC-groups are shown in Fig. 1 . Infants in the TH-group had significantly lower pH at 6–12 h compared to the SC-group (median (IQR) 7.28 (7.20–7.32) vs 7.36 (7.31–7.40), respectively, p = 0.003) and at 12–24 h (median (IQR) 7.30 (7.24–7.35) vs 7.41 (7.37–7.43), respectively, p < 0.001). No infant had a persisting pH-value < 7.15 for more than 12 h. Data on biochemical profile, organ dysfunction and treatment according to randomization group are presented in Table 1 . During the first 24 h of life, significantly more infants in the SC-group had anuria/oliguria compared to the TH-group (16/23 (70%) vs 7/25 (28%), respectively; p = 0.004, Table 1 ). This difference also persisted when excluding the four infants (all in the SC-group) with global brain injury on MRI. Fourteen (28%) infants received mechanical ventilation, with a maximum duration of three days, and two of these (one in the TH- and one in the SC-group) were on high frequency oscillation. All except one infant (in the SC-group) were started on empirical antibiotic therapy, and twenty-one (11 in the TH- and 10 in the SC-group) were treated for more than 72 h. No infant had culture-positive sepsis. No infant had severe thrombocytopenia. Fresh frozen plasma was the most frequently used blood product during the intervention (10/25 (40%) in the TH-group and 8/25 (32%) in the SC-group, p = 0.56). Only two infants in the TH-group and one in the SC-group received packed red blood cells, and none were given a platelet transfusion. Two infants (SC-group) had subgaleal bleeds. No infant in the study had subcutaneous fat necrosis. Anuria/oliguria at 24–48 h, treatment with vasopressors, mechanical ventilation, and treatment with ≥ 2 anticonvulsants were all significantly associated with an adverse outcome (Table 2 ).
Discussion This is a secondary analysis of the THIN study, an RCT of cooling induced by phase changing material versus standard care for infants with moderate or severe HIE admitted to a level III NICU in South India. We report that cooled infants had lower pH during the first day of life compared to non-cooled infants, but no infant had persistent metabolic acidosis. Fewer cooled infants had early oliguria/anuria. There were no differences in other organ complications or in respiratory and hemodynamic support between cooled and non-cooled infants. These findings are consistent with those from HICs, showing low rates of adverse events during cooling [ 15 – 19 ]. Our finding that no infant had persistent metabolic acidosis is in contrast to the HELIX study, in which 23% of cooled and 12% of non-cooled infants had persistent metabolic acidosis [ 9 ]. Hemodynamic control with close monitoring of blood pressure is key to avoiding persistent hypotension and metabolic acidosis due to poor perfusion during TH [ 10 , 26 ]. All infants in the THIN study had an arterial line for continuous monitoring, and cardiac ultrasound to assess hemodynamics was available as standard of care. Blood pressure data were unfortunately not collected, but there was no difference in the use of vasopressors between the groups. The HELIX trial reports significantly more use of pressors and more persistent hypotension despite maximum inotropic support among cooled infants [ 9 , 27 ]. Similar to our findings, large RCTs from HICs have not reported an increased risk of hypotension or persistent metabolic acidosis with TH [ 15 – 19 ], and this could reflect the quality of intensive care and monitoring provided [ 26 ]. A higher, although not statistically significant, incidence of thrombocytopenia among cooled compared to non-cooled infants in the present study is in line with what has been reported by others [ 5 , 14 , 28 , 29 ]. Similarly, more cooled infants had elevated INR, but this difference was also not significant. Both groups had a very high proportion of infants with elevated APTT. This is most likely due to samples taken from arterial lines running heparin, which could interfere with the actual levels even though the protocol was to withdraw at least 5 mL of blood before taking samples. More importantly, cooled infants did not have an increased incidence of severe bleeding or other clinical indicators of severe coagulopathy, which is in line with studies from HICs [ 15 – 19 ]. This is also in contrast to the HELIX study, where severe thrombocytopenia, prolonged blood coagulation and gastric bleeds were more frequent in the TH-group [ 9 ]. A high proportion of infants in both groups received fresh frozen plasma, reflecting the use of plasma to correct biochemical abnormalities in coagulation status. Renal dysfunction is a common complication after perinatal asphyxia [ 30 ]. We found less anuria/oliguria in cooled than non-cooled infants, even after excluding four infants in the SC-group with global brain injury on MRI. Even though four large RCTs on cooling in HICs did not report any significant differences in renal dysfunction [ 15 – 18 ], the possible reno-protective effect of TH in our study is supported by a meta-analysis reporting significantly less acute kidney injury in cooled compared to non-cooled infants [ 31 ]. A recent cohort study with a low incidence of acute kidney injury among cooled neonates with HIE also supports a reno-protective effect of TH [ 32 ].
No infant in our study had culture-proven sepsis, and there was no significant difference between the groups in infection parameters. Unit protocol was to start antibiotics in all cases without an obvious sentinel event as the cause of HIE. Antibiotics were also continued beyond 72 h if CRP was still elevated. Cooling may lead to a late peak in CRP in the absence of infection [ 33 ], and the use and duration of prophylactic antibiotics in infants with HIE undergoing TH is still a matter of controversy [ 34 ]. A high rate of perinatal infection in LMICs has been suggested as a possible reason that TH may not be effective [ 35 – 37 ]. This is not supported by our data, and recent studies have reported similar mortality and a beneficial effect on neurodevelopment with TH even in the presence of infection [ 38 ]. The THIN study has reported improved outcome with TH, and these new findings of similar biochemical profiles and levels of supportive treatment support that this treatment is both effective and safe in our setting. A recent Indian cohort study of 155 cooled infants reports a higher incidence of sepsis and more use of invasive ventilation than in our trial, but argues that most cases are manageable in well-equipped NICUs [ 32 ]. Although there is no clear consensus on the optimal supportive care for infants receiving TH in any setting, respiratory support, continuous monitoring and preservation of hemodynamic stability, 24/7 imaging services, etc. are standard in cooling centers in HICs [ 7 , 21 ]. These requirements pose a challenge in many low-resource settings, where access to neonatal intensive care and transport services varies greatly. Clinical guidelines and recommendations for supportive treatments, as well as organization of services including transport, are needed and should be the focus of future research in both HICs and LMICs. The main limitation of this study is the small sample size. This limits the possibility of building prediction models for outcome based on organ dysfunction and/or biochemical profiles [ 39 ]. We did not have data on blood pressure, and some of the biochemical measurements were missing for many infants, especially pH values. The generalizability of the findings to other populations and settings is unclear. Finally, the data in this study come from a post-hoc analysis of an RCT, which is a limitation and requires caution when interpreting the results. Despite these limitations, we believe our study supports that the safety and efficacy of TH depend on the level of intensive care available and not on population differences as previously suggested [ 9 , 40 , 41 ].
Conclusions The findings in this post hoc analysis of a single-center RCT from India suggest that cooling for infants with moderate to severe HIE was not associated with more neonatal morbidity compared to standard care with normothermia. Despite lower pH at two time intervals, our findings do not support a negative effect of cooling on organ function in this setting. This suggests that the level of intensive care provided during TH may explain the lack of safety that has been reported in some studies from LMICs.
Background Therapeutic hypothermia for infants with moderate to severe hypoxic-ischemic encephalopathy is well established as standard of care in high-income countries. Trials from low- and middle-income countries have shown contradictory results, and variations in the level of intensive care provided may partly explain these differences. We wished to evaluate biochemical profiles and clinical markers of organ dysfunction in cooled and non-cooled infants with moderate/severe hypoxic-ischemic encephalopathy. Methods This secondary analysis of the THIN (Therapeutic Hypothermia in India) study, a single-center randomized controlled trial, included 50 infants with moderate to severe hypoxic-ischemic encephalopathy randomized to therapeutic hypothermia ( n = 25) or standard care with normothermia ( n = 25) between September 2013 and October 2015. Data were collected prospectively and compared by randomization groups. Main outcomes were metabolic acidosis, coagulopathies, renal function, and supportive treatments during the intervention. Results Cooled infants had lower pH than non-cooled infants at 6–12 h (median (IQR) 7.28 (7.20–7.32) vs 7.36 (7.31–7.40), respectively, p = 0.003) and 12–24 h (median (IQR) 7.30 (7.24–7.35) vs 7.41 (7.37–7.43), respectively, p < 0.001). Thrombocytopenia (< 100 000) was, though not statistically significant, twice as common in cooled compared to non-cooled infants (4/25 (16%) and 2/25 (8%), respectively, p = 0.67). No significant difference was found in the use of vasopressors (14/25 (56%) and 17/25 (68%), p = 0.38), intravenous bicarbonate (5/25 (20%) and 3/25 (12%), p = 0.70) or treatment with fresh frozen plasma (10/25 (40%) and 8/25 (32%), p = 0.56) in cooled and non-cooled infants, respectively. Urine output < 1 ml/kg/h was less common in cooled infants compared to non-cooled infants at 0–24 h (7/25 (28%) vs. 16/23 (70%), respectively, p = 0.004). Conclusions This post hoc analysis of the THIN study supports that cooling of infants with hypoxic-ischemic encephalopathy in a level III neonatal intensive care unit in India was safe. Cooled infants had slightly lower pH, but better renal function, during the first day compared to non-cooled infants. More research is needed to identify the necessary level of intensive care during cooling to guide further implementation of this neuroprotective treatment in low-resource settings. Trial registration Data from this article was collected during the THIN-study (Therapeutic Hypothermia in India; ref. CTRI/2013/05/003693 Clinical Trials Registry – India). Supplementary Information The online version contains supplementary material available at 10.1186/s12887-024-04523-6. Keywords Open access funding provided by Norwegian University of Science and Technology
Supplementary Information
Abbreviations APTT: Activated partial thromboplastin time; CP: Cerebral palsy; HELIX: Hypothermia for encephalopathy in low- and middle-income countries; HICs: High-income countries; HIE: Hypoxic-ischemic encephalopathy; INR: International normalized ratio; LMICs: Low- and middle-income countries; NICU: Neonatal intensive care unit; RCT: Randomized controlled trial; SC: Standard care with normothermia; TH: Therapeutic hypothermia; THIN: Therapeutic hypothermia in India. Acknowledgements The authors thank all the staff of the Department of Neonatology, CMC, Vellore, who took care of the study babies and the research officers for adequate recruitment and collection of clinical data. We thank members of the Data Safety Monitoring Board at the CMC for monitoring the study. Authors’ contributions NT was the PI of the THIN study, participated in the conceptualization and design of the study and data collection and revised and reviewed the manuscript. KHF analyzed the data and drafted the initial manuscript. KA, RS and KHF reviewed and revised the manuscript. All authors read and approved the final manuscript. Funding Open access funding provided by Norwegian University of Science and Technology Funding for the THIN study was provided by Central Norway Regional Health Authority (RHA) (reference: 2017/38297) and the Norwegian University of Science and Technology (NTNU) (reference: 2014/97710). There was no additional funding for this post-hoc analysis. Availability of data and materials The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Anonymized data (including data dictionaries) will be made available on request to researchers who provide a methodologically sound proposal for use in achieving the goals of the approved proposal. Declarations Ethics approval and consent to participate Informed consent was obtained from the parent(s) of all the subjects. The study was approved by the Institutional Review Board at the Christian Medical College (number 2013/8223) and the Regional Committee for Medical and Health Research Ethics in central Norway (number 2013/2167, reference: 18557). Consent for publication Not applicable. Competing interests NT has a patent 1796/DEL/2013 Life cradle device for inducing neonatal hypothermia issued. The other authors declare that they have no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Pediatr. 2024 Jan 15; 24:46
oa_package/d1/dd/PMC10789058.tar.gz
PMC10789059
38225634
Measles poses a significant global health threat, exacerbated by the COVID-19 pandemic. Despite the efficacy of two vaccine doses, under-5 mortality rates persist, and over 61 million measles vaccinations have been delayed worldwide. Nepal, striving to eliminate measles by 2023, faces a resurgence, with 1013 cases attributed to inadequate vaccination and healthcare accessibility issues. Compounded by disruptions from the COVID-19 pandemic, the outbreak highlights the urgent need for vaccination promotion, improved healthcare access, and misinformation mitigation. This situation underscores the critical role of global collaboration and healthcare infrastructure investment in safeguarding children's lives in Nepal and similar vulnerable regions. Keywords
Dear editor, Measles is an acute viral respiratory illness caused by an RNA virus belonging to the Paramyxoviridae family [ 1 ]. It is highly contagious, and human transmission occurs through direct contact with respiratory secretions and aerosolized droplets from an infected person. The main symptoms include high-grade fever, cough, conjunctivitis, rash (exanthem and enanthem), and rhinitis [ 2 ]. The rash usually begins on the face and gradually spreads downward [ 3 ]. The contagious period starts four days before the appearance of the exanthem and ends 4 days after the rash has disappeared [ 3 ]. This disease has the potential to cause severe complications, particularly in young children (< 5 years), including pneumonia, otitis media, diarrhoea, encephalitis, myocarditis, and, in rare cases, death [ 3 ]. Even though two doses of vaccination effectively prevent the disease, it has been associated with high rates of under-5 mortality [ 4 ]. In this letter, we discuss the hurdles to measles elimination in Nepal. The World Health Organization (WHO) suggests 95% vaccination coverage with two doses of measles-containing vaccine (MCV) to attain herd immunity. The remaining 5% benefit from protection, as measles is unlikely to propagate within a largely vaccinated population. Herd immunity is crucial for preventing widespread outbreaks by reducing the number of susceptible individuals in the population and ensuring protection for those who cannot be vaccinated [ 4 , 5 ]. There was steady progress in this century towards achieving the desired coverage rate before the projected global coverage for the first dose of the measles-containing vaccine dropped significantly as a consequence of the COVID-19 pandemic [ 5 ]. This has been further exacerbated by increased malnutrition in children during the COVID-19 pandemic, which corresponds with higher viral infection severity [ 1 ]. More than 61 million doses of measles-containing vaccine were either postponed or not administered [ 6 ]. Close to a quarter million children worldwide did not receive their first dose of the measles vaccine through routine immunization programs in 2021 [ 5 ]. The poor coverage has resulted in an upsurge in measles cases across the globe, especially in the South-East Asian region [ 7 ]. As of early July 2023, India and Pakistan in South Asia had the highest numbers of measles cases globally [ 6 ]. An upsurge in cases has also been noted in other countries in the region, like Nepal. This has impacted the WHO's regional goal of eliminating measles by 2020 in the South-East Asian region [ 8 ]. Nepal is a mountainous, land-locked country in South Asia with more than 30 million people. In the early part of the century (2003), around 5000 measles cases were reported in a year. At the time, routine immunization included just a single dose of vaccine, and the coverage rate was only 75%. Vaccination coverage has since increased significantly to 90%, resulting in a marked decline, with fewer than a hundred cases reported in 2017 [ 9 , 10 ]. The goal of eradicating measles by 2020 was close to being achieved. However, after another upsurge of cases in Nepal and neighboring regions between 2019 and 2020 (Fig. 1 ), the WHO extended the timeline for achieving the eradication objective to 2023 [ 9 ]. According to recent reports, Nepal has experienced numerous measles cases, with outbreaks and fatalities hindering its goal of eliminating measles by 2023, as targeted by the WHO South-East Asia Region.
There were 1013 cases of measles in Western, Central, and Eastern Nepal reported from January to August 2023 (Fig. 1 ). Among the affected regions, Kailali district in Western Nepal, Mahottari district in Central Nepal, and Sunsari district in Eastern Nepal reported the highest numbers of cases [ 11 , 12 ]. The probable reasons for the outbreak were lower vaccination coverage, a poor vaccine delivery system, lack of cold chain maintenance, and incomplete vaccine doses. Additionally, vaccine hesitancy and changes in individual health-seeking behavior have been exaggerated by the COVID-19 pandemic. This is especially true in remote and disadvantaged communities. Underdeveloped routine immunization and poor community involvement have also contributed to the spread of the disease. As of July 2023, more than one million cases of COVID-19 had been diagnosed in Nepal, resulting in more than twelve thousand fatalities (ndrrma.gov.np). The pandemic and the control response had collateral effects, disrupting routine vaccination programs and disease surveillance systems. This might have not only led to an upsurge in cases but also affected outbreak detection and response. This outbreak highlights the urgent need for increased efforts to promote vaccination and improve access, especially in remote areas. Additionally, efforts must be made to raise awareness about the importance of vaccination and to address misinformation. As the number of COVID-19 cases has significantly decreased in Nepal, the manpower and infrastructure developed for COVID-19 immunization can be diverted to immunization against measles and other vaccine-preventable diseases. Furthermore, collaborative immunization strategies across borders and mandating the measles vaccine as a requirement for school entry can be effective strategies to boost vaccination rates among families. There is also a need to devise strategies to help children catch up with missed measles vaccination doses. In conclusion, the recent measles outbreak in Nepal is a reminder of the importance of vaccination, monitoring malnutrition, administration of age-appropriate doses of vitamin A, strengthening measles surveillance, and mobilization, with the overall need to invest in healthcare infrastructure. It serves as a wake-up call for the authorities to restore vaccination coverage and improve access to healthcare services. We must work together to ensure that every child in Nepal has unhindered access to life-saving vaccines and healthcare services.
Acknowledgements None. Author contributions CKT: conceptualization, literature search, data curation, writing—original draft and editing. NG: writing—original draft, validation, reviewing and editing. NP and SA: validation, review and editing. MD & PG: supervision, validation and reviewing. All authors critically reviewed and approved the final version of the manuscript. Availability of data and materials Not applicable. Declarations Competing interests The authors declare no conflict of interest. No writing assistance was utilized in the production of this manuscript. Ethics approval Ethics approval was not required for this editorial article. Patient consent Informed consent was not required for this editorial article.
CC BY
no
2024-01-16 23:45:34
Trop Med Health. 2024 Jan 15; 52:10
oa_package/51/fc/PMC10789059.tar.gz
PMC10789060
0
Introduction Sleep disturbance is a change in sleeping habits affecting the quality and/or quantity of sleep [ 1 ]. It encompasses disorders of initiating and maintaining sleep, excessive daytime sleepiness, and disorders of the sleep-wake cycle [ 2 , 3 ]. Sleep disturbance is a critical public health concern that affects the overall productivity of a country by decreasing individuals' ability to comprehend and accomplish their day-to-day tasks, disrupting school or work performance, and diminishing mental and physical health [ 4 , 5 ]. During pregnancy, an inconsistent sleep-wake cycle is an indicator of sleep disturbance [ 6 – 9 ]: total sleep time is typically increased in the first trimester, normal in the second, and decreased in the third [ 3 , 9 , 10 ]. Sleep disturbance is highly prevalent during pregnancy and is commonly overlooked as a potential cause of maternal and fetal morbidity [ 3 , 11 ]. Women's sleep-wake cycle is frequently disturbed during pregnancy, which can diminish usual health functioning [ 9 , 12 ] and increase pregnancy-related psychiatric comorbidities, including anxiety and depression [ 11 , 13 ]. As gestational age increases, women are more likely to have frequent awakenings and inadequate sleep habits [ 14 , 15 ]. Sleep disturbance causes maternal and fetal mental impairment [ 16 ], preterm birth [ 17 ], and low birth weight [ 11 , 18 ], increases the risk of developing gestational diabetes mellitus [ 19 – 21 ], and predisposes the offspring to developmental delay and learning disabilities [ 22 ]. Sleep disturbance among pregnant women can be due to the hormonal and mental changes that the body undergoes during pregnancy [ 23 ]. It may also be due to mechanical and physical changes that lead women to experience discomforts including leg cramps, urinary incontinence, shortness of breath, and intense backaches [ 24 ]. The marked increase in estrogen and progesterone levels during pregnancy influences a diverse range of both physiological and psychological processes, including sleep and mood [ 25 ]; estrogen and oxytocin cause difficulty in breathing and sleep fragmentation [ 26 ]. Previous studies have identified multiple factors that influence sleep disturbance among pregnant women, such as unplanned pregnancy, third-trimester pregnancy, anxiety, depression, and stress. In Ethiopia, several individual studies have been conducted on the prevalence and factors of sleep disturbance among pregnant women. However, the findings are inconsistent, ranging from 30.8 to 68.4%, and have not been systematically reviewed. Therefore, the purpose of this systematic review and meta-analysis was to determine the pooled prevalence of sleep disturbance and its associated factors among Ethiopian pregnant women. The results might help stakeholders and policymakers to implement programs and healthcare initiatives aimed at improving the sleep quality and quantity of pregnant women.
Method Information sources and searching strategy All potential articles were retrieved from electronic databases such as PubMed, EMBASE, Web of Science, and Google Scholar. Article searching was conducted using free-text search terms and Medical Subject Headings (MeSH). We used the following search terms: “Sleep disturbance,” “Poor sleep quality,” “fragmented sleep,” “Disturbed sleep awake cycle,” “pregnant women,” “Predictors,” “Factors,” “Risk factors,” “Prevalence,” “Proportion,” and “Ethiopia.” The search was conducted using Boolean operators (“AND” and “OR”) and truncations, without publication date restriction. Articles were searched from October 12 to November 28, 2023. For PubMed we used the following search formulation: ((sleep disturbance*[All Fields]) OR (poor sleep quality*[All Fields])) OR (fragmented sleep*[All Fields])) OR (disturbed sleep awake cycle*[All Fields])) AND (Pregnant mothers*[All Fields])) OR (women attending antenatal care follow-up*[All Fields])) AND (predictors) OR (risk factors[MeSH Terms]) OR (associated factors) OR (Barriers [MeSH Terms]) AND (Prevalence) OR (Proportion[MeSH Terms]) AND (Ethiopia)). The review was done following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist [ 27 ]. Inclusion and exclusion criteria Inclusion criteria for this review were observational studies focusing primarily on sleep disturbance and articles assessing factors related to sleep disturbance among pregnant women, conducted in any study period (the study period was not restricted). Editorial letters, reviews, and commentaries were excluded. Finally, reviewers independently examined each study's eligibility, and any discrepancies were handled by discussion and consensus. Study selection Initially, all retrieved articles were exported to EndNote X7 reference manager software to manage duplicates and the screening process. Then, two review authors (SST, YAT) independently screened articles by title and abstract after the exclusion of duplicates. Subsequently, the full texts of potentially eligible articles were retrieved and screened using the predetermined inclusion and exclusion criteria. Disagreements between authors during study screening were resolved by the remaining review authors (MK, SMW, and BDM). Exposure Factors or determinants of sleep disturbance. Outcome Pregnant women who have experienced sleep disturbance. Outcome measurement The primary outcome of this review was the prevalence of sleep disturbance among pregnant women. It was assessed by the Pittsburgh Sleep Quality Index (PSQI), a self-report questionnaire containing 19 items assessing seven components of sleep: subjective sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, daytime dysfunction, and use of sleep medications. Each component is scored from 0 to 3. The total global PSQI score ranges from 0 to 21, and a global score of ≥ 5 classified pregnant women as having sleep disturbance. The second outcome of this review was the identification of factors associated with sleep disturbance among pregnant women in Ethiopia, measured with the odds ratio (OR). The odds ratio was calculated for each identified factor based on the binary data presented by each study. Quality assessment To measure the quality of the original studies, the Newcastle-Ottawa Scale (NOS) tool was used.
The evaluation framework is divided into three areas: the first part is a five-star rating system that evaluates the selection of study groups in each study; the second section evaluates the comparability of studies, with the possibility of gaining two stars; and the final part evaluates the appropriateness of the statistical analysis each primary study used, with three stars possible. Studies that scored more than 6 stars out of 10 were considered high quality, those scoring 5 or 6 out of 10 were considered good quality, and those scoring less than 5 were considered poor quality. No studies were excluded due to poor quality. Two review authors (SST, YAT) independently assessed the quality of studies, and any disagreements were resolved by the remaining review authors (MK, SMW, BDM). Data extraction To extract data from the articles included in the review, a standardized data extraction tool adapted from the Joanna Briggs Institute (JBI) was used. Two authors (SST and YAT) independently extracted the data from each primary study included in this meta-analysis. From each study, information such as the first author’s name, the study’s region and setting, the year of publication, the study design, the study participants, the sampling technique, the sample size, the prevalence and factors associated with sleep disturbance, and measures of association (OR) were extracted. Statistical analysis The extracted data were imported into STATA version 14 for analysis. Tables, figures, and forest plots were used to describe and summarize findings. We calculated the I 2 statistic to assess study heterogeneity; it describes the percentage of total variation among studies, with 25%, 50%, and 75% representing low, moderate, and high heterogeneity, respectively [ 28 ]. A random-effects model was used to compute the pooled estimate of sleep disturbance and the pooled OR for each identified factor if substantial heterogeneity was observed; otherwise, a fixed-effects model was used. Furthermore, a graphic review of the funnel plot and Egger’s regression test were used to determine the presence or absence of publication bias. However, publication bias for each factor was not evaluated due to the limited number of studies. Publication bias A graphic review of the funnel plot and Egger’s regression test were used to determine the presence or absence of publication bias. Accordingly, the results of the funnel plot and Egger’s regression test in this meta-analysis showed no evidence of publication bias (Fig. 1 ).
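The pooling and heterogeneity procedures described above can be illustrated with a short script. This is a minimal sketch, not the authors' STATA code: the prevalence and sample-size values are hypothetical placeholders for the six included studies, proportions are pooled on the logit scale with inverse-variance weights, and an Egger-type regression checks funnel-plot asymmetry.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Hypothetical study-level inputs (prevalence p and sample size n);
# the real values come from the six included primary studies.
p = np.array([0.308, 0.452, 0.521, 0.563, 0.601, 0.684])
n = np.array([410, 420, 415, 423, 400, 415])

# Pool on the logit scale, where the variance of each estimate is
# approximately 1/(n*p) + 1/(n*(1-p)).
y = np.log(p / (1 - p))
v = 1 / (n * p) + 1 / (n * (1 - p))
w = 1 / v                                   # inverse-variance (fixed-effect) weights

pooled = np.sum(w * y) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
inv_logit = lambda x: 1 / (1 + np.exp(-x))
print(f"Pooled prevalence {100*inv_logit(pooled):.1f}% "
      f"(95% CI {100*inv_logit(lo):.1f}-{100*inv_logit(hi):.1f})")

# Cochran's Q and the I^2 statistic quantify between-study heterogeneity.
Q = np.sum(w * (y - pooled) ** 2)
dof = len(p) - 1
I2 = max(0.0, (Q - dof) / Q) * 100
print(f"Q = {Q:.2f} (p = {1 - stats.chi2.cdf(Q, dof):.4f}), I^2 = {I2:.1f}%")

# Egger's regression: standardized effect vs. precision; an intercept
# different from zero suggests funnel-plot asymmetry (publication bias).
egger = sm.OLS(y / np.sqrt(v), sm.add_constant(1 / np.sqrt(v))).fit()
print(f"Egger intercept p-value: {egger.pvalues[0]:.3f}")
```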
Result Searching results A total of 17,100 articles were retrieved from different electronic databases and from Google Scholar. Of these, 491 duplicate articles were removed using the EndNote citation manager; 16,577 articles were excluded after title and abstract screening, leaving 32 articles. After a careful review of these articles for the presence of the outcome variable and the other inclusion criteria, 26 papers were removed in compliance with the exclusion criteria. Finally, six articles remained for the analysis. The overall study selection process is represented in the flow diagram (Fig. 2 ). Characteristics of the included studies All the included studies used a facility-based cross-sectional design and were conducted from 2019 to 2020. One study included in this review employed simple random sampling [ 29 ], whereas the others used systematic random sampling [ 30 – 34 ]. All studies used interviewer-administered data collection. A total of 2483 pregnant women took part in the review. Across the included studies, sleep disturbance among pregnant women ranged from 30.8% [ 30 ] to 68.4% [ 31 ]. Regarding study region, three studies were done in Amhara, two studies in Oromia, and one study was conducted in multiple regions of Northwest Ethiopia (Table 1 ). Prevalence of sleep disturbance among pregnant women in Ethiopia The results of this meta-analysis showed that the overall pooled prevalence of sleep disturbance among pregnant women in Ethiopia was 50.43% (95% CI: 39.34–61.52). A fixed-effects model was applied, as no substantial heterogeneity was observed among the included studies (Fig. 3 ). Factors associated with sleep disturbance among pregnant women In this review, depression, stress, third-trimester pregnancy, anxiety, unplanned pregnancy, poor sleep hygiene and multigravidity were the factors associated with sleep disturbance among pregnant women in Ethiopia. Meta-analysis of four studies showed that third-trimester pregnancy has a significant association with sleep disturbance: the pooled odds of sleep disturbance were more than four times (AOR = 4.03; 95% CI: 2.84, 5.71) higher in pregnant women during the third trimester compared with the first and second trimesters. Two studies showed that multigravidity has a significant association with sleep disturbance: the overall odds of sleep disturbance were about 1.99 times (AOR = 1.99; 95% CI: 1.54, 2.59) higher among multigravida pregnant women. Two studies showed that unplanned pregnancy has a significant association with sleep disturbance: the overall odds of sleep disturbance were 2.56 times (AOR = 2.56; 95% CI: 1.52, 4.31) higher among pregnant women who had an unplanned pregnancy than among their counterparts. Three studies showed that pregnant women who had depression during pregnancy were 3.57 times (AOR = 3.57; 95% CI: 2.04, 6.27) more likely to experience sleep disturbance than their counterparts. Three studies also showed that stress has a significant association with sleep disturbance among pregnant women: the overall odds of sleep disturbance were 2.77 times (AOR = 2.77; 95% CI: 1.57, 4.88) higher among pregnant women who had stress than among their counterparts. Two studies indicated that anxiety has an association with disturbed sleep: the pooled odds of sleep disturbance were about 3.69 times (AOR = 3.69; 95% CI: 1.42, 9.59) higher among pregnant women with anxiety than among their counterparts.
Two studies indicated that poor sleep hygiene has an association with sleep disturbance: the pooled odds of sleep disturbance were about 2.49 times (AOR = 2.49; 95% CI: 1.56, 3.99) higher among pregnant women with poor sleep hygiene practices than among their counterparts (Table 2 ).
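Each pooled AOR reported above combines study-level odds ratios on the log scale with inverse-variance weights. The following is a minimal sketch of that computation; the per-study ORs and confidence limits used here are hypothetical stand-ins (the actual inputs are the study-level estimates in Table 2).

```python
import numpy as np

def pool_odds_ratios(ors, ci_low, ci_high, z=1.96):
    """Fixed-effect (inverse-variance) pooling of odds ratios.

    Works on the log scale; the standard error of each log-OR is
    recovered from its reported 95% CI as (log(hi) - log(lo)) / (2*z).
    """
    log_or = np.log(ors)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * z)
    w = 1 / se ** 2
    pooled = np.sum(w * log_or) / np.sum(w)
    se_pooled = np.sqrt(1 / np.sum(w))
    return tuple(np.exp([pooled, pooled - z * se_pooled, pooled + z * se_pooled]))

# Hypothetical values standing in for the two unplanned-pregnancy studies.
or_, lo, hi = pool_odds_ratios(np.array([2.3, 2.9]),
                               np.array([1.2, 1.4]),
                               np.array([4.4, 6.0]))
print(f"Pooled AOR = {or_:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```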
Discussion This systematic review and meta-analysis was conducted to estimate the pooled prevalence of sleep disturbance and its predictors among pregnant women in Ethiopia. We found that about half of pregnant women in Ethiopia have a disturbed sleep pattern. This finding underlines the significance of timely screening, early diagnosis and proper intervention for sleep disturbance among pregnant women. The pooled estimate of sleep disturbance among pregnant women in Ethiopia was 50.43% (95% CI: 39.34–61.52). This finding is in line with previous studies reporting 45.7% [ 35 ] and 54.2% [ 36 ]. However, the result of this meta-analysis was lower than that of a systematic review and meta-analytic study on the association between sleep disorders during pregnancy and the risk of postpartum depression in China (76%) [ 37 ]. The possible reasons for this variation might be differences in the socioeconomic status of the study participants, including differences in the prevalence of prenatal depression and stress. This review identified third-trimester pregnancy, multigravidity, unplanned pregnancy, depression, stress, anxiety, and poor sleep hygiene as the factors statistically associated with sleep disturbance among pregnant women in Ethiopia. This research found an association between third-trimester pregnancy and sleep disturbance, a finding supported by previous reviews [ 3 , 35 , 37 ]. Possible reasons for sleep disturbance during the third trimester might be the physiological changes of pregnancy, including urinary frequency, fetal movement, lower back pain, leg cramps, heartburn, easy fatigability and abdominal discomfort [ 14 ]. Furthermore, as a pregnant woman approaches her expected date of delivery, she might worry about the mode of delivery, labor, birth outcome and financial issues, all of which could negatively affect her sleep pattern. Multigravidity had a significant association with sleep disturbance. This may be explained by maternal sleep quality being disturbed as a result of being overstressed about having extra roles after childbirth and about incorporating the new roles and responsibilities of motherhood. In addition, multigravida mothers report that their sleep pattern depends on their children’s sleep-wake cycle: if children frequently wake up at night, mothers will have a disturbed sleep pattern. The review also showed that unplanned pregnancy has a significant association with sleep disturbance, which is consistent with another study [ 38 ]. This may be due to inadequate preparation for pregnancy and childbirth, leading mothers to feel stressed by all the changes and challenges. The odds of sleep disturbance were more than three times greater among pregnant women with depression compared with their counterparts. Another study supports this finding [ 39 ]. This could be because mood and emotional disturbance result in sleep disturbance, as depression and sleep disturbance have a bidirectional relationship [ 40 ]. Furthermore, evidence has indicated that prenatal depression is one of the most likely psychological factors contributing to sleep disturbance during pregnancy [ 9 , 39 ]. This review also showed that stress has an association with sleep disturbance, which is supported by previous studies [ 41 , 42 ].
These findings may be because stress is thought to increase cognitive and somatic arousal, which negatively affects sleep, primarily by decreasing sleep duration, and leads pregnant women to have a fragmented sleep pattern [ 43 ]. Moreover, the direct effect of stress during pregnancy on sleep quality might be related to the arginine vasopressin hormone, which is involved in the stress response and the circadian regulation of the sleep-wake cycle [ 44 , 45 ]. The findings of this review showed that anxiety has a significant association with sleep disturbance, which is consistent with another study [ 46 ]. The odds of sleep disturbance were more than three times higher among pregnant women with anxiety compared with their counterparts. The possible reason might be the emotional and physiological arousal caused by anxiety and worry, which results in more attention to environmental and personal stimuli and can lead to sleep disturbance [ 47 , 48 ]. Furthermore, poor sleep hygiene practice has a significant association with sleep disturbance among pregnant women, which is supported by other previous studies [ 3 , 12 ]. The possible justification might be the lack of healthy sleep habits, behaviors and environmental conditions that help pregnant women obtain adequate sleep, for instance drinking caffeinated drinks, performing dynamic physical activity and keeping inconsistent sleep-wake times. Limitation of the study Although this is the first systematic review and meta-analysis of sleep disturbance among pregnant women in Ethiopia, it is not without limitations. This review may not be representative of all regions, as the included studies were done in only some regions of Ethiopia. In addition, a causal association between the outcome variable and the identified factors could not be established, since all the included studies were cross-sectional in nature. Because of the limited number of studies, publication bias assessment and subgroup analysis were not performed for each identified factor, although heterogeneity was observed in some analyses.
Conclusion and recommendation This systematic review and meta-analysis found that about half of pregnant women have a disturbed sleep pattern. Third-trimester pregnancy, multigravidity, unplanned pregnancy, depression, stress, anxiety, and poor sleep hygiene were the factors statistically associated with sleep disturbance. Thus, the implementation of interventions for sleep disturbance after screening pregnant women is needed, with the collaborative effort of policy-makers and stakeholders. Moreover, public health interventions targeted at the prevention of unintended pregnancy and depression during pregnancy, and addressing the other identified risk factors, should be implemented.
Introduction Globally, sleep disturbance is a foremost public health issue among pregnant women and may have undesirable birth outcomes, including neurocognitive impairment, preterm birth, low birth weight, and neonatal morbidity and mortality. In Ethiopia, inconsistent findings have been reported on the prevalence of sleep disturbance among pregnant women. Therefore, this review aims to estimate the pooled prevalence of sleep disturbance and its associated factors among pregnant women in Ethiopia. Methods This systematic review and meta-analysis of observational studies was designed according to the PRISMA guideline. A systematic search of the literature was conducted in PubMed, Scopus, Web of Science, and Google Scholar using relevant search terms. The Newcastle-Ottawa Scale was used to evaluate the quality of all selected articles. Data were analyzed using STATA version 14 software. Publication bias was checked using Egger’s test and a funnel plot. Cochran’s chi-squared test and I 2 values were used to assess heterogeneity. A fixed-effects model was applied during meta-analysis. Results In this review, six studies were included after reviewing 17,100 articles. The pooled prevalence of sleep disturbance among pregnant women in Ethiopia was 50.43% (95% CI: 39.34–61.52). Third-trimester pregnancy (AOR = 4.03; 95% CI: 2.84, 5.71), multigravidity (AOR = 1.99; 95% CI: 1.54, 2.59), unplanned pregnancy (AOR = 2.56; 95% CI: 1.52, 4.31), depression (AOR = 3.57; 95% CI: 2.04, 6.27), stress (AOR = 2.77; 95% CI: 1.57, 4.88), anxiety (AOR = 3.69; 95% CI: 1.42, 9.59) and poor sleep hygiene (AOR = 2.49; 95% CI: 1.56, 3.99) were statistically associated with sleep disturbance among pregnant women. Conclusion This review revealed that the magnitude of sleep disturbance among pregnant women in Ethiopia is relatively high and that multiple factors determine the likelihood of having a disturbed sleep-wake pattern. Thus, the implementation of interventions for sleep disturbance after screening pregnant women is needed. Moreover, public health interventions targeted at the prevention of unintended pregnancy and depression during pregnancy should be implemented.
Acknowledgements We would like to express our gratitude to all of the authors of the studies included in this systematic review and meta-analysis. Author contributions SST, YAT and BDM contributed to the design, analysis and interpretation of data and were responsible for drafting the article. SST, SMW, MK and BDM were involved in the acquisition, analysis and interpretation of data. All authors read and approved the final manuscript. Funding There is no funding for this study. Data availability All data generated or analyzed during this study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Psychiatry. 2024 Jan 15; 24:51
oa_package/c1/0e/PMC10789060.tar.gz
PMC10789061
0
Background Glioblastoma (GBM) is an incurable cancer type [ 27 ]. Primary glioblastomas (GBMs) are often associated with disturbed RAS signaling, although mutations in the KRAS gene are rare in human gliomas and particularly rare in WHO grade III and IV gliomas in adult patients [ 21 , 26 , 32 , 40 ]. Nevertheless, glioblastoma is considered a KRAS-driven cancer due to the essential role of KRAS in mouse malignant gliomagenesis [ 14 , 17 , 18 , 36 ]. Although RAS alterations are not commonly reported in GBMs [ 21 , 23 , 45 , 46 ], mutations in genes that contribute to activated KRAS signaling, such as neurofibromin-1 (NF1), are observed, making KRAS signaling a potential target in GBM [ 3 , 5 ]. More recently, genomic characterization of cerebellar glioblastomas (C-GBMs) reported RAS hotspot mutations or amplification [ 8 ]. Targeting glioblastoma (GBM) based on molecular subtyping has not yet translated into successful therapies. Combinatorial therapy based on temozolomide (TMZ) and cisplatin (CDDP) shows promising potential for GBM therapy in clinical trials [ 43 ]. Cisplatin is a mainstay of cancer chemotherapy for multiple tumour types, including medulloblastoma, but is not part of glioblastoma protocols. There is no clear explanation for the difference in clinical efficacy of cisplatin between medulloblastomas and glioblastomas, even though cisplatin is effective in vitro against the latter. Although cisplatin has been shown to have cytotoxic effects on human glioblastoma cells in vitro [ 20 , 41 ], the response in clinical treatment is weak and has not improved the overall survival of patients with brain tumours. Nevertheless, many studies are currently focused on new delivery modalities for effective cisplatin treatment in GBM [ 2 , 6 , 37 , 43 ]. The mechanism of action of cisplatin is mainly based on DNA damage, inducing the formation of DNA adducts. These DNA lesions trigger a series of signal-transduction pathways, leading to cell-cycle arrest, DNA repair and apoptosis [ 13 ]. Within the area of combinatorial therapies, MEK inhibitors are currently making growing advances in clinical trials for glioblastoma treatment [ 19 ]. PD98059, a potent but reversible MEK inhibitor, has recently been developed as a new formulation to obtain long-term inhibition of pERK1/2 in brain regions at detectable levels [ 30 ]. PD98059 belongs to the first-generation MEK1/2 inhibitors; it exerts its inhibitory properties by binding to the ERK-specific MAP kinase kinase MEK, thereby preventing phosphorylation of ERK1/2 (p44/p42 MAPK) by MEK1/2. PD98059 does not inhibit the MAPK homologues JNK and P38 [ 7 , 12 ]. Unfortunately, despite wide use in preclinical studies, this compound failed to reach clinical evaluation because of its pharmaceutical limitations [ 24 , 28 ]. In general, targeting MEK and other downstream proteins in the RAS signaling cascade has shown limited efficacy in RAS-driven malignancies, likely owing to dose-limiting toxicity and loss of auto-inhibitory feedback. Cyclin D1 is a cell-cycle regulator essential for G1 phase progression and a candidate proto-oncogene implicated in the pathogenesis of several human tumour types, including glioblastomas [ 44 ]. Cyclin D1, in association with CDK4/6, acts as a mitogenic sensor and integrates extracellular mitogenic signals with cell cycle progression. When deregulated (overexpressed, accumulated, inappropriately located), cyclin D1 becomes an oncogene and is recognized as a driver of solid tumours. Cyclin D1 (CCND1) is upregulated in many solid cancers, promoting cancer progression [ 29 , 38 ].
Cyclin D1 expression has been shown to be associated with the pathological grade and aggressiveness of glioma, the prognosis of patients with glioma, and the response to chemotherapy [ 22 , 33 , 42 ]. In this study, we used cis-diamminedichloroplatinum (cisplatin, CDDP) to treat long-term cultures of human malignant glioblastoma in order to study their KRAS-dependent chemosensitivity. Gain-of-function experiments with constitutively active KRAS G12V enhanced glioblastoma sensitivity to CDDP, as measured by apoptosis and viability; interestingly, specific post-translational modifications in the HVR region altered sensitivity to cisplatin and/or MEK inhibition. The aim of our study was to elucidate the relationships between KRAS and its post-translational modifications, MEK inhibition, and cancer cell response to chemotherapy with cisplatin in vitro.
Methods Long-term and primary glioblastoma cultures Human glioblastoma cell lines U87MG, U251MG, T98G, IDH1 mut U87, SW1783, and Ln229 were obtained from the American Type Culture Collection (Rockville, MD) and cultured at 37 °C in 5% CO 2 in Dulbecco's Modified Eagle’s Medium (DMEM), supplemented with phenol red, L-glutamine (2 mM), 1% pen-strep and 10% Fetal Bovine Serum (FBS; South America origin, Brazil). SVGp12 Human Fetal Glial Cells (ATCC-CRL-8621 # 4282167) and NHA cells (purchased from Cambrex Corporation, East Rutherford, NJ) were grown according to the manufacturer’s instructions. Normal primary fetal neural stem cells were derived from brain subventricular zone (SVZ) tissue of a premature neonate who died of pulmonary failure; the continuous culture from this tissue is indicated as SC-30 (SC30; 25-week gestation, 1-day-old premature infant) [ 34 ]. Primary astrocytoma cultures (WHO grade IV; samples GBM#C, GBM#D, GBM#F and GBM#M) were established from tumor specimens of patients and cultured as described [ 25 , 47 ]. Primary astrocytoma cultures (WHO grade IV) GBM#1, GBM#10, GBM#107, GBM#11, GBM#148, GBM#15, GBM#47, GBM#53, GBM#80 and GBM#82 were established from tumor specimens of patients and cultured as described [ 31 ]. The genetic background of the U87MG and U251MG long-term cultures is as follows: U87MG (p53 wild type; IDH1 w.t.; low level of methyl guanine transferase, MGMT); U251MG (p53 mutated; IDH1 w.t.; low level of MGMT). Moreover, U87MG is highly cytogenetically aberrant [ 10 , 15 ]. RT-PCR and qPCR Total RNA was extracted using the acid guanidinium isothiocyanate-phenol-chloroform method. cDNA was synthesized in 20-μl reactions containing 2 μg of total RNA, 200 units of Superscript III Reverse Transcriptase (Invitrogen), and 1 μl of random hexamer (20 ng/μl) (Invitrogen). mRNA was reverse-transcribed for 1 h at 50 °C; the reaction was then heat-inactivated for 15 min at 70 °C. The products were stored at -20 °C until use. Quantitative (q)RT-PCR was performed on an Applied Biosystems ABI StepOne Plus Real-Time PCR 96-Well System using the SYBR Green detection system (FS Universal SYBR Green MasterRox/Roche Applied Science). For all reactions, the following conditions were used: 95 °C for 10 min, then 40 cycles of 95 °C for 30 s and 58 °C for 75 s. Primer sequences used are listed below: KRAS F: 5’ – TTG CCT TCT AGA ACA GTA GAC A – 3’; KRAS R: 5’ – TTA CAC ACT TTG TCT TTG ACT TC – 3’. Fold changes were normalized against the reference gene (18S) amplified with the following primer set: 18S F: 5’ – GAC CGA TGT ATA TGC TTG CAG AGT – 3’; 18S R: 5’ – GGA TCT GGA GTT AAA CTG GTC CAG – 3’. Antibodies and reagents Monoclonal anti-panRas antibody (Ab-3) was purchased from Calbiochem (EMD Biosciences, an affiliate of Merck KGaA, Darmstadt, Germany). The antibodies against Ki-Ras (sc-521), anti-ERK, phospho-ERK, anti-cyclin A, anti-p53, anti-cyclin D1 and anti-p27 were from Santa Cruz Biotechnology (Santa Cruz, CA, USA). The anti-β-actin was from Sigma-Aldrich (St. Louis, MO, USA). The peroxidase-conjugated (HRP) anti-rabbit and anti-mouse secondary antibodies, nitrocellulose membrane PROTRAN and the ECL detection system were from Amersham-Pharmacia (Biotech, UK Limited). Fetal bovine serum (FBS), trypsin-EDTA, and penicillin/streptomycin solutions were purchased from HyClone Europe Ltd. (Cramlington, UK); Dulbecco’s Modified Eagle’s Medium (DMEM) and Lipofectin reagent were from GIBCO BRL, Life Technologies (Carlsbad, CA, USA).
All other reagents were purchased from Sigma-Aldrich (Milano, Italy). Western blot assay Cells were exposed to cisplatin (CDDP, 16.6 μM) and/or to the MEK inhibitor PD98059 (40 μM) for the times indicated in the figure legends, harvested, and lysed in ice-cold RIPA buffer (1% Triton X-100, 0.5% DOC, 0.1% SDS, 50 mM Tris-HCl, pH 7.6, 150 mM NaCl, 1 mM PMSF, and 1 mg/ml aprotinin, leupeptin, and pepstatin). After centrifugation at 12,000 g, protein concentrations were determined by Bradford assay. Twenty to fifty micrograms of protein were subjected to 7% or 12% SDS-PAGE and transferred onto nitrocellulose membranes (Schleicher & Schuell, Germany). Blots were then blocked in Tris-buffered saline (50 mM Tris-HCl, 200 mM NaCl, pH 7.4) containing 5% nonfat dry milk (Bio-Rad Laboratories Inc., Hercules, CA) and incubated with primary antibodies as follows: anti-pan-Ras antibody, Ab-3, 1:500; anti-Ki-Ras antibody, 1:200; anti-Ha-Ras antibody, 1:400, all incubated overnight at 4 °C; anti-β-actin, 1:1,000, incubated 2 h at room temperature; anti-ERK1/2 and anti-phospho-ERK1/2, 1:1,000, incubated 2 h at room temperature. Blots were washed three times with PBS and then incubated for 2 h with horseradish peroxidase-conjugated secondary antibodies (all used at 1:5,000). Immunostaining was revealed by the ECL detection system (Amersham). Cell transfection Human U251MG, U87MG and HEK-293 cells were transiently transfected with Ras constructs mutated at the carboxyl-terminal hypervariable region (HVR): (1) constitutively active K-Ras carrying a Val-12 point mutation (KRAS4B V12: constitutively active Val instead of Gly at position 12); (2) a double K-Ras mutant carrying Val-12 and Ala-185 mutations (KRAS4B V12A185: Ala instead of Cys, preventing farnesylation); (3) a triple K-Ras mutant carrying Val-12, Glu-177 and Ala-185 mutations (KRAS4B V12E177: Glu instead of Lys at position 177, disrupting the polylysine stretch); (4) a double H-Ras mutant carrying Leu-61 and Ser-186 mutations (HRASL61S186; a cytoplasmic, GTP-bound interfering Ras mutant) [ 48 ]. HEK-293 cells were plated onto 100-mm Falcon dishes and grown in DMEM containing 10% FBS. One day after plating, cells were transfected with 10 μg of cDNA in serum-free medium using Lipofectin™ Transfection Reagent (Thermo Fisher Scientific Inc.), according to the manufacturer’s instructions. Two hours later, cultures were switched into growing medium. After 24 h, the cells were processed for fluorescence-activated cell sorting (FACS) analysis. Complete sequence verification of the DNA plasmids carrying point mutations was performed with a modification of the Sanger dideoxy method as implemented in a double-stranded DNA cycle sequencing system with fluorescent dyes. Sequencing reactions were then run on a 3130 automated sequencing system (Applied Biosystems) [ 1 ]. Cell cycle distribution analysis Flow cytometry was used to determine the cell cycle distribution using a cell cycle kit with PI staining (BD Biosciences). U87MG and U251MG cells were plated in 6-well plates and treated with various concentrations (0, 1, 2, 5 μM) of cisplatin for 72 h. The cells were then collected by centrifugation at 167.7 × g for 5 min at room temperature, washed with PBS and fixed with cold 70% ethanol for 24 h at 4 °C. Subsequently, the cells were treated with 50 μl of 100 μg/ml RNase at 37 °C, washed twice with PBS, centrifuged at 167.7 × g for 5 min and stained with 5 μl PI (50 mg/ml stock solution).
The results were analyzed on a BD FACSAria (BD Biosciences). The data were quantified using ModFit LT 4.0 (Verity Software House, Inc.) [ 25 ]. TUNEL assay 5 × 10^5 cells were grown in 60-mm dishes. At 18 h after treatment, cells were fixed in 2% paraformaldehyde/1 × PBS for 10 min at RT, washed once in PBS + 50 mM glycine for 10 min at RT and washed again three times for 5 min in PBS. Cells were permeabilized with 0.5% Triton X-100/1 × PBS for 10 min, washed 3 × 5 min in PBS and incubated with 100 μl of 1 × TdT reaction mix. TdT-mediated dNTP nick end labeling was carried out at 37 °C for 60 min using 15 U of TdT (Roche Diagnostics S.p.A, Roche Applied Science, Monza, Italy) and 2 μl of 2 mM BrdUTP. BrdUTP incorporation was revealed by anti-BrdU-FITC, and cells were then stained with propidium iodide. The data were acquired and analysed with CELLQuest software for bivariate analysis of DNA content versus BrdU. Experiments were performed in triplicate [ 11 ]. Viability assay U251MG (p53 mutated; low level of methyl guanine transferase, MGMT) and U87MG (p53 wild type; low level of MGMT) cells [ 9 ] (American Type Culture Collection, Manassas, VA) were maintained in Dulbecco’s minimal essential medium (DMEM) (Thermo Fisher Scientific) with 10% Foetal Bovine Serum (FBS; Hyclone, Logan, UT) and supplemented with glutamine/pyruvate (HyClone) at 37 °C with 5% CO 2 . Cells were treated with different concentrations of cisplatin (CDDP) ranging from 0 to 16.6 μM (Sigma, St. Louis, MO), dissolved in Opti-MEM (Invitrogen, Carlsbad, CA) from 50 mM DMSO stock solutions (as indicated in the figure legends). After 4 h of treatment, 10% FBS was added, and cells were incubated for an additional 44 h. To determine viability, PrestoBlue (Invitrogen) was added as per the manufacturer’s protocol and read on a microplate reader (BioTek, Winooski, VT). Statistical analysis Results are presented as mean ± s.e.m. of at least three replicates. Data sets were analyzed statistically using the JMP Statistical Discovery™ software 6.03 by SAS (Statistical Analysis Software). Statistical significance between groups was determined using Student’s t-test or one-way analysis of variance (ANOVA). Differences between the two cell lines were tested for statistical significance using the chi-square test (χ2). Two-tailed significance tests were performed, with p < 0.05 considered significant. Statistical parameters for each experiment can be found within the corresponding figure legends.
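To make the relative-expression readout concrete, below is a minimal sketch of the comparative 2^−ΔΔCt quantification that the normalization against the 18S reference gene (see the RT-PCR and qPCR subsection above) suggests. This is not the authors' analysis pipeline, and the Ct values shown are hypothetical placeholders.

```python
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Comparative 2^-ddCt method.

    ct_target / ct_ref: mean Ct of KRAS and 18S in the sample of interest.
    ct_target_cal / ct_ref_cal: the same in the calibrator sample
    (e.g. a long-term culture used as reference).
    """
    d_ct_sample = ct_target - ct_ref          # normalize to reference gene
    d_ct_cal = ct_target_cal - ct_ref_cal     # same for the calibrator
    dd_ct = d_ct_sample - d_ct_cal
    return 2.0 ** (-dd_ct)                    # fold change vs. calibrator

# Hypothetical triplicate Ct values for one primary GBM sample, with U87MG
# as calibrator; real values would come from the StepOne Plus runs.
kras_gbm, s18_gbm = np.mean([24.1, 24.3, 24.0]), np.mean([11.2, 11.1, 11.3])
kras_u87, s18_u87 = np.mean([27.5, 27.4, 27.6]), np.mean([11.0, 11.2, 11.1])

fold = relative_expression(kras_gbm, s18_gbm, kras_u87, s18_u87)
print(f"KRAS fold change (primary GBM vs. U87MG): {fold:.1f}")
```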
Results Cisplatin-based chemotherapy resistance and MEK-inhibition effects in glioblastoma To assess the sensitivity of human glioblastoma to cisplatin, we treated two representative long-term cultures in vitro with different doses of cisplatin (CDDP) and assayed K-Ras4B protein abundance by semiquantitative Western blot. Human U87MG and U251MG cells were treated with 16.6 μM cisplatin for 2, 12 and 24 h. Furthermore, as activation of the MAPK signalling pathway plays an important role in the GBM response to chemotherapy, we used the potent and selective non-ATP-competitive MEK1 inhibitor PD98059 (used in preclinical studies). Here, we measured distinct sensitivities to cisplatin in the two prototypical glioblastoma models using different approaches (semiquantitative Western blot, MTT, TUNEL). Endogenous KRAS4B protein levels and ERK phosphorylation showed significant differences as measured by semiquantitative Western blot. Cisplatin upregulated total Ras (measured with a pan-Ras antibody) in both cell lines, and the K-Ras4B isoform (measured with an isoform-specific antibody against KRAS4B) accounted for the total Ras amount only in U251MG (Fig. 1 A and Figure_1SBIS). Moreover, MEK-inhibitor treatment strongly de-phosphorylated ERK and contextually enhanced (a detected two-fold increase) K-Ras4B protein expression in U87MG only. Inhibition of MEK by PD98059 was confirmed by de-phosphorylation of ERK1/2 in both cell lines (markedly in U251MG). Notably, PD98059 alone did not increase K-Ras4B expression in either culture (Fig. 1 A and Figure_1SBIS). Besides the signalling pathway in U87MG and U251MG cells, we investigated the effect on cell viability after 48 h of treatment using a standardized MTT assay. The cell viability assay (MTT assay) (Fig. 1 B) and cytofluorimetric analysis of the fragmented DNA of apoptotic cells (TUNEL assay) (Fig. 1 C) revealed remarkable differences in cisplatin sensitivity between the two cultures; in particular, U87MG cells were almost insensitive to CDDP. Cells were treated with different concentrations of CDDP (3.3 μM, 6.6 μM, 16.6 μM) for 72 h, and we measured massive apoptosis only in U251MG cells (90%) versus a negligible U87MG response (10%). To determine to what extent MEK inhibition could interfere with cisplatin-induced apoptosis, these cell lines were incubated with cisplatin (CDDP, 16.6 μM) and/or the MEK inhibitor PD98059 (40 μM) for the indicated times, and fluorescence-activated cell-sorting (FACS) analysis was conducted to determine the percentage of cells with a sub-G1 (apoptotic) DNA content. We observed significantly enhanced cytotoxicity (90%) in U251MG in response to cisplatin (CDDP) (72 h at 16.6 μM; Fig. 1 B), and this effect was rescued by MEK-inhibitor treatment (20%). Conversely, the 12% of TUNEL-positive U87MG cells observed in response to chemotherapy treatment was further increased, up to 30%, by the MEK inhibitor (see also Supplementary Fig. 1 , S 1 ). We next used flow cytometry and Western blotting to determine whether cisplatin releases GBM cells from G2/M arrest and modulates G2/M checkpoint regulators. Cells were fixed in ice-cold ethanol and stained with propidium iodide (PI)/RNase buffer, and DNA content was analyzed by flow cytometry following FL2H versus FL2W analysis for doublet elimination. At the different concentrations tested, CDDP caused a persistent accumulation of cells in G0/G1 phase without the appearance of cells in G2/M up to 72 h in both cultures.
A significant arrest in the G0/G1 cell cycle phase and a subsequent decline in both S and G2/M phases were observed in U251MG and, to a lesser extent, in U87MG cells following cisplatin treatment (72 h at 5 μM) (Fig. 2 A). Notably, the percentage of U87MG cells in G1 remained almost constant, except upon co-treatment with the MEK inhibitor. The S and G2 phases (and the level of cyclin D1 expression, panel B) decreased in U87MG cells (Fig. 2 A). Consistent with these results, upregulation of cyclin D1 expression (and cyclin A) was detected in U87MG cells treated with cisplatin, but not in cisplatin-treated U251MG cells. Interestingly, cyclin D1 expression at high confluence was higher than at lower density. The abnormal expression of cyclin D1 at high cell density was observed in both conditions, growing and starvation, and in cisplatin-treated cells over the time course (Supplementary Fig. 2 , Figure_S 2 ). The levels of the cell cycle inhibitor p27, but not cyclin D1, were dramatically increased in U251MG cells, suggesting that p53-mutated glioblastoma may be more sensitive to cisplatin-induced apoptosis. Interestingly, we observed an absence of cyclin D1 and p27 expression in U251MG (Fig. 2 B). Cisplatin-based chemotherapy resistance is defined by HVR K-RAS post-translational modification in human glioblastomas Gain-of-function experiments were performed by overexpressing plasmids coding for oncogenic KRAS carboxyl-terminal hypervariable region (HVR) mutants. We measured apoptosis (TUNEL, panel A) and cell viability (MTT, panel B) in response to increasing doses of cisplatin in cells over-expressing oncogenic KRAS G12V (Fig. 3 ). Over-expression of the KRAS G12V , KRAS G12VC185A and KRAS G12VC185AK177E mutants was obtained by transient transfection in both U87MG and U251MG cells and assayed at 72 h (see Methods and Figure S 3 ). Interestingly, overexpression of the constitutively active mutant KRAS G12V induced opposite responses to CDDP treatment. The percentage of apoptotic cells was assessed for both cell lines, either treated or untreated with CDDP. We measured increased proportions of TUNEL-positive cells in U87MG over-expressing KRAS G12V (23.2% vs 18.4%) compared with decreased values (60% vs 30%) in U251MG in response to CDDP treatment. Moreover, the mutant KRAS G12VC185A , in which farnesylation of residue 185 is prevented, did not alter the response to CDDP; on the other hand, the triple mutant KRAS G12VC185AK177E (in which the polybasic region of the K-RAS HVR is partially neutralized) exerted a protective role against cisplatin chemosensitivity in both cell lines (Fig. 3 A). Next, we measured the cell viability of cells transiently transfected with the KRAS G12V mutants using a standardized MTT assay (Fig. 3 B). We detected only slight differences between the KRAS G12V mutants in viability measured by mitochondrial activity. This effect could be mediated by the increased intracellular and mitochondrial reactive oxygen species and decreased mitochondrial membrane potential (ΔΨm) induced by cisplatin. KRAS expression in human glioblastomas Primary human glioblastomas show high mRNA expression of KRAS compared to long-term cultures (Supplementary Fig. 4 , Figure_S 4 ). KRAS expression was evaluated by real-time polymerase chain reaction (RT-PCR) in a panel of traditional xenograft cell lines and patient-derived xenograft models.
We analyzed a panel of glioblastoma samples, spanning from primary cultures (GBM, WHO grade 4; see Methods) to several human long-term cultures: U87MG, U87MG IDH1mut , U251MG, T98G, SW1783 and Ln229. Normal Human Astrocytes (NHA), Neural Stem Cells (SC30) and the human fetal glial cell line SVGp12 were used as reference samples for KRAS mRNA abundance. We found that KRAS transcripts were highly abundant in all primary samples (GBM#C, GBM#D, GBM#F and GBM#M) compared with the classical long-term cell lines. It is interesting to note that the fetal glial cell line SVGp12 shows a relatively high level of KRAS mRNA, in accordance with patterns of Ras expression in the mouse brain (cerebral cortex) during development [ 49 ]. The human long-term cultures (U87MG w.t. and IDH1 mutant, U251MG and Ln229) showed comparable abundance of the mRNA transcript (Supplementary Fig. 4 , Figure_S 4 ). Relatively high protein abundance of endogenous K-Ras4B in long-term cultures and primary cultures (GBM, WHO grade 4) was assessed by semiquantitative Western blot analysis with a panel of commercial antibodies specific for KRAS4B [ 50 ] (data not shown).
Discussion In the current study, we examined the involvement of the KRAS gene in platinum-based chemotherapy in human glioblastoma. Our results revealed high KRAS expression in primary human GBM tumors compared with several GBM cell lines. An in vitro examination of cell viability, cell cycle progression and apoptosis in two glioblastoma cell lines, U87MG and U251MG, was used to study cisplatin responsiveness. Firstly, we examined endogenous KRAS expression in response to cisplatin, studying the involvement of the effector RAF/MEK/ERK mitogen-activated protein kinase (MAPK) cascade. Then, including gain-of-function experiments with plasmids coding for oncogenic KRAS G12V carboxyl-terminal hypervariable region (HVR) mutants, we examined the responsiveness to oncogenic over-expression. The antitumor efficacy of cisplatin is unquestionable. Platinum-based chemotherapy remains popular for treating cancers, especially in patients with genetic or pathological profiles that respond poorly to targeted therapies. Although cisplatin is used for adjuvant chemotherapy against glioma ([ 51 ] and references therein), intrinsic and acquired resistance restricts cisplatin application. Preclinical in vitro data have reported CDDP half-maximal inhibitory concentrations (IC 50 ) in glioblastoma considerably lower (a hundred times) than those of temozolomide (TMZ) in cell lines commonly used for research on gliomas. In our experiments, U87MG exhibited strong resistance to cisplatin, which agrees with its performance in xenograft transplantation models [ 15 ]. The median in vitro IC 50 of cisplatin was 8 μM, which is consistent with previous in vitro tests in glioblastoma and other tumour cells [ 52 , 53 ]. Viability, apoptosis and cell cycle assays showed remarkable differences, especially in terms of the massive (90%) versus negligible (12%) apoptotic responses in U251MG and U87MG, respectively. These differences may be partially due to the cell lines' different genetic backgrounds, involving their mutational status [ 39 ]. The molecular signature of glioblastoma (proneural, classical and mesenchymal GBM), with distinctly different patterns of gene expression, is regrettably poorly represented by the cell line models that are still widely used. These two cultures differ mainly in their p53 status, which affects cell cycle progression during cisplatin administration. In particular, p53 has opposing effects in gliomas treated with methylating agents and, therefore, the p53 status should be considered when deciding which therapeutic drug to use [ 4 ]. Dysregulated signalling represents an important conserved oncogenic mechanism. Dysfunctional signalling in tumours also arises from the rewiring of signalling pathways, which in turn determines the response to treatment. KRAS signalling necessarily relies on MEK/ERK signalling, and MEK inhibitors, as single agents or in combinatorial settings, are at the leading edge of treatment for many cancers, including glioblastoma [ 19 ]. However, clinically approved MEK inhibitors (e.g. trametinib) have shown no apparent benefit of blocking MEK [ 35 ]. We therefore sought to investigate this further as a potential explanation for MEK-inhibitor resistance in glioblastoma. Using the MEK inhibitor PD98059, we observed increased KRAS protein expression and concomitant ERK dephosphorylation, exerting opposite effects on the percentage of apoptotic cells in the two lines and, in doing so, blocking progression at various stages of the cell cycle.
The MEK/ERK pathway is considered to enhance survival and confer resistance against radio- and chemotherapy. Blocking MEK signaling in GBM is clearly antiproliferative, and the absence of MEK activity did not cause cell death per se but sensitized cells to apoptosis induced by chemotherapy. Moreover, as reported by others [ 54 ], inhibition of the PI3K but not the MEK/ERK pathway sensitizes human glioma cells to alkylating drugs; these pathways were not investigated in our experiments. Since cyclin D1 is a major regulator of cell cycle progression, we sought to investigate cyclin modulation in our experiments. Cyclins A, E and D1 and p27 expression were assayed by immunoblot analysis over a 36-h time course of exposure. Our results do not confirm previous observations showing cyclin D1 expression in U251MG cultures [ 55 ]; instead, only U87MG (p53 wild type) shows cyclin D1 induction as a marker of diminished cell cycle arrest. Moreover, it is well known that p53 affects both the duration of G2/M arrest and the fate of alkylating-agent-treated human glioblastoma cells [ 16 ]. Here, the MEK inhibitor was capable of mitigating cisplatin-induced G2/M arrest in U87MG (p53 wild-type), as evidenced by the reduction in cell accumulation in the G2/M phase of the cell cycle following cisplatin treatment. The importance of mutations in RAS oncogenes in tumorigenesis, cancer progression and resistance to treatment has been demonstrated in numerous model systems in vitro and in vivo. The majority of KRAS mutations are localized in codon 12 (changing glycine to valine, aspartic acid or arginine), resulting in constitutive and aberrant activation of the downstream KRAS signaling cascade. It is well known that constitutively activated KRAS G12D is not sufficient for astrocytoma initiation but rather is required for progression to high-grade tumors [ 36 ]. Here, we reported gain-of-function experiments obtained by over-expressing oncogenic KRAS G12V in glioblastoma, studying cisplatin resistance in relation to single-point-mutation HVR K-RAS post-translational modifications. Transient over-expression of the mutants KRAS G12V , KRAS G12VC185A and KRAS G12VC185AK177E induced different responses to cisplatin treatment depending on the tumour context. While oncogenic KRAS G12V was able to rescue cisplatin-induced apoptosis, the mutant KRAS G12VC185A , in which farnesylation of residue 185 is prevented, only partially rescued this response. Of note, the triple mutant KRAS G12VC185AK177E (in which the polybasic region of the K-RAS HVR is partially neutralized) mimicked the oncogenic KRAS G12V response, as did the Harvey mutant HRAS L61S186 . From these results, we conclude that, at least in U251MG glioblastoma cultures, the overexpression of oncogenic KRAS mutants modulates chemoresistance in vitro, not necessarily coupled with effects on proliferation/viability. It is worthy of note that oncogenic KRAS G12D or KRAS G12C is involved in the generation of intracellular reactive oxygen species [ 56 ], and that cisplatin-induced changes in oxidative metabolism accompany the cisplatin-induced inhibition of cancer cell growth in vitro and in vivo [ 57 ]. Our current data define a novel role of KRAS in GBM and elucidate a molecular mechanism underlying KRAS-mediated GBM chemoresistance. In particular, chemotherapy with cisplatin induces viability and apoptotic changes in glioblastoma cells in vitro, and KRAS proteins can reprogram cell state when ectopically expressed.
This provides further insights into the cisplatin responsiveness of glioblastoma, which could ultimately lead to clinical opportunities to manipulate KRAS pathways/activity to maximize patient benefit.
Background KRAS is the undisputed champion of oncogenes, and despite its prominent role in oncogenesis as a mutated gene, KRAS mutation appears infrequent in gliomas. Nevertheless, gliomas are considered KRAS-driven cancers due to the essential role of KRAS in mouse malignant gliomagenesis. Glioblastoma is the most lethal primary brain tumor and is often associated with disturbed RAS signaling. For newly diagnosed GBM, the current standard therapy is alkylating-agent chemotherapy combined with radiotherapy. Cisplatin is one of the most effective anticancer drugs and is used as a first-line treatment for a wide spectrum of solid tumors (including medulloblastoma and neuroblastoma), and many studies are currently focused on new delivery modalities for effective cisplatin treatment in glioblastoma. Its mechanism of action is mainly based on DNA damage, inducing the formation of DNA adducts and triggering a series of signal-transduction pathways that lead to cell-cycle arrest, DNA repair and apoptosis. Methods Long-term cultures of human glioblastoma, U87MG and U251MG, were treated with cis-diamminedichloroplatinum (cisplatin, CDDP) and/or the MEK inhibitor PD98059. Cytotoxic responses were assessed by cell viability (MTT), protein expression (Western blot), cell cycle (PI staining) and apoptosis (TUNEL) assays. Further, gain-of-function experiments were performed with cells over-expressing plasmids encoding hypervariable region (HVR)-mutated KRAS G12V . Results Here, we studied the platinum-based chemosensitivity of long-term cultures of human glioblastoma from the perspective of KRAS expression, using CDDP and a MEK inhibitor. Endogenous high KRAS expression was assessed at the transcriptional (qPCR) and translational (WB) levels in a panel of primary and long-term glioblastoma cultures. Firstly, we measured immediate cellular adjustment through direct regulation of the protein concentration of K-Ras4B in response to cisplatin treatment. We found increased endogenous protein abundance and involvement of the effector RAF/MEK/ERK mitogen-activated protein kinase (MAPK) cascade. Moreover, as many MEK inhibitors are currently being clinically evaluated for the treatment of high-grade glioma, we concomitantly tested the effect of the potent and selective non-ATP-competitive MEK1/2 inhibitor PD98059 on cisplatin-induced chemosensitivity in these cells. Cell-cycle phase distribution was examined using flow cytometry, showing a significant cell-cycle arrest in both cultures, at different percentages, which was modulated by MEK inhibition. Cisplatin-induced cytotoxicity increased the sub-G1 percentage and modulated the G2/M checkpoint regulators cyclins D1 and A. Moreover, ectopic expression of a constitutively active KRAS G12V rescued CDDP-induced apoptosis, and different HVR point mutations (particularly Ala 185) reverted this phenotype. Conclusion These findings warrant further studies of the clinical applications of MEK1/2 inhibitors and of KRAS as an ‘actionable target’ of cisplatin-based chemotherapy for glioblastoma. Supplementary Information The online version contains supplementary material available at 10.1186/s12885-023-11758-6.
Supplementary Information
Abbreviations KRAS: Kirsten Ras human gene; GBM: Glioblastoma; CDDP: Cisplatin; MAPK: Mitogen-activated protein kinase; RT-PCR: Reverse-transcription polymerase chain reaction; WB: Western blot; ERK: Extracellular signal-regulated kinase; MEK: MAP-ERK kinase; HVR: Hypervariable region (carboxyl-terminal hypervariable region); TMZ: Temozolomide; MGMT: O6-methylguanine-DNA-methyltransferase; PI: Propidium iodide; MTT: 3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide; TUNEL: Terminal deoxynucleotidyl transferase dUTP nick end labeling. Acknowledgements We are grateful to Dr. Maria Giulia Rizzo at the Regina Elena National Cancer Institute for her kind gift of the glioblastoma long-term cultures IDH1 mut U87, SW1783, and Ln229. Authors’ contributions S.M. and A.P. wrote the main manuscript text; C.Z., A.R. and S.L. performed the experiments and prepared the figures; A.P. provided the statistical analysis of the data. All authors reviewed the manuscript. Funding The Grant of Excellence Departments, MUR (ARTICOLO 1, COMMI 314 – 337 LEGGE 232/2016) to the Department of Science. Availability of data and materials All data generated or analyzed during this study are included in this published article [and its supplementary information files]. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Cancer. 2024 Jan 15; 24:77
oa_package/3e/30/PMC10789061.tar.gz
PMC10789062
38221605
Background As people age, the risk of having two or more chronic somatic diseases and metabolic conditions increases rapidly [ 1 ]. In particular, frailty is a major challenge associated with the rapidly growing older population [ 2 ]. In this population, falls are common events with serious consequences for those affected and for society in general. In addition to physical injuries, such as fractures, falls may lead to psychological consequences, such as fear of falling, social withdrawal, mood disorders and reduced quality of life (QoL) [ 3 ]. Due to the increasing proportion of older people worldwide, the ability to function within society at an increasing age is gaining importance. Therefore, the World Health Organization (WHO) has called for investigating disabilities (e.g. impairments, activity limitations and participation restrictions) in this vulnerable population [ 4 ]. Regular physical activity is recommended to improve the prognosis of chronic diseases in older people. The WHO recommends at least 2.5 h of moderate physical activity per week [ 5 ]. In fact, 73% of German women and 67% of German men aged ≥ 65 years do not meet this recommendation [ 6 ]. The benefits of regular exercise in reducing physical dependence in older people have been known for a long time [ 7 – 9 ]. Physical activity has long been established as a cornerstone of health and well-being, with numerous studies underscoring its positive effects on various health outcomes. While the benefits of structured exercise interventions are well documented, the focus of this study diverges to explore a unique avenue: the potential impact of volunteer-supported programs on the health and well-being of older adults. Acknowledging the health benefits of exercise, our study seeks to shift attention to an alternative approach: an intervention focussed on the involvement of volunteers to enhance the health and overall well-being of older adults. Importantly, apart from physical effects, we expect our intervention to increase social participation and improve intergenerational relationships. The primary objective of this study is to assess the effectiveness of regular volunteer-supported outdoor walking compared with a control condition, the provision of unrelated health information. Specifically, we will examine its impact on physical function, cognitive function, frailty, fear of falling, and quality of life (QoL). The intervention was aimed at people ≥ 65 years old who were not able to move independently and sufficiently due to physical limitations. In the first study period, the participating study centres provided the logistics for volunteer support. The second study period served to explore the possibility of non-academic initiatives implementing the idea of volunteer-assisted walks. Our study was previously conceptualised and piloted exclusively with nursing home residents [ 10 ]. Partly due to the very good uptake, we have now extended the setting to include predominantly community-dwelling persons.
Methods Study design This randomised, controlled interventional superiority trial was undertaken from October 2017 to December 2021. The protocol was registered on 31 August 2018 at the German Clinical Trials Register ( www.germanctr.de ), Deutsches Register Klinischer Studien, under the number DRKS00015188. The study design has been published previously [ 11 ]. The study was carried out at the primary care departments of the University of Witten-Herdecke (North Rhine-Westphalia) and the University of Marburg (Hesse), Germany. It was approved by the Ethical Review Committees at both sites (Marburg reference number 208/17; Witten-Herdecke reference number 71/2018). Written informed consent was obtained from all participants and volunteers. Setting Research personnel at both study sites recruited a cohort of participants aged ≥ 65 years and followed them up for 12 months. In the Witten region, we collaborated with nursing homes; the study team identified and approached potential participants. In the Marburg region, the study team recruited participants from the community setting. We involved primary care general practitioners (GPs) and home care nursing services in the recruiting process. Moreover, the study was covered by local newspapers to encourage potential participants, and information leaflets were distributed in shops and pharmacies. Due to difficulties recruiting participants in the community, we also approached nursing homes in the Marburg region. A total of 224 participants were included. Volunteers We used different channels to recruit volunteers. On the one hand, we approached cooperating partner organisations (e.g. volunteer agencies); on the other hand, we placed advertisements in local newspapers, internet forums and on bulletin boards (e.g. of universities and schools). Requirements for volunteers were a minimum age of 16 years, which is the minimum age for helpers in the federal volunteering service (Bundesfreiwilligendienst), and the possession of a mobile phone. Moreover, volunteers were required to speak German sufficiently well, to be fit enough to assist participants during the walks and to be available for at least 6 months. The number of participants assigned to one volunteer was based on each volunteer’s time and the physical condition of the participants. The study staff trained and prepared each volunteer for a total of 6 h. The training included instructions on how to support older people (e.g. using aids such as walkers) and how to document the walks. We then assigned the volunteers to the participants, considering preferences during assignment (e.g. support by a female volunteer, or participants close to home). We instructed the volunteers to arrange the appointments with the participants themselves. Participants Participants were eligible if they were ≥ 65 years old and lacked the confidence to walk on their own, which we assessed informally. They had to have reduced physical function, defined as a Short Physical Performance Battery (SPPB) score of < 9 [ 12 ]. For the pre-selection, we informed the nursing staff of the nursing homes as well as the participating GPs and nursing services about the inclusion criteria.
We excluded participants if they: did not give informed consent; had cognitive impairment (a Mini-Mental State Examination [MMSE] score of < 18 at baseline) [ 13 ]; had severely reduced physical function such that volunteer-supported walks were not safe (a baseline SPPB score of ≤ 2 in nursing homes and ≤ 3 in the community setting); had excellent physical function such that benefit from the intervention was unlikely (an SPPB score of ≥ 10); were permanently bedridden or could only be mobilised in a wheelchair; already had regular physical activity levels estimated to be at least equivalent to the intervention; had a life expectancy of < 6 months as estimated by their personal physicians and/or nursing teams; had another foreseeable inability to take part in the intervention for 6 months; had a known alcohol or drug addiction or a psychotic episode during the last 12 months; or if another person of the same household already participated in the study. Research staff visited potential participants who had either expressed an interest in the study or been identified by an institution and then screened them for eligibility. Randomisation After completion of the baseline visit, including the checking of eligibility, we randomly assigned the participants to the control or experimental group according to the randomisation list, which was generated before recruitment by the Clinical Trials Centre at the University of Marburg (Fig. 1 ). The randomisation was stratified by the two study centres, with a blocking procedure that created alternating blocks of 4 and 6 participants. Study visits Research assistants performed the examinations and collected data at baseline (T0) and after 6 months (T1) and 12 months (T2) of the intervention. All visits took place at the participant’s private home or nursing home. We collected sociodemographic data and characteristics such as physical function and frailty at baseline. For details, see Table 1 and Supplementary Material S 1 . Intervention After the baseline assessments (T0) and randomisation, participants in the intervention group received the physical activity intervention for 6 months. They were visited by an assigned volunteer up to three times a week to go for a walk outside. The initial duration and speed of the walks were determined according to the participant’s physical ability. The aim was to gradually increase the duration of each walk up to 50 min to meet the WHO recommendation of 150 min per week [ 5 ]. In case of bad weather, the activity could take place indoors under the supervision of the volunteer; it then consisted of exercises for balance and strength based on a programme of the federal centre for health education [ 14 ]. This brochure provides simple, illustrated instructions for effective and safe indoor training (see Supplementary Material S 4 ). The study intervention is described in more detail elsewhere [ 11 ]. Walking pairs of participants and volunteers received an activity diary to record the date, time, duration and type of each exercise episode (outdoors or indoors). Events relevant to the safety of the intervention, such as falls or injuries, were also documented in the diary by the walking pairs themselves. After each walk, the participant recorded their subjectively experienced physical strain on a visual analogue scale. We invited the participants in the control group to two lectures given by study staff. The lectures covered topics related to healthy ageing, such as diet or the interpretation of blood tests.
Follow-up and extension study After 6 months, the post-intervention examination took place for both study arms (T1). We followed the study participants in both groups for an additional 6 months until the final examination at 12 months (T2). During this second study period, the academic study centres did not organise or coordinate walks. However, we expected community services outside the coordinating academic departments to continue to provide support for regular walking as initiated. The evaluation of the extended period at T2 had two main objectives: first, to prolong the follow-up and to assess the long-term effects of regular walking (up to 12 months); second, to evaluate the potential sustainability and dissemination of the study intervention after cessation of support from the academic study centres. Outcomes The primary outcome of the study was physical function as measured by the SPPB. This assessment includes several observed activity tests measuring balance, gait speed and the ability to get up from a chair. SPPB scores are associated with disability in mobility and activities of daily living (ADL) [ 15 – 17 ], future hospitalisation [ 18 ], health improvement [ 19 ] and mortality [ 19 – 21 ]. The SPPB has good reliability and validity [ 16 ]. The secondary outcomes were quality of life (the EQ-5D-5L score) [ 22 ], fear of falling (the Falls Efficacy Scale [FES-I]) [ 23 ], physical activity (activity diary) and cognitive executive function (the Clock Drawing Test [CDT]) [ 24 ]. We defined falls requiring medical attention, any hospitalisation and death as adverse events. We obtained details on these events from the participants’ primary care physicians and/or from hospital discharge reports. See Supplementary Material S 1 for an overview of the measurements at T0, T1 and T2. Some of these findings will be published separately. Blinding Due to the type of intervention, neither participants nor volunteers could be blinded regarding the intervention. Because study staff communicated with participants at T0, T1 and T2 for at least 30 min, shielding them from information regarding the study arm was not a realistic option. However, randomisation took place after the baseline examination and was performed by an independent unit, thus ensuring allocation concealment. To keep data collection at follow-up visits as unbiased and consistent as possible, we developed a standardised protocol. The study statistician (MHG) received datasets without labels regarding group allocation and was thus blinded to the allocation of the participants. Sample size For our sample size calculation, we chose physical function as the primary outcome, that is, the SPPB score at T1. A change of 1 point has been shown to be of clinical relevance (e.g. predictive of future hospitalisation, health improvement and mortality) [ 25 ]. The standard deviation in comparable samples has ranged from 2.6 to 2.8 points [ 19 , 24 , 25 ]. We chose a conservative approach and assumed the higher standard deviation (2.8) for our calculation. In an analysis of covariance (including the baseline values of the primary endpoint), with an R² of 0.5 for the covariate and a power of 95% (1 − β), a sample size of 206 would be required to detect a difference of 1 point between the means of the two study arms.
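To make the sample-size reasoning above easy to retrace, here is a hedged reconstruction in Python. It is not the original calculation: it approximates the ANCOVA by shrinking the outcome standard deviation by the square root of 1 − R², which is a standard simplification.

from math import sqrt
from statsmodels.stats.power import TTestIndPower

sd, r2 = 2.8, 0.5
residual_sd = sd * sqrt(1 - r2)            # ≈ 1.98 SPPB points after covariate adjustment
effect_size = 1.0 / residual_sd            # 1-point difference, Cohen's d ≈ 0.51

n_per_arm = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.95, alternative="two-sided")
print(round(n_per_arm) * 2)                # ≈ 206 participants in total

With d ≈ 0.51 this yields roughly 103 participants per arm, i.e. about 206 in total, in line with the figure given above.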
In the pilot study we conducted [ 10 ], which we initiated solely in nursing homes, we found a 28% loss of participants during the 6-month intervention period, mainly due to death and hospitalisation. Because we did not know what loss to expect in the community setting, we chose a conservative approach and assumed a loss of 40% of participants during the first intervention period. Hence, a total sample of 345 people would be required to obtain a sufficient power of 95% for our primary endpoint. The Marburg study centre aimed to enrol 230 people from the community, and the Witten study centre aimed to enrol 115 people from nursing homes. Statistical analysis We used SPSS Statistics Version 27 (IBM Corp., Armonk, NY, USA) for all analyses. Metric outcomes are reported as the mean and standard deviation if they were normally distributed, or as the median and interquartile range if they were not. For the primary efficacy analysis of the intention-to-treat (ITT) population, we hypothesised a higher SPPB score at T1 in the intervention group compared with the control group. To evaluate the difference in the primary outcome (the SPPB score) between the treatment groups, we applied linear regression with a robust estimator of the covariance matrix in the framework of generalised linear models, with treatment and the baseline SPPB score as predictors [ 26 ]. The result is thus adjusted for the baseline SPPB score. We evaluated the secondary outcomes (the SPPB score at T2; QoL [the EQ-5D-5L score], cognitive function [the CDT score], fear of falling [the FES-I score] and physical activity, each at T1 and T2) with Student’s t-tests or the Mann–Whitney U-test. We determined whether the data met the assumptions for the parametric models, namely a normal distribution, by inspecting Q–Q plots. We distinguished two reasons for missing values of outcome variables. Nursing home or hospital admission as well as death can be regarded as ‘informative’ because they are potentially related to the intervention; in other words, the study intervention is intended to reduce hospital admissions and deaths. In our view, however, moving out of the area should be regarded as ‘non-informative’ because it is not related to the intervention. We ascertained the reason for each loss to follow-up. For the ITT analysis, we replaced missing values according to the reason they were missing. For ‘informative’ missing values, we substituted the worst possible value of the respective variable. For ‘non-informative’ missing values in the primary outcome, we used the multiple imputation procedure of SPSS Statistics Version 27, which produced 10 complete datasets. The imputation was based on the non-missing baseline variables sex, age, frailty, the FES-I score and the SPPB score. We used the ‘METHOD’ keyword ‘AUTO’, the default, which lets the software select the imputation method. As a sensitivity analysis, we also analysed the primary outcome in the per-protocol (PP) dataset. Here we included participants who attended the T1 visit and who, if in the intervention arm, had completed at least 25% of their scheduled walks (equal to 20 walks). We used the χ² test to compare deaths between the groups and Poisson regression to compare falls between the groups. Based on these imputations and replacements, participants lost to follow-up could still contribute to our primary outcome analysis.
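The two-step strategy for missing outcomes can be illustrated with a short Python sketch. This is not the SPSS procedure used in the trial: the file and column names are hypothetical (with sex coded numerically), and the pooling step is deliberately simplified, since SPSS combines the 10 imputed datasets with Rubin's rules, whereas this sketch merely averages for illustration.

import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

df = pd.read_csv("power_trial.csv")        # assumed trial dataset, one row per participant

# Step 1: 'informative' losses (death, nursing home or hospital admission)
# receive the worst possible SPPB score, which is 0.
informative = df["dropout_reason"].isin(["death", "admission"])
df.loc[informative & df["sppb_t1"].isna(), "sppb_t1"] = 0

# Step 2: 'non-informative' losses (e.g. moving away) are multiply imputed
# from the baseline variables named in the text.
cols = ["sex", "age", "frailty", "fes_i_t0", "sppb_t0", "sppb_t1"]
imputed = [IterativeImputer(sample_posterior=True, random_state=s)
               .fit_transform(df[cols]) for s in range(10)]   # 10 complete datasets
df["sppb_t1_imp"] = np.mean([m[:, -1] for m in imputed], axis=0)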
The remaining outcome evaluations were dependent on compliance with study procedures. Thus, the numbers differ according to the availability of data (see Tables 2 and 3 ).
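For readers who want to reproduce the primary efficacy model in open-source software, the following sketch fits the same specification, treatment plus baseline SPPB with a robust (sandwich) covariance estimator, as an ordinary least squares model; this coincides with the Gaussian generalised linear model described under Statistical analysis above. Column names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("power_trial.csv")
model = smf.ols("sppb_t1 ~ treatment + sppb_t0", data=df)
result = model.fit(cov_type="HC3")         # heteroscedasticity-robust standard errors
print(result.summary())                    # the treatment coefficient is the
                                           # baseline-adjusted group difference at T1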
Results Participants The participant flow is summarised in Fig. 1 . The study sample comprised 224 participants. Because of difficulties recruiting participants in the community (planned sample size for Marburg n = 230, achieved n = 106), we failed to achieve the overall planned sample size of 345 participants. We recruited 118 participants in Witten, all of whom were nursing home residents. Most participants in the Marburg region lived in the community ( n = 76 [72%]); the remaining 30 (28%) lived in nursing homes. Overall, 79% of the sample was female. A total of 196 (87.5%) participants had an officially acknowledged need of nursing care (in German, Pflegegrad). Of these participants, 65 (31.2%) had at least level 3 (of 5 [= worst]). We randomised 110 participants into the control group and 114 into the intervention group. Table 1 shows their characteristics at baseline. Compliance with the trial protocol The overall number of walks per participant in the intervention group ranged from 0 to 101, with a mean ± standard deviation (SD) of 17.7 ± 19. Assuming walks three times a week over a period of 6 months, 78 walks would have been possible. Only the 40 participants in the intervention group with at least 20 walks were included in the PP population. Community participants completed at least 20 walks more often (37%) than nursing home participants (20.8%). The number of indoor training sessions (in case of bad weather) per participant in the intervention group ranged from 0 to 17, with a mean ± SD of 3.4 ± 2.5. Efficacy of the intervention The effects of the intervention on the primary and secondary outcomes evaluated in the ITT population are presented in Table 2 . There were no significant differences in our primary outcome, the SPPB score, or in frailty, QoL, cognitive function or fear of falling between the study arms at T1 or T2. Given the interference of the COVID-19 pandemic with the study intervention and visits, we conducted a PP analysis for our primary outcome at T1 ( n = 40). In the PP population, the SPPB scores of participants who actively took part in the intervention were higher than those of the controls (mean ± SD: 4.82 ± 2.46 vs. 3.87 ± 2.56, p = 0.01). As an additional exploratory analysis, we applied a regression model to the intervention group only, with the SPPB score at T1 as the dependent variable. As predictors we chose the baseline SPPB score and the number of walks completed. The latter had a significant influence on the outcome. We repeated the same analysis with time spent walking as the independent variable (for details, see Supplementary Material S 3 ). Safety During the study, there were no significant differences in falls or deaths between the groups at T1 or T2 (Table 3 ). None of the recorded falls or hospitalisations were associated with volunteer-assisted walks. The number of days spent in hospital between baseline and T1 was 236 in the intervention arm and 267 in the control arm (analysis of variance [ANOVA] p = 0.771), and between T1 and T2 it was 109 in the intervention arm and 259 in the control arm (ANOVA p = 0.151). We obtained similar results for the number of hospitalisations (see Supplementary Material S 2 ). Post hoc power analysis Because of the impact of the COVID-19 pandemic on the study, there are large discrepancies between the PP and ITT populations. Would our study have been more successful if the participants had adhered to the study protocol?
Assuming an effect size as observed in the PP analysis and a sample size of 202, the conditional power calculation suggests 77% power to detect an R² of 0.019 attributed to one independent variable using an F-test with a significance (alpha) level of 0.05. The tested variable is adjusted for an additional covariate, which by itself accounts for an R² of 0.454.
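This statement can be checked with the standard F-test for an R² increment in multiple regression. The sketch below is a back-of-envelope reconstruction (it assumes the usual noncentral-F formulation with Cohen's f², not the exact software the authors used) and reproduces a power of roughly 0.77.

from scipy.stats import f as f_dist, ncf

n, r2_change, r2_full = 202, 0.019, 0.454 + 0.019
f2 = r2_change / (1 - r2_full)             # Cohen's f² for the added predictor
ncp = f2 * n                               # noncentrality parameter
df1, df2 = 1, n - 3                        # 1 tested variable + 1 covariate + intercept

f_crit = f_dist.ppf(0.95, df1, df2)        # critical value at alpha = 0.05
power = 1 - ncf.cdf(f_crit, df1, df2, ncp)
print(round(power, 2))                     # ≈ 0.77, matching the text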
Discussion We could not show that regular, volunteer-assisted walks for older people improve physical or cognitive function, frailty, fear of falling or QoL. The intervention, aimed at individuals ≥ 65 years with reduced physical function who lacked confidence and/or external support, appears to be safe regarding falls. Although the ITT comparisons were negative, exploratory analyses suggested a positive effect of walking on health outcomes. Difficult recruitment While recruitment in nursing homes proceeded as planned, finding participants in the community proved to be difficult. Our eligibility criteria apparently applied to only a small section of the population aged ≥ 65 years in the community. We wanted to reach individuals restricted in their physical function who lacked opportunities and support for regular physical activity. We excluded bedridden people or those in whom mobilisation seemed unlikely to succeed. Potential participants were pleased to meet volunteers; the walking part, however, deterred many [ 27 ]. GP practices and home care nursing services were often too busy to approach patients systematically regarding study participation. Therefore, we decided to use additional channels to reach our target population, such as local newspapers, flyers in shops, etc. Although these measures were successful to a certain degree, we did not reach our recruitment target within the planned time period. Cooperation and recruitment were far more straightforward in nursing homes. Management was usually happy to offer residents additional activities and hence showed considerable commitment towards the study. Negative result The contact restrictions due to the COVID-19 pandemic had an additional impact on achieving the study objectives. Walks were cancelled, which led to some dropouts, but also to relevant delays in the study visits to evaluate outcomes. The second study period suffered the most from the COVID-19 pandemic. Its objective was to explore whether actors in the community would continue to offer exercise support to the older population; restrictions related to the pandemic made this largely impossible. Promoting well-being in older people through volunteer support Various consortia address healthy ageing, such as the WHO Clinical Consortium on Healthy Ageing [ 28 ]. The evidence supporting the positive impact of a physically active lifestyle on the health of older adults is substantial; however, only a limited percentage of the older population adheres to the recommended levels of physical activity. Withall and colleagues undertook a qualitative analysis of the best approaches and synthesised evidence from end-user representatives and stakeholders to refine one of these approaches, an intervention to promote active ageing through peer volunteering [ 29 ]. They report that participants engaged primarily for social reasons and faced barriers such as lack of companionship, low confidence, weather concerns and established group dynamics, while volunteers emphasised the need for meaningful engagement and social interaction. The study supports peer volunteering for active ageing and emphasises effective recruitment and overcoming barriers like lack of motivation and security concerns. That work built on the findings of Stathi and colleagues [ 30 ]: ACE (Active, Connected, Engaged) was a feasible and well-accepted intervention using peer-volunteering support to promote active ageing in socially disengaged older adults.
The ACE study, involving 54 participants, demonstrated that the intervention increased out-of-house activities, improved physical function, and enhanced well-being and vitality. Participants in the intervention reported increased confidence, knowledge of local initiatives, and perceived social support. The findings emphasise ACE’s potential to help socially disengaged older individuals get outdoors, boost their confidence, and engage more with their community. Interest in volunteer support for the walks in our study was high, and we received strong support from the public, stakeholders, and other interest groups. The sub-study conducted by Weissbach et al. demonstrated this in a mixed-methods approach [ 27 ]. In semi-structured, guide-based interviews (nursing home residents), two focus group interviews (volunteers), and a cross-sectional questionnaire survey (volunteers), both participants and volunteer companions reported not only physical improvements but also highlighted the positive impact of the social interaction associated with the walks. The findings indicate that volunteer support for mobility-impaired nursing home residents has a positive impact on the quality of life of both groups. The simple intervention received predominantly positive evaluations, even though no new insights into physical activity were gained. Future programmes should be tailored to the individual needs of older adults to enhance their quality of life and mobility; a suitable environment in the nursing home and training for volunteers are crucial for the success of such initiatives. Strengths and limitations Despite our failure to achieve our recruitment goals, our study had some strengths. We were able to recruit, motivate and instruct a sufficient number of volunteers to support the study participants in their walking, and most participants and volunteers enjoyed the experience. Our main outcome was a battery of physical function tests. A study with sufficient power to investigate outcomes such as QoL or frailty would be desirable. Because of the nature of the intervention, the participants could not be blinded, and due to logistical constraints, we also could not blind the study personnel. Given the clear preferences some potential participants had, allocation to the ‘wrong’ study arm sometimes made motivation to contribute to the project difficult to maintain. We also found that acute health problems, bad weather conditions, and volunteers moving away or lacking time proved to be obstacles. When establishing a permanent service like the one evaluated in this study, planners should keep in mind that it requires high flexibility. Community organisations such as public health departments or municipal volunteer agencies were highly interested in supporting the idea underlying the study. However, the COVID-19 pandemic prevented us from exploring this promising aspect further.
Conclusions To our knowledge, this is the first randomised controlled trial evaluating a low-threshold intervention such as volunteer-assisted outdoor walking to improve physical function in older people. Against the background of a smaller-than-planned sample size resulting in low power, and the interference of the COVID-19 pandemic, we suggest that the idea of community-based low-threshold interventions of this kind should be explored in future studies.
Background Regular physical activity has multiple health benefits, especially in older people. Therefore, the World Health Organization recommends at least 2.5 h of moderate physical activity per week. The aim of the POWER Study was to investigate whether volunteer-assisted walking improves the physical performance and health of older people. Methods We approached people aged 65 years and older with restricted mobility due to physical limitations and asked them to participate in this multicentre randomised controlled trial. The recruitment took place in nursing homes and the community setting. Participants randomly assigned to the intervention group were accompanied by volunteer companions for a 30–50 min walk up to three times a week for 6 months. Participants in the control group received two lectures that included health-related topics. The primary endpoint was physical function as measured with the Short Physical Performance Battery (SPPB) at baseline and 6 and 12 months. The secondary and safety endpoints were quality of life (EQ-5D-5L), fear of falling (Falls Efficacy Scale), cognitive executive function (the Clock Drawing Test), falls, hospitalisations and death. Results The sample comprised 224 participants (79% female). We failed to show superiority of the intervention with regard to physical function (SPPB) or other health outcomes in the intention-to-treat analyses. However, additional exploratory analyses suggest benefits in those who undertook regular walks. The intervention appears to be safe regarding falls. Conclusions Regular physical activity is essential to preserve function and to improve health and quality of life. Against the background of a smaller-than-planned sample size, resulting in low power, and the interference of the COVID-19 pandemic, we suggest that community-based low-threshold interventions deserve further exploration. Trial registration The trial was registered with the German Clinical Trials Register ( www.germanctr.de ), with number DRKS00015188 on 31/08/2018. Supplementary Information The online version contains supplementary material available at 10.1186/s12877-024-04672-4. Keywords
Supplementary Information
Abbreviations ADL: Activities of daily living; CDT: Clock Drawing Test; COVID-19: Coronavirus disease 2019; DRKS: Deutsches Register Klinischer Studien (German Clinical Trials Register); FES-I: Falls Efficacy Scale; GP: General practitioner; ITT: Intention-to-treat; MMSE: Mini-Mental State Examination; PP: Per protocol; QoL: Quality of life; SPPB: Short Physical Performance Battery; WHO: World Health Organization Acknowledgements We would like to thank all those who have supported and continue to support the POWER study with their voluntary engagement. In particular, we would like to thank the representatives of the volunteer agencies and the health department for their support. Authors’ contributions NDB and AS devised the project and secured project funding. NG, SW and UT collected, analysed and interpreted the data. Statistical analysis was performed by MHG. NG, NDB, UT, AS and SW wrote the first draft of the manuscript. All authors have read and approved the final manuscript. Authors’ information All authors had access to the data and had a role in writing the manuscript. Funding Open Access funding enabled and organized by Projekt DEAL. This study was funded by the Federal Ministry of Education and Research (BMBF) (grant numbers 01GL1708A and 01GL1708B). This funding source had no role in the design of this study and will not have any role during its execution, analyses, interpretation of the data, or decision to submit results. Availability of data and materials The results of the data analysis are available from the corresponding author upon reasonable request. Declarations Ethics approval and consent to participate The study was conducted according to the Declaration of Helsinki. It was approved by the Ethical Review Committee of the Medical Department of the Philipps-University of Marburg (Ref: 208/17) and by the Ethics Committee of Witten/Herdecke University (Ref: 71/2018). Written informed consent was obtained from all participants and volunteers. Consent for publication Not applicable. Competing interests Ellen Freiberger is an Associate Editor for BMC Geriatrics. All other authors do not have competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Geriatr. 2024 Jan 15; 24:60
oa_package/99/72/PMC10789062.tar.gz
PMC10789063
38221630
Background The development of the ovaries is closely related to a normal immune environment. An abnormal immune environment can lead to ovarian dysfunction, such as premature ovarian insufficiency (POI) or polycystic ovarian syndrome (PCOS). POI refers to the continuous decline and dysfunction of ovarian function in women before the age of 40 [ 1 , 2 ]. A recent report showed that the global overall prevalence of POI among women is 3.5% [ 3 ]. POI is a highly heterogeneous disease, and in most cases its molecular pathophysiology remains unexplained. Currently, POI is believed to be caused by genetic or other unknown factors, either single-factor or syndromic [ 4 ]. Granulosa cells (GCs) surround the oocytes and play a key role in follicle development and fate determination. In addition to providing essential nutrients and growth factors for follicular development, GCs also secrete the steroid hormones required for folliculogenesis [ 5 ]. Dysfunction of GCs can lead to follicle arrest and apoptosis, ultimately resulting in POI [ 6 , 7 ]. The cellular functions of GCs are strongly positively correlated with the maintenance of normal ovarian function [ 8 ]. Resveratrol, belonging to the stilbene subclass, is derived from phenylalanine and contains an aromatic ring with an active hydroxyl group [ 9 ]. It is a natural compound with free radical scavenging activity. In the female reproductive system, resveratrol can enhance the quality of oocytes through its antioxidant and anti-apoptotic functions [ 10 ], can maintain the normal physiological function of ovarian granulosa cells [ 11 ], and also exerts an antagonistic effect on POI by modulating the immune response [ 12 ]. However, the specific cellular functions of ovarian granulosa cells affected by resveratrol remain unclear. An increasing amount of evidence indicates that autophagy is closely related to the development of GCs. Autophagy is a major biological process involved in the degradation and recycling of cellular components and damaged organelles [ 13 – 15 ]. Autophagy also participates in regulating the apoptosis of GCs to accelerate follicular atresia, while insufficient granulosa-lutein cell autophagy leads to decreased progesterone synthesis and preterm birth in mice [ 15 , 17 ]. GCs from patients with POI show significant inhibition of autophagy: decreased expression of autophagy-related genes, a decreased ratio of LC3-II to LC3-I and increased p62 protein levels [ 16 ]. Autophagic flux blockade caused by Epg5 deficiency leads to decreased expression of steroidogenic genes, thereby interfering with GC differentiation [ 17 ]. In this study, we improved the stability and bioavailability of resveratrol by constructing resveratrol-βcd, a β-cyclodextrin inclusion complex that is more readily absorbed and reaches a higher C max at a lower dose. In a mouse model of POI, resveratrol-βcd restored the proportion and function of macrophages in the ovarian environment, inhibited the progression of POI, and maintained normal ovarian function. After resveratrol-βcd treatment, macrophage-derived IL-6 and resveratrol acted directly on GCs, promoting GC autophagy and enhancing their estrogen-secreting function. Resveratrol-βcd thus inhibits the progression of POI through combined effects on immune cells and GCs.
Methods Construction and stability identification of resveratrol-βcd Resveratrol-βcd was prepared according to previous literature [ 18 ]. Briefly, at room temperature, resveratrol (MCE, HY-16561) was added to a sulfobutyl ether-β-cyclodextrin solution (βcd, Shanghai Chineway Pharmaceutical Tech. Co., Ltd.) in excess of its intrinsic solubility. The mixture was sonicated and stirred in a water bath. After filtration through a 0.22 μm filter, the solution was freeze-dried. The stoichiometry of the complex was determined by Job’s plot method; the plot showed a maximum at R = 0.5 and a symmetrical shape, indicating a 1:1 complex. Resveratrol-βcd was dissolved in PBS at a stock concentration of 3 mg/ml and diluted in PBS for subsequent in vivo experiments. The resveratrol concentration in the culture medium or blood was measured using a Resveratrol ELISA Kit (Cloud-Clone Corp., Wuhan, China) following the manufacturer’s instructions. For measurement of serum resveratrol, mouse blood was collected into a 1.5 ml tube, allowed to stand at 37 °C for 60 min, and then stored at 4 °C overnight. The samples were centrifuged at 500 × g for 5 min at 4 °C, and the upper serum layer was collected for resveratrol-βcd detection. Cell lines and animals Female C57BL/6 mice aged 3 weeks (10–12 g) or 8 weeks (20–22 g), depending on the experimental requirements, were obtained from Shanghai Model Organisms Center, Inc. For 8-week-old mice, we confirmed a normal estrous cycle by vaginal smears and housed them in a specific pathogen-free (SPF) animal facility with controlled environmental conditions (22–24 °C, 60–70% relative humidity, 12-hour light-dark cycle) and free access to food and water. These animals were cared for following the “Principles of Laboratory Animal Care” guidelines. All experimental procedures involving animals were approved by the Ethics Committee of Shenzhen TopBiotech Co., Ltd. The COV434 cell line and RAW 264.7 cells were obtained from Procell Life Science & Technology Co., Ltd. (Wuhan, China) and cultured in DMEM supplemented with 10% FBS, 2 mM glutamine and 1% P/S (100 μg/ml streptomycin sulfate and 100 U/ml penicillin). The cells were maintained in a saturated atmosphere of 37 °C, 95% air and 5% CO2 and tested negative for mycoplasma contamination. The RAW 264.7 cells used in this experiment were passages 6 to 8, and the COV434 cells were passages 8 to 10. POI mouse model construction The POI mouse model was established by a single intraperitoneal injection of busulfan (30 mg/kg) (Sigma, B2635) and cyclophosphamide (120 mg/kg) (MCE, HY-17420). The day of administration was recorded as day 1 after the first treatment. Resveratrol-βcd was orally administered at a dose of 50 mg/kg every other day. Each group comprised five 8-week-old mice, and each mouse was weighed prior to resveratrol-βcd administration. In the IL-6 neutralization experiment, 200 μg/mouse of IL-6 antibody (Bioxcell, BE0046) was administered via intravenous injection at the same time points as resveratrol-βcd. Reproductive activity analysis was performed in the 8th week after treatment: 5 IU PMSG (Solarbio, P9970) was injected intraperitoneally into the mice, and vaginal plugs were examined the morning after mating. Each group used five mice, and the in vivo experiments were independently repeated three times. Primary mouse granulosa cell isolation The isolation of primary granulosa cells was performed as previously reported [ 55 , 56 ] with slight modifications.
Three-week-old female C57BL/6 mice were euthanized and sterilely dissected to remove the ovaries. The excised ovaries were placed in DMEM/F12 medium containing 1 mg/ml bovine serum albumin and 1% P/S. The tissue surrounding the ovaries was cleaned under a dissecting microscope (Leica, Singapore, M125), and the ovaries were washed twice with the aforementioned medium. GCs were collected by puncturing the excised ovaries with a 25-gauge needle. Cells were resuspended in DMEM/F12 medium (without phenol red) supplemented with 10% charcoal-stripped fetal bovine serum (Vivacell, C3830) and 1% P/S. Cell viability was determined by trypan blue staining, and the purity of the granulosa cells was verified using an FSHR antibody (Proteintech, 22665-1-AP). Cells were cultured in basal medium (without phenol red) alone or with different concentrations of testosterone (Sigma, T1500) and FSH for 48 h at 37 °C in a saturated water vapor atmosphere with 95% air and 5% CO2. The culture medium was collected for E2 measurement by an electrochemical method. Primary macrophage isolation and transwell coculture The isolation of primary macrophages was performed as previously described [ 57 ]. Briefly, mice were euthanized by cervical dislocation and sterilized with 70% ethanol. The abdominal wall was opened to expose the peritoneum, which was sterilized again with 70% ethanol and lifted with sterile forceps; 10 ml of sterile PBS was injected into the posterior side of the abdomen, the body was gently shaken for 10 s, and the fluid containing peritoneal cells was slowly withdrawn. The cells were resuspended in macrophage culture medium (RPMI 1640 with 10% FBS, 50 IU penicillin, 50 μg streptomycin, and 2 mM glutamine) and incubated at 37 °C for 60 min. After incubation, the cells were washed five times with preheated PBS to remove nonadherent cells. Cell proliferation was assessed using the CCK8 assay A cell suspension of 1 × 10⁴ cells/100 μL/well was prepared in a 96-well plate. The plate was incubated in a CO2 incubator for 24 h at 37 °C. Then, 5 μM resveratrol-βcd or resveratrol was added to the plate, and the medium with the drug was replaced every 24 h for 4 days. On days 1, 2, 3 and 4 after drug treatment, the Cell Counting Kit-8 (CCK-8) assay (YEASEN, China) was used to measure cell proliferation. Before adding the CCK-8 solution, the culture medium was removed, and the cells were washed twice with medium to remove any residual drug. Then, 10 μL of CCK-8 solution was added to each well, and the plate was incubated in a CO2 incubator for 2 h before the absorbance at 450 nm was measured. Enzyme-linked immunosorbent assay (ELISA) For the detection of cytokines and steroid hormones, the corresponding ELISA kits were used according to the manufacturer’s instructions. For mouse serum samples, whole blood without anticoagulant was collected and incubated at 37 °C for 30 min, followed by centrifugation at 500 × g for 5 min to obtain the supernatant. The serum was diluted 1:1 with PBS before use as the sample for ELISA. Three replicate wells were set up for each sample. The following ELISA kits were used: Mouse IL-6 (Biolegend, 431304), Mouse TNF-α (Biolegend, 430904), Mouse FSH (Follicle Stimulating Hormone) ELISA Kit (Elabscience, E-EL-M0511C), Mouse AMH (Anti-Mullerian Hormone) ELISA Kit (Elabscience, E-EL-M3015), and, for sIL-6R, the Mouse IL-6R alpha DuoSet ELISA (R&D, DY1830). The absorbance at 450 nm was measured. Flow cytometry A single-cell suspension was prepared by mechanical dissociation.
Briefly, the isolated ovaries were thoroughly washed with PBS, and the fatty tissue was removed. The ovaries were then placed on a 70 μm cell strainer (Biosharp, BS-70-XBS), and the tissue was ground with the rubber head of a 5 ml syringe plunger to obtain a single-cell suspension. The prepared single-cell suspension was washed once with PBS and stained with the following monoclonal antibodies for cell surface phenotypic analysis: CD45-eFluor 450 (eBioscience, 48-0451-82), CD11b-APC-Cy7 (eBioscience, 47-0112-82), F4/80-PE (eBioscience, 12-4801-82), CD86-FITC (eBioscience, 11-0862-82), CD206-APC (eBioscience, 17-2061-82), and IL-6R-PE (eBioscience, 12-1261-80). For IL-6R, a rat IgG2b kappa isotype control, PE (clone eB149/10H5; eBioscience, 12-4031-82) was used. The single-cell suspension was stained with antibodies for 30 min at 4 °C in the dark, and single-color controls were also prepared for compensation adjustment. The samples were washed and resuspended in PBS. Flow cytometry data acquisition was performed using a Cytoflex LX (Beckman Coulter, Brea, CA). Data analysis was conducted using FlowJo 10.0. Supplementary Fig. 3 provides an example of the gating strategy. Histological section preparation and HE staining After the treatment was finished, the mice were euthanized, and the ovaries were isolated. The ovaries were fixed in 4% paraformaldehyde at 4 °C for 24 h, with one change of fixative 12 h after the start of fixation. The ratio of fixative to tissue was 20:1. The ovaries were dehydrated using a gradient of 20% sucrose/PBS and 30% sucrose/PBS, with each gradient lasting 24 h and one solution change in between. The ovaries were embedded in O.C.T. Compound (Sakura, 4583), and the ovarian tissues were sectioned completely at a thickness of 5 μm. After air-drying at room temperature, the sections were washed with PBS for 5 min, stained with hematoxylin for 5 min, differentiated in 1% hydrochloric acid alcohol, rinsed in water for 30 min, counterstained with eosin for 5 min, dehydrated with 95% ethanol, absolute ethanol, and xylene (absolute ethanol:xylene = 1:1), and then mounted and observed using an upright microscope (Leica DM4B, Germany). The number of follicles in the ovaries was counted. Quantitative real-time PCR (qPCR) The ovarian tissues were thoroughly minced, and total RNA was extracted using the AG SteadyPure Universal RNA Extraction Kit (AG, AG21017). The RNA concentration was measured using a NanoDrop 2000. Reverse transcription was performed using the Hifair III 1st Strand cDNA Synthesis SuperMix for PCR (Yeasen, 11137ES10) according to the instructions. qPCR was conducted using Hieff qPCR SYBR Green Master Mix (No Rox) (Yeasen, 11201ES08) on the Bio-Rad CFX96 system. The results were analyzed using the ΔΔCT method [ 58 ]. The primers used were as follows: Indirect immunofluorescence Isolated primary ovarian granulosa cells were plated on cell slides and incubated overnight at 37 °C with 95% air and 5% CO2. The culture medium was removed, and the cells were washed once with precooled PBS. Cells were fixed with 4% paraformaldehyde at room temperature for 15 min and then blocked with 5% BSA/0.3% Triton X-100/PBS for 60 min. Next, FSHR antibody (Proteintech, 22665-1-AP) was added at a 1:50 dilution and incubated overnight at 4 °C.
After washing three times with PBS, the cells were incubated with a secondary antibody (anti-rabbit Alexa Fluor 488; Invitrogen, A32731) diluted 1:2000 in antibody dilution buffer (1% BSA/0.3% Triton X-100/PBS) for 60 min at room temperature in the dark. Following three washes with PBS, the cell slides were mounted with Fluoroshield mounting medium with DAPI (aqueous; Abcam, ab104139) and observed using a confocal microscope (ZEISS, LSM 880). Western blot Cultured cells were washed once with precooled PBS. Then, 100 μl of RIPA buffer (Beyotime, PC101; PMSF added to a final concentration of 1 mM prior to use) was added to each well of a 6-well plate and incubated on ice for 1 min. All the lysate was transferred to a new tube, and its protein concentration was determined using a BCA Protein Concentration Assay Kit (Beyotime, P0012). Then, 5X loading buffer (250 mM Tris-Cl pH 6.8, 10% SDS, 0.5% bromophenol blue, 50% glycerol, 5% β-mercaptoethanol) was added to the lysate, and the mixture was incubated at 95 °C in a water bath for 5 min to prepare the samples for SDS‒PAGE. A 12% or 15% separating gel was used for SDS‒PAGE. The proteins were transferred to PVDF membranes using a semidry transfer system (Bio-Rad Trans-Blot Turbo). The membrane was blocked with 5% BSA/TBST at room temperature for 1 h. Primary antibodies were diluted in 5% BSA/TBST and incubated overnight at 4 °C with shaking. The primary antibodies used in this experiment included LHX8 (Sigma, SAB2101342), NOBOX (Sigma, SAB2105362), SQSTM1 (CST, #88588), LC3I/II (LC3A/B; CST, #12741), β-actin (Proteintech, 20536-1-AP), JAK2 (CST, #3230), and p-JAK2 (CST, #3771). The membrane was washed four times with 15 ml TBST for 5 min each time. Secondary antibodies were diluted in 5% BSA/TBST and incubated at room temperature for 60 min; HRP-linked anti-rabbit IgG (CST, #7074) and HRP-linked anti-mouse IgG (CST, #7076) were used. The membrane was again washed four times with 15 ml TBST for 5 min each time. Following the instructions of the SuperSignal West Pico PLUS Chemiluminescent Substrate (Thermo Scientific, 34580), Solution A and Solution B were mixed at a 1:1 ratio, and the membrane was immersed in the mixture, incubated for 3 min, and visualized using the iBright FL1000 Imaging System (Thermo Fisher Scientific Inc.). Statistical analysis All in vitro and in vivo experiments were independently repeated at least twice, and the data were analyzed using GraphPad 9.0.0 software (GraphPad). Statistical analysis was performed using unpaired two-tailed Student’s t-tests for two-group comparisons and two-way ANOVA for comparisons of more than two groups. P values < 0.05 were considered statistically significant (ns = not significant). Error bars represent ± SD.
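As a worked illustration of the ΔΔCT analysis named in the qPCR subsection above, the following Python sketch computes relative expression; the gene, group labels and CT values are invented for demonstration and are not the study's data.

import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene vs. a reference gene (e.g. beta-actin),
    normalised to the control group, via 2^-(ddCT)."""
    d_ct = np.asarray(ct_target) - np.asarray(ct_ref)            # dCT, treated samples
    d_ct_ctrl = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_ref_ctrl))
    dd_ct = d_ct - d_ct_ctrl                                     # ddCT per sample
    return 2.0 ** (-dd_ct)                                       # relative expression

# e.g. Ddx4 in treated ovaries vs. untreated controls (hypothetical CT values)
print(relative_expression([24.1, 24.5], [17.2, 17.4], [26.0, 26.3], [17.1, 17.3]))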
Results Resveratrol-βcd can improve the stability of resveratrol According to previous studies, the resveratrol-βcd complex can enhance the solubility of trans-resveratrol [ 18 ]. However, it remained unclear whether the complex can also enhance the stability of resveratrol. We first evaluated the detection of resveratrol-βcd using the resveratrol ELISA kit; the kit provided accurate measurements of resveratrol-βcd concentrations (Supplementary Fig. 1 ). We then used the ELISA method to monitor the decline of resveratrol in the culture medium. Over a 48-hour period, the resveratrol content in the resveratrol-βcd group remained at 84.3%, 9.5 percentage points higher ( p = 0.0002) than the resveratrol group’s 74.8% (Fig. 1 A). Additionally, we evaluated the impact of resveratrol on cell proliferation in vitro. The results showed that at a concentration of 5 μM, resveratrol had almost no effect on the proliferation of primary ovarian granulosa cells or COV434 ovarian granulosa cells (Fig. 1 B, C). The pharmacokinetic parameters and safety of resveratrol in vivo were also evaluated. We performed pharmacokinetic analysis by orally administering resveratrol-βcd or resveratrol to mice at doses of 25 mg/kg (0.1 mmol/kg) or 50 mg/kg (0.2 mmol/kg). At 50 mg/kg, the C max value of the complex was 2.17 ± 0.27 ng/ml, while that of the control group was 0.24 ± 0.07 ng/ml, indicating a markedly higher peak concentration. There was also a significant difference in the area under the curve (AUC) between the two groups (resveratrol-βcd: 101,573 ± 2912 vs. resveratrol: 60,259 ± 1305, p = 0.0003) (Fig. 1 D). The results of the 25 mg/kg group were consistent with those of the 50 mg/kg group (Fig. 1 E). Furthermore, we evaluated the effects of resveratrol-βcd on the physiological and immune status of mice. We administered resveratrol-βcd at doses of 25 mg/kg or 50 mg/kg every other day via oral gavage. Resveratrol-βcd had a minimal effect on mouse body weight (Fig. 1 F). Based on our previous research, resveratrol has a strong activating effect on dysfunctional macrophages, so the activation state of macrophages was also evaluated during treatment. Over the observation period, the peripheral blood level of TNFα, an important cytokine derived from activated monocyte-macrophages, showed minimal changes (Fig. 1 G). On day 14 posttreatment, we euthanized the mice, and there were no significant differences in spleen size among the groups (Fig. 1 H, I). Resveratrol-βcd suppresses POI in a mouse model Given the safety of resveratrol-βcd confirmed above, busulfan and cyclophosphamide (B/C)-treated mice, the POI mouse model, were treated with resveratrol-βcd at a dosage of 50 mg/kg every other day via oral gavage (Fig. 2 A). The combination of cyclophosphamide and busulfan is frequently employed for creating a mouse model of POI because of its rapid model establishment and high success rate [ 19 – 22 ]. The body weight of mice in the POI model group gradually decreased throughout the model construction period, while mice treated with resveratrol-βcd exhibited significant weight gain compared to the POI group (Fig. 2 B). After the treatment period, the mice were dissected, and the weight of the ovaries was measured.
Compared to the control group, the ovarian weight of mice decreased by 46.24% after B/C treatment, but there was significant recovery in ovarian weight after resveratrol-βcd treatment (Fig. 2 C, D). During the treatment process, we evaluated ovarian function-related phenotypes such as fertility rate, estrous cycle, and the number of ovarian follicles, and also examined genes and proteins that are clearly positively correlated with reproductive cell function. The use of resveratrol-βcd resulted in marked recovery of the numbers of primary, secondary and antral follicles in the ovaries (Fig. 2 E, F). We also examined LIM homeobox gene 8 (LHX8) and newborn ovary homeobox (NOBOX), two proteins that are associated with ovarian development. LHX8 plays an important role in the formation and maintenance of primordial follicles as well as early follicle development [ 23 ]. Lack of NOBOX leads to accelerated loss of oocytes from primordial follicles to growing follicles and induces fibrosis in ovarian tissue in female mice [ 24 ]. The expression levels of both proteins were moderately enhanced after resveratrol-βcd treatment compared to those in the POI group (Fig. 2 G). Transcript levels of some typical markers in the ovary were also examined, and the levels of Ddx4 (also known as mouse vasa homolog) and Pou5f1 (also known as octamer-binding transcription factor 4) exhibited significant recovery after treatment (Fig. 2 H). The decrease in blood AMH was notably inhibited (Fig. 2 I), the level of FSH significantly decreased, and a significant recovery was seen in the level of E2 (Fig. 2 J). Examination of estrous cycles after treatment showed that the estrous cycle of POI mice had changed noticeably, with a significant reduction in the duration of the estrous phase; this condition was partially alleviated in the treatment group (Fig. 2 K). After treatment, reproductive activity analysis was conducted on the treatment group and the POI group: 5 IU PMSG was injected intraperitoneally into the mice, and vaginal plugs were examined the morning after mating. POI mice with vaginal plugs did not produce fertilized embryos, indicating evident infertility in the POI mice; the resveratrol-βcd treatment group showed partial restoration of this condition (Fig. 2 L). Resveratrol-βcd maintained ovarian granulosa cell function and promoted GC autophagy E2 can be produced in various organs throughout the body, but it is mainly derived from ovarian granulosa cells and adipose tissue [ 25 ]. It has been proven that autophagy in GCs plays an important regulatory role in steroidogenesis [ 26 ]. Therefore, we analyzed the phenotypes and autophagy status of ovarian granulosa cells. Primary ovarian granulosa cells were isolated from the POI model or resveratrol-βcd treatment group. The levels of FSHR were measured using indirect immunofluorescence (Fig. 3 A, B) and showed a significant increase in the treatment group. The FSHR western blot results were consistent with the indirect immunofluorescence results (Fig. 3 C, E). It has been reported that resveratrol can act as an autophagy activator and modulate cell metabolism and differentiation [ 27 , 28 ]. We therefore analyzed the autophagy levels of these cells: autophagy in GCs derived from the POI mouse model was significantly inhibited
(Fig. 3 D, F), but there was a significant recovery in the autophagy status of primary ovarian granulosa cells after treatment with resveratrol-βcd. Additionally, GCs derived from the treatment group exhibited higher levels of E2 secretion (Fig. 3 G). This suggests that resveratrol-βcd effectively maintains the normal function of GCs during the progression of POI. Furthermore, we verified the activation of autophagy by resveratrol-βcd in vitro. After treatment with resveratrol, there was a modest elevation in the autophagy levels of primary granulosa cells: western blot analysis showed an increase in the LC3II/LC3I ratio and a decrease in SQSTM1/p62 levels (Fig. 3 H, I). However, when primary granulosa cells were stimulated with different concentrations of resveratrol-βcd in vitro, the levels of E2 showed only a slight increase with no significant difference between groups (Fig. 3 J), which is inconsistent with the results from the in vivo experiments (Fig. 2 L). This suggests that other factors may be involved in the regulation of ovarian granulosa cells. Resveratrol-βcd can activate macrophages to secrete IL-6, antagonizing the process of POI Our previous results showed that resveratrol can modulate the relationship between macrophages and T cells [ 18 ]. We therefore evaluated the changes in the proportion and polarization of ovarian macrophages in the POI model and the resveratrol-βcd treatment group. The analysis of macrophage levels in the ovaries revealed a significant decrease in the macrophage ratio in the POI group; after treatment with resveratrol-βcd, the macrophage ratio recovered significantly (Fig. 4 A, B). Analysis of the M1 and M2 macrophage proportions showed some decrease in the proportion of M2 macrophages in the POI mouse model. After treatment with resveratrol-βcd, the proportion of M2 macrophages exhibited some recovery, but without statistical significance (Fig. 4 C, D). The proportion of M1 macrophages significantly increased (Fig. 4 E). We also evaluated the macrophage-activating ability of resveratrol-βcd in vitro. The activating effect of resveratrol-βcd on RAW 264.7 cells, a macrophage-like cell line, was minor (Fig. 4 F), but resveratrol-βcd directly promoted the differentiation of primary macrophages into the M1 phenotype (Fig. 4 G). Apart from transcription markers, we analyzed the cytokines that may be derived from macrophages in serum and ovarian cells during the treatment of the POI mouse model. The level of TNFα in the serum of POI mice showed some increase, and there was no significant difference in TNFα levels between the resveratrol-βcd treatment group and the POI group (Fig. 4 H). Interestingly, after treatment with resveratrol-βcd, the level of IL-6 increased significantly (Fig. 4 I). To elucidate the role of IL-6 in antagonizing the process of POI during resveratrol-βcd treatment, we conducted neutralization experiments using an anti-IL-6 antibody (αIL-6). The anti-IL-6 antibody significantly reduced IL-6 levels in peripheral blood (Fig. 4 J) and partially neutralized the effects of resveratrol-βcd: on day 20 posttreatment, the ovarian weight and the number of growing follicles in the αIL-6 neutralization group were significantly lower than those in the group treated with resveratrol-βcd alone (Fig. 4 K, L).
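The cytokine and hormone levels reported above come from plate-based ELISAs. As a hedged aside on how such absorbance readings are typically converted into concentrations, the sketch below fits a four-parameter logistic (4PL) standard curve and inverts it; the standard concentrations, optical densities and sample value are invented, and individual kits may prescribe different curve models.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL curve: a = response at zero concentration, d = response at
    saturation, c = inflection point (EC50-like), b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([7.8, 15.6, 31.25, 62.5, 125, 250, 500])   # pg/ml standards
std_od = np.array([0.08, 0.15, 0.27, 0.49, 0.85, 1.40, 2.05])  # A450 readings

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 100.0, 2.5])

def od_to_conc(od, a, b, c, d):
    """Inverse of the 4PL: map an absorbance back to a concentration."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(od_to_conc(0.6, *params))            # illustrative sample value, pg/ml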
Resveratrol-βcd and IL-6 can synergistically promote autophagy IL-6 is a pleiotropic cytokine that participates in the physiological activities of almost all organ systems, and recent studies have shown that it can induce autophagy [ 29 ]. IL-6 belongs to the IL-6 cytokine family, whose signaling is mediated through glycoprotein 130 (gp130) [ 30 ]. IL-6 cannot directly activate gp130; it must first bind to IL-6R (the classic signaling pathway [ 31 ]) to activate intracellular signaling such as the JAK/STAT pathway [ 30 ]. However, only a few cell types, including liver cells and several subsets of leukocytes, express IL-6R on their cell surface. To clarify whether IL-6 can directly act on ovarian granulosa cells, we first analyzed the IL-6R level on the surface of primary ovarian granulosa cells, which was found to be relatively low (Fig. 5 A). In addition to cell surface IL-6R, there is also a soluble form (sIL-6R), which binds IL-6 with an affinity similar to that of membrane-bound IL-6R; the IL-6/sIL-6R complex can induce the formation of gp130 homodimers on almost all cell types, a process known as IL-6 trans-signaling [ 32 ]. Consistent with previous reports [ 33 ], our results showed that resveratrol-βcd can induce the secretion of sIL-6R from primary macrophages (Fig. 5 B). At the same time, resveratrol-βcd weakly stimulated the secretion of sIL-6R by primary granulosa cells in vitro (Fig. 5 B). To further determine whether the IL-6/sIL-6R signaling pathway is involved in the effects of resveratrol-βcd on GCs, we also examined downstream signaling. Resveratrol-βcd enhanced the phosphorylation of JAK2 over time in cocultures of primary granulosa cells and macrophages in vitro (Fig. 5 C). When an IL-6-neutralizing antibody was added, the coculture failed to activate the JAK2 signaling pathway in primary granulosa cells (Fig. 5 C). We also observed that during coculture, the autophagy level of granulosa cells increased alongside JAK2 activation (Fig. 5 C, D). We further evaluated the impact of resveratrol-βcd on autophagic flux in GCs using primary granulosa cells transfected with an mRFP-GFP-LC3 lentivirus. There was a significant increase in the number of LC3 puncta in both the resveratrol-βcd alone group and the coculture group with resveratrol-βcd and macrophages (Fig. 5 E, F). The ratio of red to yellow puncta was significantly enhanced after adding macrophages, indicating that macrophages and resveratrol-βcd act synergistically on autophagic flux (Fig. 5 G). To confirm this synergistic effect, we stimulated granulosa cells derived from B/C-treated mice with IL-6 and resveratrol-βcd together; macrophage-derived cytokines and resveratrol-βcd further enhanced the level of autophagy (Fig. 5 H, I). When macrophages were added, the secretion of E2 by primary granulosa cells was significantly enhanced (Fig. 5 J), and this promoting effect was inhibited when the IL-6 antibody was added (Fig. 5 K). In conclusion, macrophage-derived IL-6 stimulated by resveratrol-βcd can synergistically enhance the autophagic function of granulosa cells and their E2 secretion.
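For readers unfamiliar with the mRFP-GFP-LC3 readout above, a minimal sketch of the flux quantification follows: GFP is quenched in acidic autolysosomes, so yellow (RFP+GFP+) puncta mark autophagosomes and red-only (RFP+GFP-) puncta mark autolysosomes, and a rising red:yellow ratio indicates increased flux. The per-cell counts and the choice of a two-group t-test here are invented for illustration and are not the study's data.

import numpy as np
from scipy.stats import ttest_ind

red_resv = np.array([14, 18, 11, 16, 20, 13])     # resveratrol-βcd alone
yellow_resv = np.array([9, 11, 8, 10, 12, 9])
red_cocx = np.array([22, 27, 19, 25, 30, 24])     # plus macrophage coculture
yellow_cocx = np.array([8, 9, 7, 10, 9, 8])

ratio_resv = red_resv / yellow_resv               # red:yellow ratio per cell
ratio_cocx = red_cocx / yellow_cocx

t, p = ttest_ind(ratio_cocx, ratio_resv)          # two-group comparison
print(ratio_resv.mean(), ratio_cocx.mean(), p)    # higher ratio => more flux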
Discussion The function of resveratrol in promoting autophagy has been validated in various cell types. In vitro experiments have shown that resveratrol can directly inhibit mammalian target of rapamycin (mTOR) [ 34 ], a key regulator of autophagy [ 35 ], and can enhance the interaction between mTOR and the mTOR inhibitor DEPTOR (DEP domain-containing mTOR-interacting protein) [ 36 , 37 , 29 , 39 ]. Resvega, a commercial product containing resveratrol, has been shown to upregulate autophagic flux and autolysosome formation [ 27 ]. These results suggest that resveratrol could promote the autophagic pathway in GCs. In this study, our results demonstrate that resveratrol can directly act on primary granulosa cells and promote their autophagy. On the other hand, resveratrol can also exert an antagonistic effect on POI by modulating the immune response [ 12 ]. In infectious disease research, resveratrol has been shown to activate M2 macrophages during pathogen infection [ 38 – 40 ]. Our previous report also showed that resveratrol can balance the functions of macrophages and T cells [ 18 ]. We improved the stability of resveratrol by constructing resveratrol-βcd, making it easier for the body to absorb. In this experiment, the use of β-cyclodextrin increased the C max value of resveratrol in vivo (Fig. 1 D, E), which is beneficial for enhancing its local effects in the body. Many groups have extensively studied the ovarian immune microenvironment in patients with POI. In the peripheral blood of POI patients, there is a significant decrease in the proportion and proliferation of Tregs [ 41 ]. However, the levels of IL-10, IL-4 and TGF-β in the serum of POI patients do not differ significantly from those of healthy controls [ 41 ], suggesting that the immune dysfunction caused by POI may be confined to the ovary. In the mouse model of POI, there is a high Th1 response level in the ovaries, and the high expression of IFN-γ and TNF-α leads to granulosa cell apoptosis. In this study, the peripheral blood levels of TNF-α in POI mice did not show a significant increase (Fig. 4 H), but we observed a significant increase in IL-6 levels in both peripheral blood and ovaries after resveratrol treatment (Fig. 4 I, J). IL-6 is a multifunctional cytokine [ 44 – 50 ], and its role in human oocyte maturation and subsequent embryo development is still uncertain. Some studies have suggested that high levels of IL-6 in follicular fluid are beneficial for oocyte maturation [ 42 , 52 ]. However, other studies have drawn the opposite conclusion: higher levels of IL-6 during in vitro fertilization cycles are associated with poorer embryo quality and lower chances of pregnancy [ 43 , 44 ]. There are also studies suggesting that IL-6 does not affect the clinical pregnancy rate in IVF-ET and therefore does not affect oocyte quality [ 45 , 46 ]. Our findings indicate that IL-6 exerts its effects on ovarian granulosa cells via the IL-6/sIL-6R pathway, as GCs lack IL-6R (Fig. 5 A), while sIL-6R derived from macrophages can act on GCs (Fig. 5 B-C). It has been reported that IL6Rα mRNA and protein are highly expressed in granulosa cells of progressing preovulatory follicles [ 47 , 48 ]. Therefore, the role of IL-6 may need to be considered separately at different stages of follicle development, and the dynamic changes in IL-6/IL-6R and their function during follicle development require further investigation in subsequent studies.
There have been reports suggesting that IL-6 inhibits FSH-induced production of estradiol and progesterone in granulosa cells [ 49 ]. However, our results demonstrate that the IL-6 generated after resveratrol-βcd treatment can enhance autophagy levels in granulosa cells and promote their activation (Fig. 5 C). Consistently, the use of IL-6 antibodies inhibited the ability of resveratrol-βcd to antagonize POI (Fig. 4 K, L). The inconsistency with previous reports may be due to phenotypic differences in granulosa cells at different stages of follicular development or to differences in mouse strain [ 43 ]. It is worth noting that the cell-type-specific induction of interleukin production is another important function of resveratrol. Resveratrol can increase the expression of IL-1β and IL-6 in peripheral blood lymphocytes [ 50 ], but its stimulating effect on cell lines such as RAW 264.7 is not significant [ 50 ]. This is consistent with the results of our study (Fig. 4 F, G), where resveratrol-βcd failed to significantly activate RAW 264.7 cells in vitro. Enhanced IL-1β and IL-6 production is characteristic of proinflammatory states and aids in the differentiation and function of T helper lymphocytes [ 51 ], but it also plays a role in tissue regeneration [ 52 ]. These results suggest that resveratrol-βcd may act through specific receptors or signaling pathways to regulate cell development; further research on resveratrol could focus on these signaling pathways and on resveratrol-Treg regulation. Our research also has certain limitations. IL-6, as a multifunctional cytokine [ 53 ], not only induces autophagy but also has profound effects on other cells within the ovary and on ovarian functions [ 54 ]. Additionally, the function of resveratrol is not restricted to stimulating macrophages; it can directly affect several cellular signaling pathways [ 10 , 11 ]. In this study, we were unable to sufficiently clarify the interrelationships among these potential mechanisms. This will be a target for subsequent research.
Background The ovarian environment of premature ovarian insufficiency (POI) patients exhibits immune dysregulation, which leads to excessive secretion of numerous proinflammatory cytokines that affect ovarian function. An abnormal level of macrophage polarization directly or indirectly inhibits the differentiation of ovarian granulosa cells and steroid hormone production, ultimately leading to POI. Resveratrol, as a health supplement, has been widely recognized for its safety. There is substantial evidence indicating that resveratrol and its analogs possess significant immune-regulatory functions. It has also been reported that resveratrol can effectively inhibit the progression of POI. However, the underlying immunological and molecular mechanisms through which resveratrol inhibits the progression of POI are still unclear. Results Our preliminary reports have shown that resveratrol-βcd, the beta-cyclodextrin complex of resveratrol, significantly enhances the stability of resveratrol. Resveratrol-βcd could regulate the dysfunctional immune status of macrophages and T cells in the tumor microenvironment. In this study, we administered resveratrol-βcd to busulfan- and cyclophosphamide (B/C)-treated mice, which served as a POI model. After resveratrol-βcd treatment, the levels of IL-6 in the ovaries were significantly increased, and the progression of POI was suppressed. IL-6 activated granulosa cells (GCs) through soluble IL-6R (sIL-6R), promoting autophagy in GCs. Resveratrol-βcd and IL-6 had a synergistic effect on enhancing autophagy in GCs and promoting E2 secretion. Conclusions We partially elucidated the immune mechanism by which resveratrol inhibits the progression of POI and the autophagy-regulating function of GCs. This provides a theoretical basis for using resveratrol to prevent POI in future studies and for clinical guidance. Supplementary Information The online version contains supplementary material available at 10.1186/s13048-024-01344-0.
Electronic supplementary material Below is the link to the electronic supplementary material.
Acknowledgements Not applicable. Author contributions B.H. initiated and designed the research; W.Z. and X.Z. performed the experiments and collected and analyzed the data; B.H. prepared the figures and wrote the manuscript. B.H. and X.Z. contributed equally to the manuscript. All authors contributed to manuscript revision and read and approved the submitted version. Funding This work was supported by the Project of Traditional Chinese Medicine Bureau of Guangdong Province, China (Grant No. 20221095). Data availability The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate Animals used in the study were cared for following the “Principles of Laboratory Animal Care” guidelines. All experimental procedures involving animals were approved by the Ethics Committee of Shenzhen TopBiotech Co., Ltd. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
J Ovarian Res. 2024 Jan 15; 17:18
oa_package/a5/12/PMC10789063.tar.gz
PMC10789064
0
Introduction Average sickness absence rates vary between 3% and 6% across European countries. Long-term sickness absence (LTSA) in particular is costly, with costs amounting to up to 2.5% of a country’s gross domestic product [ 1 ]. In the Netherlands, most costs are carried by employers, as they not only lose the productivity of the sick-listed employee but also have to continue paying the salary of sick employees for two years [ 2 ]. Reducing sickness absence is important for society, the employer, and in particular for employees, as being in employment is often associated with better quality of life, health and physical functioning [ 3 ]. Sickness absence is related to various factors, including the cause of the sickness absence, the age and gender of the employee, and the work environment. The most common causes of LTSA are diseases of the circulatory system, mental disorders and diseases of the musculoskeletal system [ 4 , 5 ]. In the Netherlands, the sickness absence percentage increases with age until 65 years of age. Interestingly, employees between the ages of 65 and 75 exhibit a lower sickness absence percentage than those aged 55 to 64 years, suggesting a “healthy worker” effect. This phenomenon implies that healthier individuals tend to remain in the workforce longer, while those with health issues exit the labor process at an earlier stage [ 4 ]. Extensive research has been done to investigate gender differences in sickness absence. The predominant finding of these studies is that, on average, female employees report sick more frequently and experience longer periods of sickness absence than male employees [ 6 – 11 ]. Leijon et al. (1998) investigated gender trends in sickness absence for various causes, and found that women had both a higher sickness absence frequency and a longer sickness absence duration than men [ 8 ]. Similarly, Arcas et al. (2016) found that among employees with diseases of the musculoskeletal system, women had a longer sickness absence duration than men [ 9 ]. However, in some older age groups, they observed a longer absence duration for men. Labriola et al. (2011) focused on long-term sickness absence and found that the frequency was nearly 40% higher for men than for women [ 10 ]. Bekker et al. (2009) conducted a literature review on the relationship between gender and sickness absence, finding that women are generally absent more frequently, especially when it comes to short-term absences [ 12 ]. They also found that gender differences in sickness absence are influenced by various factors such as country of residence, age, and professional group. Several other studies focus on factors explaining the so-called gender duration gap, such as parenthood, type of work, and social roles [ 13 – 17 ]. Nilsen et al. (2017) reviewed eight longitudinal studies and found that although women report higher work-family conflict than men, this did not explain the gender difference in sickness absence [ 17 ]. Angelov et al. (2013) investigated the effect of parenthood on sickness absence and found that entering parenthood increased women’s absence rate compared to the corresponding rate for men. They also found that this effect was long-lasting and remained for at least 16 years after the birth of the first child [ 13 ]. On the other hand, Mastekaasa (2013) analyzed data from 23 EU countries plus Norway and found that dependent children are associated with lower sickness absence among married/cohabiting women [ 16 ]. Casini et al. 
(2013) also studied factors that could explain gender differences in absence duration. They found that job strain in particular is linked to a longer absence duration for women compared to men [ 14 ]. Similarly, Lidwall et al. (2009) found that women have a higher risk of long-term absence than men when working in high-strain jobs, especially in the private sector [ 7 ]. By contrast, some other studies have found limited evidence of an association between sickness absence duration and gender. For example, Cornelius et al. (2010) conducted a systematic review and found only limited evidence to support an association between sickness absence duration and gender [ 18 ]. A study by Spierdijk et al. (2009) on self-employed individuals failed to identify any significant gender differences in sickness absence duration [ 19 ]. While most studies have primarily focused on statistical measures such as average duration of sickness absence or sickness absence frequency, our study takes a more comprehensive approach. In addition to determining descriptive statistical measures, we investigate gender differences by analyzing recovery rates across various diagnoses. This thorough analysis provides a more detailed understanding of how gender influences sickness absence and the trajectory of recovery across various diagnoses over time.
Methods Study population and design In the Netherlands, it is a requirement for all employers to ensure that their employees have access to occupational health care, which is typically provided by an occupational health service (OHS). An OHS is responsible for registering sickness absences and for providing guidance to sick-listed employees through medical consultations and advice for returning to work (RTW). When an employee reports sick, the OHS registers this in the sickness absence register. Sickness absence can be due to any (i.e., work-related or non-work-related) physical or mental illness or injury. In the Netherlands, the employer financially compensates sickness absence for a period of 104 weeks. Most employers cover 100% of the worker’s salary in the first year of sickness absence and 70% in the second year. The OHS follows employees during the 104 weeks of sickness absence, after which the employee may apply for a disability pension provided by the Employee Insurance Agency (UWV) and the employer may end the job contract. For our study we retrieved data from the sickness absence register of a large Dutch national OHS, covering approximately 1.24 million Dutch employees from about 11.6 thousand companies of various economic sectors throughout the country. The dataset included all reported employee sickness cases from January 2010 to December 2020. When an employee experienced multiple periods of sickness absence, we included each separate period in the dataset as a single case. For each case, we calculated the duration of sickness, defined as the interval from the first to the last registered day of sickness absence. We included cases of employees aged between 16 and 70 with a sickness duration between 1 day and 104 weeks. We excluded cases diagnosed as pregnancy or pregnancy-related diseases. Most of the sickness absence cases we observed were short-term, typically lasting less than two weeks. These short-term absences were commonly due to medical conditions like upper respiratory infections or gastrointestinal disturbances. For longer sickness absence periods, employees consult an occupational health physician (OHP) for RTW advice. In the Netherlands, it is mandatory for employees on sick leave to consult an OHP within 42 days of their absence. The OHP then documents the diagnosis in the OHS register. To classify the employees’ diagnoses, the OHS uses the Dutch classification system for occupational and social insurance physicians (CAS) [ 20 ]. This system is based on the ‘International Statistical Classification of Diseases and Related Health Problems’ (ICD-10) and contains similar main categories. An important distinction between the CAS system and the ICD-10 is the classification of neoplasms. In the CAS system, neoplasms are categorized under the relevant organ system, whereas in the ICD-10, they are considered a distinct class [ 21 ]. For our detailed analysis, we used only OHP-diagnosed cases, excluding cases with unknown diagnoses. Analysis Statistical analyses were performed in Python 3.10 using the lifelines survival analysis library [ 22 ]. Employees were right-censored when the job contract ended during the sickness absence. Data on sickness duration were analyzed descriptively using the mean, median, and standard deviation. Descriptive statistics were computed for each diagnostic category and gender. For each diagnostic category, the difference in means between genders was calculated. 
We focused on differences in means, including confidence intervals, instead of applying tests of significance. This is because the latter are heavily influenced by sample size and, in a dataset of this size, will almost always demonstrate a significant difference, even for small differences that may not have practical significance [ 23 ]. We analyzed the most important causes of sickness absence, in terms of both frequency and duration, in more detail. For these diagnostic categories, hazard rates were determined using the Nelson-Aalen estimator [ 24 ]. The estimated hazard function gives the recovery rate at each point in time t, defined as the probability that an employee will recover in the next instant, conditional on the employee still being sick at time t. For instance, if 100 employees are absent at the start of a day, and 2 have recovered by the start of the next day, the daily recovery rate is 0.02. Considering that the onset and recovery of sickness absence are not evenly distributed across weekdays, with a higher percentage of employees reporting recovery on Mondays, we used a one-week moving average filter to smooth the hazard rate at each point in time t (a code sketch of this procedure follows at the end of this section). Ethical approval Ethical approval was not necessary as the Medical Research involving Human Subjects Act does not apply to studies of anonymized register data. The Medical Ethics Committee of the University of Groningen confirmed that ethical clearance was not necessary for this study.
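To make the estimation procedure concrete, the following is a minimal sketch of the hazard-rate analysis described above, using the lifelines library named in the text. The column names (duration, recovered, sex, diagnosis) and the example DataFrame are illustrative assumptions, not fields of the actual OHS register.

import pandas as pd
from lifelines import NelsonAalenFitter

def smoothed_recovery_rate(durations, observed, window=7):
    """Daily recovery (hazard) rate, smoothed with a one-week moving average.

    The Nelson-Aalen estimator gives the cumulative hazard
    H(t) = sum over event times t_i <= t of d_i / n_i,
    where d_i is the number of recoveries at t_i and n_i is the number still
    sick just before t_i. With durations recorded in whole days, the increments
    of H between successive days approximate the daily recovery rate.
    """
    naf = NelsonAalenFitter()
    naf.fit(durations, event_observed=observed)  # observed=0 marks right-censored cases
    cumulative_hazard = naf.cumulative_hazard_.iloc[:, 0]
    daily_rate = cumulative_hazard.diff().fillna(0.0)
    return daily_rate.rolling(window, min_periods=1).mean()

# Illustrative usage: df holds one row per sickness case; recovered is 0 when
# the job contract ended during the absence (right-censoring).
# df = df[(df.duration >= 1) & (df.duration <= 104 * 7)]  # inclusion criteria
# for sex, group in df[df.diagnosis == "mental disorders"].groupby("sex"):
#     rate = smoothed_recovery_rate(group["duration"], group["recovered"])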
Results Descriptive results In the period between January 2010 and December 2020 there were 4,998,455 sickness cases that fulfilled our inclusion criteria, cf. Fig. 1 . Of these cases 52% were male and 48% were female. This closely reflects the male/female ratio of the Dutch working population during the same period (about 53% male and 47% female) [ 25 ]. The mean absence duration was 23 ± 42 days. Of these approximately 5 million cases, about 11% ( n = 562,395) were consulted and diagnosed by an occupational health physician (OHP). For further analysis, we excluded cases where employees were diagnosed with an unknown code ( n = 6). Tables 1 and 2 present descriptive statistics regarding sickness absence duration for men and women across various diagnostic categories. The most prevalent causes of sickness absence were diseases of the musculoskeletal system, mental disorders, diseases of the respiratory system, and diseases of the digestive system. For women, the most prevalent diagnoses were mental disorders, whereas for men diseases of the musculoskeletal system were most prevalent. Across all OHP-diagnosed cases, the mean sickness absence duration was 158 days for women and 117 days for men, with an average gender difference of 41 days (95% confidence interval: 40.3-42.0). Among the most prevalent causes, mental disorders had the longest sickness absence duration (186 ± 162 days). Except for diagnoses of diseases of the blood and blood-forming organs, women had a longer average sickness absence duration than men. For each specific diagnosis, the 95% confidence interval indicates that the results are not only statistically significant but also practically relevant. The largest gender differences in sickness absence duration were found for cases diagnosed with diseases of the musculoskeletal system and cases diagnosed with diseases of the genitourinary system. Analytical results The most important causes of sickness absence, in terms of both frequency and duration, were diseases of the musculoskeletal system, mental disorders, diseases of the nervous system and diseases of the circulatory system. We explored these diagnostic categories in more detail. Figure 2 displays the recovery rates for these diagnostic categories. For male employees with mental disorders, the recovery rate remains rather constant during the initial 1.5 years. This suggests that the conditional probability of recovery does not change during this period and is independent of the actual sickness absence duration. In contrast, for female employees with mental disorders, we observe considerably lower recovery rates in the first few months. This indicates that a smaller proportion of women recover during this early period relative to men. For diseases of the musculoskeletal system, the gender difference in recovery rate is even more pronounced, particularly in the initial few months, where the percentage of women reporting recovery is relatively low. However, after approximately three months, the recovery rates become comparable between women and men. For diseases of the nervous system we observe a small difference in recovery rate during the initial stage of sickness absence. The recovery rates for diseases of the circulatory system are similar for men and women. Table 3 shows the proportion of cases with either mental or musculoskeletal disorders that have recovered at three different points in time (13, 26, and 52 weeks). 
We see that the proportion of recovered cases differs most between genders within the first three months. For instance, after 3 months, 67% of men with musculoskeletal disorders have recovered, versus 51% of women. For mental disorders, the proportions are 44% for men and 37% for women. Besides gender differences, the recovery curves show another interesting pattern around one year of sickness absence. Around that period, we observe a sudden increase in the recovery rate for both men and women. This might be an effect of the decrease in salary in the second year of sickness absence, as most employers reduce the salary to 70% after the first year. We will investigate this topic in more detail in a future paper.
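As a companion to Table 3, the proportion of cases recovered at fixed time points can be read off a Kaplan-Meier estimate: the proportion recovered at time t is 1 - S(t), where S(t) is the probability of still being sick at t. A minimal sketch, assuming the same illustrative duration/recovered columns as in the Methods sketch:

from lifelines import KaplanMeierFitter

def proportion_recovered(durations, observed, weeks=(13, 26, 52)):
    # Fit the survival function S(t) of sickness duration (in days).
    kmf = KaplanMeierFitter()
    kmf.fit(durations, event_observed=observed)
    times = [w * 7 for w in weeks]  # convert weeks to days
    surviving = kmf.survival_function_at_times(times)
    # Proportion recovered by week w is 1 - S(7w).
    return {w: 1.0 - s for w, s in zip(weeks, surviving)}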
Discussion We studied sickness absence patterns among men and women for different diagnostic categories. The results show that for almost all categories the average sickness absence duration was remarkably longer for women than for men. The largest gender differences in sickness absence duration were found for cases with musculoskeletal disorders and diseases of the genitourinary system. The large difference in absence duration for diseases of the genitourinary system can be explained by the fact that this category includes diseases of the breast, such as breast carcinoma, which are more prevalent among women and are generally associated with a long sickness absence duration. The most common causes of sickness absence were diseases of the musculoskeletal system, mental disorders, diseases of the nervous system and diseases of the circulatory system. For these categories gender differences were analyzed in more detail by determining the recovery rate over time. In what follows, we compare our current results with prior studies. To the best of our knowledge, there are only a few Dutch studies exploring recovery rates in depth, which might be due to the fact that Dutch regulations for LTSA are rather generous compared to other countries. However, given that gender differences exist in many Western countries, the observed differences in recovery rates during the initial months might be observed in other countries as well. The general opinion is that the probability of recovery decreases with increasing sickness duration, in other words, a negative duration dependence [ 6 , 26 ], although some studies report a positive duration dependence [ 27 ]. Koopmans et al. (2009) studied long-term sickness absence between 1998 and 2001 and found declining recovery rates over time [ 26 ]. In contrast, our study found relatively stable recovery rates during the entire duration of sickness absence, except for the initial months. One important difference between our study and Koopmans et al.’s is that we conducted subgroup analyses by gender and type of disease. Nonetheless, given that recovery rates were relatively stable across all diagnoses and genders, we anticipate a similarly stable combined recovery rate. Another key difference is that sickness policies in the Netherlands have changed over time. Before 2004, employers were financially responsible for sick employees for only one year, after which individuals could apply for a disability pension. Under the current regulation employers are financially responsible during the first two years of sickness absence. This extended period of financial responsibility could potentially motivate employers to more actively support employee recovery and thus possibly influence recovery rates. Conversely, the prospect of a disability pension under the previous policy might theoretically also have had a stimulating effect on the recovery rate. Joling et al. (2006) examined duration dependence during sickness absence and found that the recovery rate increased over time [ 27 ]. Their study analyzed both short-term and long-term sickness absence for employees who reported sick in 1990. In contrast, our study only focused on long-term sickness absence and included more recent cases. Roelen et al. (2012) investigated the recovery rates for employees that had been sick-listed between 2006 and 2008 with mental disorders and also found that women resumed their work later than men [ 6 ]. This finding is consistent with our results for mental disorders. 
By examining recovery rates for different diagnostic groups over the entire duration of sickness absence, our study additionally found that not only does the average sickness duration differ between men and women, but there are also remarkable differences in the recovery rates. In particular during the first months of sickness absence, women have a lower recovery rate than men, indicating that women have a certain “delay” in recovery. In the following paragraphs, we explore possible reasons for these observed differences and discuss their consequences in a practical context. Gender-related factors Various gender-related factors may contribute to the differences observed in absence duration and recovery rate, including medical, biological, personal, family and work-related factors [ 12 ]. We will investigate some of these factors and explore how they could account for the observed delay in recovery for women compared to men. Person-related factors Person-related factors, such as coping style and work attitude, can also play a role in sickness absence. Tamres et al. (2002) conducted a meta-analysis of 50 studies on gender differences in coping and identified 17 coping strategies, which they classified as problem-focused or emotion-focused behaviors [ 28 ]. Problem-focused coping strategies include active behaviors (such as changing the situation, removing the stressor), planning (reviewing possible solutions), seeking instrumental social support directed towards solving problems, and general problem-focused behavior. Emotion-focused behaviors aim to alter the response to the stressor and include seeking emotional support, avoidance, denial, positive reappraisal, isolation, venting, rumination, wishful thinking, self-blame, positive self-talk, and exercise. According to the study, women tend to make more use of coping strategies than men, particularly emotion-focused strategies. Van Rhenen et al. (2008) investigated the role of different coping styles on both the duration and frequency of sickness absence [ 29 ]. They found that both the use of an active problem-solving coping style and seeking emotional support decreased the mean sickness absence duration, with a stronger effect for the problem-solving coping style. Other emotion-based strategies had either no effect (expression of emotions) or a negative effect (avoidance) on sickness absence duration. Conversely, Loset et al. (2018) conducted a survey study to explore differences in attitudes and norms regarding sickness absence and found no significant differences between genders [ 30 ]. Further research is necessary to investigate variations in coping styles throughout the entire period of sickness absence, with particular emphasis on potential differences in coping styles during the initial stage of sickness absence and on changes in coping styles after this initial period. Daily life characteristics Several differences in daily life and occupational characteristics may also influence sickness absence frequency and duration [ 13 , 15 – 17 ]. Women generally tend to spend more time on household tasks and childcare than men. The double burden hypothesis proposes that the combination of different roles, such as being an employee and a parent, can increase stress and consequently increase the risk of sickness absence. The strain associated with having multiple roles can be reflected in perceived work-family conflicts, where the demands of one’s professional role interfere with the family role, or vice versa. 
In a systematic review, Nilsen et al. (2017) found moderate evidence for a positive correlation between work-family conflict and subsequent sickness absence, indicating that the strain from balancing work and family roles can indeed lead to higher levels of sickness absence [ 17 ]. However, the evidence was insufficient to draw conclusions about the role of gender in the prospective association between work-family conflict and subsequent sickness absence. In our study we found that gender differences in particular influence the recovery rate during the first months of sickness absence. Work-family conflict provides a possible explanation for this observed differential recovery. The strain associated with managing different roles may affect the recovery process differently for men and women. For instance, during the initial recovery phase, women might prioritize resuming their family and social roles, while men may focus more on returning to their professional roles. This potential variance in prioritization could account for the lower recovery rates among women during the early stages of sickness absence. Further research is necessary to investigate the relationship between work-family conflict and recovery rates during the complete sickness absence period. Occupational characteristics In addition to daily life characteristics, sector-specific gender representation patterns may also play a role. Women are predominantly employed in the healthcare, social services, and education sectors, whereas men are more commonly found in industries such as construction, manufacturing, information technology, and transportation [ 31 ]. Some studies suggest that higher rates of sickness absence are associated with occupations dominated by women [ 32 , 33 ]. The emotionally demanding nature of jobs in healthcare and social services often entails working directly with patients or clients. Such roles may require a more complete recovery from mental disorders before work can be resumed. In contrast, physically demanding work might offer a distraction from mental health issues, which could partly explain why men might return to work sooner. However, studies about the association between job occupation and sickness absence are not conclusive. In a recent study, Østby et al. (2018) found no evidence that the type of occupation is related to gender differences in sickness absence [ 15 ]. Mastekaasa (2014) even found an increase in gender differences when adjusting for the type of occupation [ 11 ]. Convergence of recovery rates Interestingly, recovery rates for men and women become more similar after the initial three months of sickness absence. This could be due to the natural course of the disease, a change in coping techniques or a re-evaluation of work. More research is needed to investigate the reasons behind this change in recovery rates, which could have significant implications for sickness absence management strategies. Strengths and Weaknesses of the Study The strength of our study is that we could analyze a very large sample of sickness cases over a period of 10 years. Furthermore, we used the occupational health physician’s (OHP) diagnosis, enabling us to investigate sickness absences across various diagnostic categories. A limitation of our study is that we did not examine the impact of other factors that could potentially influence sickness duration and recovery rates. 
These factors include variations in job type, age, socio-economic status, specific diagnoses, overall health status, and the severity of the disorder. To improve understanding of gender differences in sickness absence, further research is needed to investigate the effect of these factors in particular during the first months of sickness absence.
Conclusions Our study found marked gender differences in both sickness absence duration and recovery rates, with a longer sickness absence duration for women than for men across almost all sickness causes. Interestingly, we found that recovery rates for women were considerably lower in the first months, indicating that most women start recovery later than men; there appears to be a kind of delay in the recovery process for women. However, after the initial months, recovery rates for both genders tend to converge. In the initial months of sickness absence, a considerable number of employees are affected, so even small differences during this period can greatly influence the total sickness duration and the corresponding costs. Consequently, it is very important to understand the factors leading to the noticeable delay in recovery for women. There is a need to further explore known factors that may affect the duration of sickness absence, such as coping mechanisms and conflicts between work and family life. Future research might be directed towards understanding how these factors change during the early stages of sickness absence, as well as the differences among genders in these factors during the beginning phase of sickness absence. This understanding can help to develop effective prevention and intervention strategies to minimize recovery delays and reduce the overall period of sickness absence. Implementing these strategies during the initial stage of sickness absence seems likely to be most beneficial.
Purpose Sickness absence is a major public health problem, given its high cost and negative impact on employee well-being. Understanding sickness absence duration and recovery rates among different groups is useful for developing effective strategies to enhance recovery and reduce costs related to sickness absence. Methods Our study analyzed data from a large occupational health service, including over 5 million sick-listed employees from 2010 to 2020, of which almost 600,000 cases were diagnosed by an occupational health physician. We classified each case according to diagnosis and gender, and performed descriptive statistical analysis for each category. In addition, we used survival analysis to determine recovery rates for each group. Results Mean sickness duration and recovery rate both differ significantly among groups. Mental and musculoskeletal disorders had the longest absence duration. Recovery rates differed especially during the first months of sickness absence. For men the recovery rate was nearly constant during the first 1.5 years; for women the recovery rate was relatively low in the first three months and then stayed nearly constant for 1.5 years. Conclusion Across almost all diagnostic classes, it was consistently observed that women had longer average sickness absence durations than men. For mental disorders and diseases of the musculoskeletal system, women had relatively lower recovery rates during the initial months compared to men. As time progressed, the recovery rates of both genders converged and became more similar.
Acknowledgements Not applicable. Authors' contributions ST, as the primary author of the manuscript, took charge of the intellectual concepts, planned the project, handled the statistical analyses, and wrote the manuscript. ND took on the task of mathematically verifying the analyses. CR played a significant role in crafting the manuscript. All authors have reviewed and given their approval to the final version of the manuscript. Funding This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Availability of data and materials The data that support the findings of this study are available from the authors upon reasonable request. The contact person for requests is Sheila Timp, who can be contacted by email at [email protected]. Declarations Ethics approval and consent to participate The Institutional Review Board of the Faculty of Economics and Business at the University of Groningen concluded that ethical clearance was not necessary for this study because the Medical Research involving Human Subjects Act does not apply to studies of anonymized register data. Therefore informed consent to participate and consent for publication were not applicable. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Public Health. 2024 Jan 15; 24:178
oa_package/1c/6f/PMC10789064.tar.gz
PMC10789065
38225675
Introduction/background There is increasing global recognition of and impetus for action to transform food systems towards greater food security, sustainability and better health outcomes [ 34 ], while aspiring to achieve developmental objectives in equity, inclusivity, resilience, and efficiency [ 36 ]. Israel’s food system, however, is not yet adequately prepared to deal with urgent issues (rising food prices, uneven access to affordable and healthy food, and nutritional insecurity) or longer-term ones (a strategic plan for agricultural production, food system sustainability, climate change impacts on food security, geo-political upheavals, etc.) [ 1 ]. The outbreak of the war in October 2023 is expected to aggravate food insecurity, with a current loss of 40% of the agricultural workforce, restricted access to 30% of agricultural lands as they are located in the Gaza envelope, and a price hike in agricultural produce [ 20 ]. Recent works addressing diverse areas in Israel’s food system include food security [ 13 ], food welfare, food prices and economic policy [ 4 ], agricultural production [ 17 ], food waste [ 25 ], food and public health [ 9 , 14 ], environmental impacts and sustainability [ 12 , 30 , 33 ], and food systems and society [ 15 ]. However, there are insufficient academic studies addressing Israel’s food system in an integrated and multidisciplinary manner. Some examples of such studies in the international arena include transitioning food systems toward a circular economy [ 16 ], scaling pathways in food system transitions [ 26 ], and positioning local resilience and global engagements in food system transformations [ 27 ]. This paper, on an expert opinion survey on Israel’s food system, contributes to addressing these gaps. The survey was conducted in 2022 as part of a larger study on the systemic features of Israel’s food system transition, aiming to understand policy gaps and find pathways towards a healthy and sustainable food system. The survey was structured around the findings of initial interviews (not covered here) with food and agriculture practitioners and researchers, which determined the relevant subjects to be covered by the study. The survey ranks the relevance and importance of food system challenges and the policy preferences of respondents. We then discuss the policy implications by examining potential policy priorities, gaps and dissensus raised by the results. Through this, we seek to contribute to policy discussions towards a more integrated, healthy, and sustainable future for Israel’s food system.
Methodology While this paper focuses on the findings of an online survey (n = 50), it benefitted from a set of in-depth interviews (n = 17) carried out shortly before the survey. Both the interviews and the survey sought “expert” respondents, namely Israeli food and agriculture practitioners and researchers in the broad domains of food security, agriculture, health and nutrition, governance and policymaking, food and society, environmental sustainability, and technology. Some interview respondents also completed the survey (the exact number is unknown, as survey respondents could choose to remain anonymous). The study did not select on professional seniority, age, educational qualifications, or number of years in the profession. We searched for respondents on the internet and from contacts known or recommended to us and emailed them the survey links. The email invitation also requested that the survey be forwarded to others in the field of food and agriculture, thereby initiating a respondent-driven snowball sample [ 24 ]. The interview findings and relevant literature were used to build the survey questions. The survey comprises mainly quantitative Likert-scale questions, each followed by an open-ended answer box for optional additional comments. Participants could choose to complete the online survey in English or Hebrew. To analyse the findings, we used a 5-point Likert scale for both unipolar and bipolar questions. Unipolar questions give a range of options based on the degree of a single characteristic, for instance ranging from “not important at all” to “very important”. Bipolar questions have two opposite ends such as “strongly disagree” and “strongly agree” with a neutral option in between. The responses were weighted for the bipolar questions (strongly agree—5 points; agree—4; neutral—3; disagree—2; strongly disagree—1), and for the unipolar questions (very important—5 points; important—4; moderately important—3; slightly important—2; not important at all—1). The total scores were then ranked in descending order and presented in charts (Figs. 3–10); a minimal sketch of this scoring appears at the end of this section. The open-ended responses were examined through thematic analysis to identify additional relevant topics raised by the respondents, such as education, sustainable agriculture, and resilience in the food system (elaborated below). The characteristics of the respondents are shown in Fig. 1 a–f. The respondents were quite evenly spread across the four sectors—public and private sectors, NGOs, and academia (Fig. 1 a). Respondents came from a variety of fields; the three largest were “food, health and nutrition” (24%), and “public policy and economics” and “agriculture” (20% each) (Fig. 1 b). They were roughly equally divided by gender (Fig. 1 c), but Jewish respondents were over-represented (86%, compared to their 76% share of the national population). Arabs were very under-represented (2%, compared to their 21% share of the national population) (Fig. 1 d). There could be several reasons for this. One is that the survey was not translated into Arabic. It can be noted that out of the 50 survey responses, only 5 respondents chose to answer in English while 45 chose the Hebrew survey. Another reason could be that there are fewer Arab experts in food and agriculture. In our search for Arab interview respondents, we also encountered difficulties finding any, although there were Jewish respondents who worked extensively with the Arab population. 84% of the respondents have postgraduate degrees (Fig. 1 e), which may indicate that many are professionals. 
This aligns with the survey’s intention to seek informed responses through practitioners and researchers in food and agriculture. 78% of the respondents worked completely or to a large extent on food and agriculture issues (Fig. 1 f), which indicates a high level of familiarity with the topic.
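As a concrete illustration of the Likert scoring scheme described above, the following is a minimal sketch of the weighted ranking; the variable names and the response container are illustrative assumptions, not the study's actual analysis code.

BIPOLAR_WEIGHTS = {"strongly agree": 5, "agree": 4, "neutral": 3,
                   "disagree": 2, "strongly disagree": 1}
UNIPOLAR_WEIGHTS = {"very important": 5, "important": 4, "moderately important": 3,
                    "slightly important": 2, "not important at all": 1}

def ranked_scores(responses_by_item, weights):
    """responses_by_item maps each survey item to a list of response labels."""
    totals = {item: sum(weights[label] for label in labels)
              for item, labels in responses_by_item.items()}
    # Rank items by total weighted score in descending order, as in Figs. 3-10.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)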
Results Overall opinion on food policies, prices and food security Figure 2 a–c shows respondents’ opinions on three facets of the system: the sufficiency of Israel’s food policies, the price of food and the state of food security. There is majority agreement (76%) that Israel’s food policies are lacking or severely lacking. Similarly, 76% think that the price of food in Israel is high or very high. The responses on the state of food security are more divided, with 64% thinking that Israel is not food secure or not food secure at all, and 36% of the opinion that it is secure or very secure. Opinion on food security Respondents related strongly to both concepts of nutritional security (at individual and household levels) and national food security. Figure 3 shows that 90% of respondents deem access to nutritious food relevant or highly relevant. More than 80% perceive food security as part of national security and consider having sufficient food during times of crisis relevant or highly relevant. In comparison, just over 60% think that Israel’s ability to import food is relevant or highly relevant. This is surprising because Israel imports 55% of its caloric food supply [ 18 ], with high import dependency for food items such as cereals and cereal products (97% imported), fish (91%), and legumes, peanuts and nuts (75%) ([ 7 ]: 19). In the open-ended responses, several argued for the importance of local food production for food security. One such response was, “Food security is the state's ability to provide a supply of quality and healthy food at fair prices for every person over time. Looking ahead, maintaining the autonomy of food production cannot be avoided as a condition for ensuring food security”. Opinion on the state of agriculture in Israel Respondents overwhelmingly recognize the benefits of Israeli agriculture, with 60–90% agreeing or strongly agreeing that it benefits food security, economic value, national identity and more (Fig. 4 ). A few open-ended responses noted that local agriculture contributes to stability in the food system, as one commented, “Food policy must account not only for the current price of food but also the risks of food systems in the world, including increasing competition for food, risks from climate change, and damage to agricultural land and ecosystems”. Another respondent emphasized the reality that “Israeli agriculture cannot guarantee food security under any conditions, except for vegetables and fruits”. Opinions on the disadvantages of agriculture are more divided, with about 40–55% of responses disagreeing or strongly disagreeing with statements that agriculture consumes too many resources, does not pay off economically or is polluting (Fig. 5 ). In the open-ended answers, respondents suggest that the negative effects of agriculture can be mitigated, and that it is the kind of practices implemented that determines whether agriculture is environmentally detrimental, not the existence of agriculture itself. Others suggest there is scope for improving the environmental performance of agriculture in Israel, including a transition to sustainable agriculture, examining effects and policies on soil pollution and chemical pesticides, and animal agriculture. On the economic value of agriculture, one noted that agriculture is “a political resource [that] cannot be measured based on national profit/loss”. Figure 6 shows ranked opinions on the different facets of the importance of agriculture in Israel. 
The questions juxtapose various priorities vis-à-vis domestic agriculture, such as food affordability and the role of food imports in food security. 74% disagree or strongly disagree that lower food costs are more important than protecting local agriculture, and 64% disagree or strongly disagree that food imports should play a bigger role in food security. This could signal a lack of agreement with the way the agricultural reforms were implemented or with their goals of increasing competition and lowering the cost of food by increasing imports. The reforms were announced in July 2021, with the first tranche, abolishing import taxes on selected fruits and vegetables, implemented in 2022 [ 22 ]. Notably, 88% disagree or strongly disagree that local agriculture is less important for food security today than in the past. That is, the experts seem to strongly disagree with the recent market-oriented policies, mainly advanced by the Treasury to lower consumer prices. In the open-ended responses, several disagreed with framing food affordability and local agriculture as opposing goals, as one stated, “It is wrong to pit local agriculture against the price of food. Promoting and strengthening agriculture should lead to more stable and less expensive food systems. The state should invest in agriculture to strengthen its benefits, but not at the expense of higher prices”. Opinion on Israel’s food system Figure 7 shows that the top-ranked problems (over 90% agreeing or strongly agreeing) are the lack of national goals and strategic planning, and a lack of integrated policymaking across ministries. While these two are overall systemic problems, the rest are specific ones such as food waste, costly farming inputs, and food affordability. Most of these issues are perceived as problems; only two received less than 50% affirmative responses: food supply during a crisis and farmers being subject to too many regulations. Other than the problems stated in the survey, a respondent also added, “There is no organized database in agriculture, what is grown in what quantities, where is it grown, what will be harvested and when; there is no information for both the farmers and the state to plan and prepare”. Another commented on how available resources are not efficiently used, “Only 60% of the land suitable for agriculture is cultivated, so there is no shortage of land; water is not lacking—these are administrative issues”. Opinion on food policies Integrative policymaking again features as the most important consideration in food policies (Fig. 8 ). The highest-ranked specific issues relate to strengthening national food security (more than 90% deem it important or very important), reducing nutritional insecurity (85%), helping people make healthy food choices (72%), and reducing the environmental impact of food production (77%). Socio-cultural issues may be less recognized, as the inclusion of diverse voices in food policy is ranked relatively low (53%). Figure 9 on policy preferences reiterates the key relevant policy areas identified in Fig. 8 . An overwhelming 97% prefer or highly prefer a national strategy for food and agriculture. Affordability of food, sustainable agriculture, support for agriculture, and nutritional security remain important policy preferences. 50% prefer or highly prefer the policy of diversifying food import sources for food security. 
This represents a divided opinion over food imports as a food security policy (despite Israel’s high dependency on food imports). Interestingly, just 35% prefer or highly prefer limiting selected food imports to protect the domestic market. This seems to contradict the result in Fig. 6, where 64% disagree or strongly disagree that food imports should play a bigger role in food security. A possible interpretation of this contradiction is that most do not want a growing dependence on food imports, but neither do they want to use tools of market protectionism to achieve that end. Alternatively, this could reflect that food imports (their role, extent, and policy approaches) are a policy grey area whose implications are not well understood. Opinion on food (in)justice The top issues of food (in)justice perceived by respondents are intermediary profits (78% think they are relevant or highly relevant) and the dominance of large corporate players (78%). These are closely followed by equitable access to nutritious food (Fig. 10 ). Issues concerning minority groups appear much lower on the priority list: just over 50% think that access to land and water for people whose livelihoods depend on agriculture/herding is a relevant or highly relevant issue. Notably, just over 40% deem greater cultural sensitivity to minority groups relevant or highly relevant. This finding resonates with the relatively low ranking of the perceived importance of including diverse voices in food policy (Fig. 8 ). This may suggest that socio-culturally responsive food policies are lacking and that significant challenges remain in achieving greater inclusivity, as the data reflect a consistent pattern in which minorities such as Ultra-Orthodox Jews (15.8% in food insecurity in 2021) and Arabs (42.4%) have much higher rates of food insecurity than non-Ultra-Orthodox Jews (10.7%) [ 8 ]. This could also reflect the composition of the respondents, who were largely not from minority groups. Other themes from the open-ended responses Some respondents offered additional insights or framed problems differently in the open-ended responses. These opinions relate to the long-term considerations of the food system. We summarize three such additional themes: Education Respondents mentioned two aspects of education: first, education as a necessary factor to strengthen and renew the domestic agricultural sector. One mentioned that “agriculture should be considered as education not just as another economic sector”. Another felt that “agricultural education should be added to the schools. Something that will allow the students to connect with the subject and perhaps produce the farmers of the future”. The second aspect of education relates to consumer awareness of sustainable consumption. One respondent commented that “we need to do both—both lower prices and develop local agriculture and above all educate the public on economical consumption and waste prevention”. On tackling food waste as part of sustainable consumption, one noted that “food waste is a huge phenomenon that needs to be eradicated. Reducing food waste will contribute to the local economy, stabilizing the food system and its resilience, [...] lower the cost of living and national expenditure on food”. Resilience Respondents relate to various aspects of stability and resilience (the ability to reduce and cope with system vulnerabilities [ 28 ], and recover from adverse events). 
Several respondents questioned the assumption that food affordability and local production are conflicting goals, arguing instead that they can go together through policy planning, as one said, “It is of great importance to create stability in the food chain through local agriculture”. Another remarked, “The notions that strengthening agriculture is at the expense of the price for the consumer, and reducing prices comes at the expense of local agriculture are outdated and oversimplified. The fact is that a strong agricultural sector is important to the country and strengthens the economic system”. These comments touch on the economic stability of food prices for consumers and the stability of the Israeli agricultural sector. Some respondents also frame issues in terms of economic and environmental risks, as one noted, “Food policy must consider not only the current price of food but also food systems risks in the world, including increasing competition for food, risks from climate change, and degradation of agricultural land and ecosystems.” These respondents prioritize longer-term perspectives by factoring in resilience and risks, and see the importance of investing in domestic agriculture for future stability in supply and prices. Sustainable agriculture as the future Sustainability in the food system refers to the ability to deliver food security and nutrition to all, using social, economic and environmental resources in ways that do not compromise the ability of future generations to do so [ 6 , 11 ]. Several respondents raised the importance of shifting to sustainable agriculture, in part to deal with the negative effects of agriculture (raised in the survey), and in part to improve future practices. A respondent remarked, “Care should be taken to shift agriculture to sustainable practices, but in no way to abolish it”. Another noted that “local agriculture is important for preserving the environment and sustainability values. It is not at all clear that Israeli agriculture in its current state is there”.
Discussion This section discusses the policy implications of the findings in three areas: policy priorities, policy gaps and policy grey areas. Policy priorities The survey sought to rank the importance of food system issues as perceived by the respondents, and from these rankings to extrapolate possible policy priorities. We identified three top policy concerns. First is the need for integrative policies cutting across diverse areas of the food system, including food and nutritional security, public health, food welfare and equity, food economics and supply chains, sustainable agriculture, and food waste [ 23 , 36 ]. Secondly, a strategic and long-term vision is needed for policy prioritization. This helps to differentiate issues that are important or urgent, or both; it also brings a time perspective on immediate and long-term goals. Evaluating these two dimensions can support clearer policy prioritization and ensure that short-term goals or policy low-hanging fruit (for instance, lowering food prices with food imports) do not compromise long-term values and goals (such as the viability of domestic agriculture). More elaborate guidance on policy prioritization can be found, for instance, in Taeihagh et al. [ 32 ], who advance criteria such as expected cost, effectiveness, timescale for implementation, technical and institutional complexity, and public acceptability as measures for setting priorities. The dual goals of food affordability and strong domestic agriculture, both perceived as important yet seemingly contradictory under current conditions, should not just be a balancing act where one prevails at the expense of the other. Some respondents point out that policy should support shifts in both areas so that domestic agriculture supports affordable and healthy food while boosting food and nutritional security. Adding a third goal, sustainable agriculture (highly prioritized by respondents), further increases the complexity with additional trade-offs. A policy roadmap would be needed to integrate these important goals and seek their synergies in order to shift knowledge, practices and technologies [ 5 , 37 ]. To this end, a systemic national plan, a policy action strongly supported by the respondents, would help to guide policy strategies and interventions at all stages of the food production and consumption chains. Policy gaps Based on the survey, we identify four key policy gaps: the lack of resilience and stability in the agriculture and food system (addressed above); insufficient data and knowledge for policy action; inadequate food policy attention for vulnerable groups; and the need to reform the food industry for better health, equity and sustainability outcomes. There is insufficient timely and accurate data and knowledge to support decision-making by policymakers and agriculture practitioners. There is a need to (re)establish a data infrastructure that includes food system performance and risk indicators (for economic, food security and environmental risks) to measure and enhance the system’s resilience. While comprehensive databases existed in the past, when central planning was in vogue in the agricultural sector, the data needed today differ widely from those collected then. There needs to be greater food policy attention for vulnerable groups [ 8 , 29 ]. An example is the monitoring of micronutrient consumption in population groups such as the elderly, pregnant women, infants and children, and other socio-demographic groups at risk of under- or malnutrition. 
Again, this needs to begin with data and knowledge building. In addition to quantitative data, ethnographic knowledge can bring in cultural perspectives on food production and consumption practices upon which to build culturally sensitive and inclusive policies. Finally, more policy attention is needed for restructuring and regulating the food industry for better health, equity and sustainability outcomes. This was insufficiently covered in the survey but emerged in the interviews. Issues pertinent to Israel include market consolidation by large players; shifting production towards healthy food and away from ultra-processed food; and addressing decades-long policy inaction on food fortification [ 9 , 31 ]. Policy grey areas and dissensus Our findings show where there are policy grey areas or dissensus. There is an apparent divide over whether Israel is food secure (Fig. 2 c). This may be attributed to differences in the way respondents relate to the multifaceted concept of food security. For instance, the Food and Agriculture Organization [ 10 ] delineates different dimensions of food security (availability, accessibility, utilization, and stability), while Berry et al. [ 6 ] argue for an additional long-term (inter-generational) dimension of “sustainability”. Specific to Israel, a recent Knesset document, ‘Food system security in Israel’, mapped the definitions of food security used by different government ministries [ 21 ]. This document identifies two broad spheres of definitions of food security, roughly corresponding to the spheres of government action. The first is “national food security”, which deals with the sufficiency of food production and supply at the national level. The second is “nutritional security”, which addresses the ability of individuals and households to access and purchase food that meets their nutritional needs for optimal well-being (similar to the USDA definition; USDA website). In Hebrew, the two terms are often used interchangeably in the general literature. The survey did not delve into how respondents define food security, but the different perceptions of whether Israel is food secure may stem from the ways respondents relate to the different dimensions of food security, their respective disciplinary lenses, or even from relating to different time scales of current versus future food security. Another area of disagreement is over the disadvantages of agriculture in Israel. This may relate to considerations of which agricultural practices should be used to incorporate environmental, political and socioeconomic concerns, as well as how the organization, practices and technologies of agriculture should change in the future. A related policy grey area is the strategy for food imports, broaching questions of how to balance food imports and domestic production, and whether Israel should maintain production capacity for a core or essential selection of food crops. Amdor (2022, 2023), for instance, investigated 23 core food items consumed in Israel by comparing their environmental impacts if produced in Israel or imported. Other works measuring the environmental impacts of food trade concerning Israel [ 12 , 30 ] can also be instrumental in making decisions about food imports. Study limitations This survey offers a preliminary assessment of the challenges, potentials and policy implications as perceived by experts in the field. 
For a more thorough examination of the topic, this research should be complemented by more in-depth qualitative research (for instance, interviews and focus group discussions) and quantitative research (including comprehensive, cross-population and targeted population studies). This study is also limited by its underrepresentation of Arab respondents and by its inability to provide a more nuanced picture of the opinions of minority groups such as the Ultra-Orthodox and the Bedouin.
Conclusions Implementation recommendations The survey did not address how its results could be practically implemented in formulating a national policy for sustainable food systems. An example of such an implementation challenge was the distribution of food aid during the recent COVID-19 pandemic: some eleven government ministries were involved, in addition to many NGOs, which was both costly and inefficient. Building on this study's findings, further policy research and implementation areas to be covered include government responsibility for universal food security, strategic systemic policies for food systems, prevention and preparedness for future crises, including climate change, and promoting resilience. The way forward may best be through an inter-ministerial committee with the responsibility, necessary budgets, mandate and executive authority to plan data-driven policies and priorities to ensure a sustainable food system for the future of Israel. A first step in this direction may be seen in the recent report of the national committee on food systems adaptation and mitigation of climate change [ 19 ].
Background While there has been increasing global recognition and impetus for action to transform food systems towards greater food security, sustainability and better health outcomes, Israel has only recently begun to focus on the diverse challenges of its food system and its potential for transformation. Methods An expert opinion survey (n = 50) on Israel's food system was conducted as part of a larger study on the systemic features of Israel's food system transition to understand its policy gaps and find strategies towards a healthy and sustainable food system. The survey ranks the relevance and importance of food system challenges and policy preferences. Policy implications are then examined by identifying potential priorities, gaps and dissensus. Results The survey finds that there is majority agreement (76%) that Israel's food policies are lacking or severely lacking. Respondents relate strongly to both concepts of nutritional security (90% think that access to nutritious food is relevant or highly relevant) and national food security (more than 80% perceive food security as part of national security). Respondents overwhelmingly recognize the benefits of Israeli agriculture, with 60–90% agreeing or strongly agreeing that it contributes to food security, economic value and national identity. Top-ranked problems include overall systemic problems, such as the lack of national goals, strategic planning, and integrated policymaking across ministries, and specific ones, such as food waste, costly farming inputs, and food affordability. The most preferred policy actions include establishing a national strategy for food and agriculture, making food affordable for vulnerable households, and incentivising sustainable farming methods. The key policy gaps include the lack of resilience in agriculture and the food system, insufficient data and knowledge for policy action, inadequate attention to the regulation of the food industry for better health, and inadequate food policy attention for minority groups. Conclusions Building on this study's findings, further policy research and implementation areas to be covered include government responsibility for universal food security, strategic systemic policies for food systems, prevention and preparedness for future crises, and promoting resilience. The way forward may best be through an inter-ministerial committee with the responsibility, budgets, mandate and executive authority to plan data-driven policies for a sustainable food system for Israel's future.
Acknowledgements We thank the survey respondents for sharing their valuable insights. Author contributions ES was involved in the conception and design of the work, analysis and interpretation of data, and has drafted the work and revised it. EMB was involved in the interpretation of data, and revision of the work. EF was involved in the conception and design of the work, interpretation of data, and revision of the work. Funding This paper was partially supported by the Israel Academy of Sciences and Humanities. Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-16 23:45:34
Isr J Health Policy Res. 2024 Jan 15; 13:4
oa_package/2c/ba/PMC10789065.tar.gz
PMC10789066
38221616
Introduction Prostate cancer is the most prevalent cancer in males worldwide and the second leading cause of cancer-related death in men [ 1 ]. Due to the paramount role of androgen receptor (AR) signaling in the regulation of prostate cancer cell growth and survival [ 2 – 4 ], androgen deprivation therapy (ADT), which inhibits AR signaling, has until recently been the standard treatment for early-stage and metastatic prostate cancer [ 5 , 6 ]. Unfortunately, after 18–24 months of ADT, including treatment with the recently developed potent antiandrogen enzalutamide (Enz), most patients eventually relapse and develop castration-resistant prostate cancer (CRPC) [ 7 – 9 ]. In addition to endocrine therapy, new drugs [ 11 , 12 ] have been investigated and developed in recent years, and chemotherapy can be used as an alternative treatment for patients who have failed endocrine therapy [ 10 ], although its efficacy for CRPC patients is still poor. Moreover, there are currently no curative treatment options for metastatic castration-resistant prostate cancer (mCRPC), and its prognosis is dismal [ 13 ]. Therefore, it is imperative to elucidate the underlying mechanisms of castration resistance after ADT in prostate cancer. The tumor microenvironment (TME) is a highly complex system composed of tumor cells and stromal cells, reflecting the multifaceted nature of malignant tumors [ 14 ]. Traditionally, the primary focus in comprehending carcinogenesis has been the tumor cell and its underlying mechanisms [ 15 ]. However, dynamic cross-talk between cancer cells and stromal cells is also crucial for cancer progression [ 16 – 18 ]. Cancer-associated fibroblasts (CAFs), which are activated fibroblasts, are components of the tissue microenvironment [ 19 ]. The stromal-to-tumor interaction is largely influenced by CAFs, which stand as the most prominent stromal component within the TME [ 20 ]; they can secrete cytokines, chemokines, and growth factors that exert direct and indirect effects on tumorigenesis, proliferation, progression, and invasion of cancer cells [ 21 – 23 ]. At the same time, novel cancer therapies are being developed, with mechanisms ranging from interference with tumor metabolism to inhibit malignant progression [ 24 ] to targeted nanomedicine delivery systems [ 25 , 26 ]. Moreover, progress in drug delivery using nanoparticles has led to substantial improvements in the efficiency of delivering drugs to disease sites, consequently greatly enhancing therapeutic effectiveness [ 27 – 30 ]. In contrast to traditional endocrine therapy drugs, nanoparticles efficiently direct anti-cancer medications towards metastatic sites of prostate cancer while minimizing adverse effects on the host [ 31 , 32 ]. In the development and application of these therapies, CAFs exhibit important activities. As with the diversity in CAF origins, the heterogeneity in CAF fate and function has received great attention and has led to the possibility of targeting a subpopulation of CAFs to combat cancer. The common classification of CAFs is myofibroblastic CAFs (myCAFs) and inflammatory CAFs (iCAFs) [ 33 , 34 ]. Antigen-presenting CAFs (apCAFs), which present MHC class II-restricted antigens and activate CD4 + T cells, thereby demonstrating antigen-presenting capacity, were first identified in pancreatic cancer with the aid of single-cell transcriptomics [ 35 ].
Nevertheless, the comprehensive characteristics of apCAFs and their associations with prognosis and immunotherapy response in prostate cancer remain inadequately understood. In this study, we utilized single-cell RNA-sequencing (scRNA-seq) data and transcriptome data to identify subclusters of antigen presentation and process-related CAFs (APPCAFs) and develop an APPCAF-associated risk signature for prostate adenocarcinoma (PRAD). In addition, we analyzed the clinical characteristics associated with the APPCAF signature and investigated the immune landscape and immunotherapy responsiveness. Furthermore, using the risk score and clinicopathological characteristics, we developed a prognostic nomogram to analyze the correlation between APPCAF characteristics and PRAD prognosis. Our findings provide novel microenvironmental insight into the pathophysiology of PRAD and may provide ideas and approaches for treating prostate cancer.
Materials and methods Isolation and transcriptome sequencing of primary CAFs Tumor tissues were collected from patients with prostate cancer who underwent radical prostatectomy at the Department of Urology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine. The methodology and procedures for isolating primary CAFs from prostate cancer tissues were previously described [ 38 ]. A total of 12 primary CAF samples were obtained from 12 patients. Among them, 6 patients did not receive any treatment within the 3–6 months before radical prostatectomy, while the remaining 6 patients underwent androgen deprivation therapy with abiraterone and leuprorelin before the surgery. Subsequently, RNA sequencing analysis was conducted on 6 pre-ADT and 6 post-ADT primary CAFs using the switching mechanism at the 5' end of RNA template (SMART) technology. DEG analysis and enrichment analysis The DEG analysis was performed on the SMART sequencing results using the "limma" package [ 39 ]. The selection criteria for DEGs were set as |logFC| > 1 and p value < 0.05. Subsequently, we conducted an enrichment analysis on the DEGs by utilizing the GO and KEGG databases. Data collection and processing The scRNA-seq data utilized in this study were obtained from the GSE141445 dataset available in the GEO database [ 40 ]. After an initial integration of the samples, we generated gene expression and phenotype matrices covering 36,424 single cells. Additionally, bulk RNA-seq data with clinical characteristics were downloaded from both the TCGA database and the GEO database. The study encompassed three distinct cohorts, namely, TCGA-PRAD, GSE116918 [ 41 ], and GSE70769 [ 38 ]. To ensure robust data quality, we selected patients with clearly defined biochemical recurrence outcome information, considering a minimum follow-up duration of 30 days. The final analysis included a total of 419 samples from TCGA-PRAD, 248 samples from GSE116918, and 90 samples from GSE70769. Visualization of major cell types and subtypes in PCa Using the "Seurat" package [ 42 ], a Seurat object was generated based on scRNA-seq data from the GSE141445 dataset. Initially, cells with more than 4000 or fewer than 200 detected genes, as well as cells with high levels of mitochondrial gene expression (pctMT > 15%), were excluded. Subsequently, the top 2000 variable genes were selected for data normalization using the FindVariableFeatures function in the Seurat package. We applied the ScaleData and RunPCA functions to the normalized data for principal component analysis (PCA). Dimensionality reduction and visualization of the data were achieved using the t-distributed stochastic neighbor embedding (t-SNE) and uniform manifold approximation and projection (UMAP) methods. Finally, cell annotation and visualization of major cell types or subtypes were conducted based on the expression of specific marker genes for different cell types. Nonnegative matrix factorization (NMF) of APPRGs in CAFs A gene set comprising 431 antigen processing and presentation-related genes (APPRGs) was obtained from the InnateDB database ( https://www.innatedb.com ) (Table S1). To further investigate the CAFs involved in antigen processing and presentation, we employed the NMF algorithm [ 43 ] to perform dimensionality reduction analysis on the 431 APPRGs in CAFs. Different cell types within CAFs were determined based on the scRNA expression matrix.
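For readers who want a concrete picture of the quality-control, dimensionality-reduction, and NMF steps described above, the following is a minimal R sketch. It is not the authors' actual pipeline: the input objects `counts` (a raw count matrix) and `appr_genes` (a character vector of the 431 APPRGs) are hypothetical names, and only the stated thresholds (200–4000 detected genes, pctMT < 15%, 2000 variable genes, Brunet NMF) are taken from the text; the number of principal components is an assumption.

```r
library(Seurat)
library(NMF)

# --- Quality control and dimensionality reduction (thresholds from the text) ---
seu <- CreateSeuratObject(counts = counts)                  # 'counts': hypothetical raw count matrix
seu[["pctMT"]] <- PercentageFeatureSet(seu, pattern = "^MT-")
seu <- subset(seu, subset = nFeature_RNA > 200 & nFeature_RNA < 4000 & pctMT < 15)
seu <- NormalizeData(seu)
seu <- FindVariableFeatures(seu, nfeatures = 2000)          # top 2000 variable genes
seu <- ScaleData(seu)
seu <- RunPCA(seu)
seu <- RunUMAP(seu, dims = 1:30)                            # number of PCs is an assumption
seu <- RunTSNE(seu, dims = 1:30)

# --- NMF over the APPRGs within the CAF subset ---
caf  <- subset(seu, idents = "CAF")                         # assumes CAFs were annotated beforehand
expr <- as.matrix(GetAssayData(caf, slot = "data"))         # log-normalized, non-negative values
expr <- expr[intersect(appr_genes, rownames(expr)), ]       # 'appr_genes': hypothetical APPRG vector
expr <- expr[rowSums(expr) > 0, ]                           # NMF requires rows with some signal
fit  <- nmf(expr, rank = 7, method = "brunet", nrun = 30, seed = 123)
caf$nmf_cluster <- predict(fit)[colnames(caf)]              # one NMF subtype label per cell
```

In practice the rank would be chosen by scanning several values of k and comparing cophenetic correlation coefficients, in the same spirit as the bulk-level NMF clustering the authors describe below.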
Identification of the DEGs and characteristics of APP-related CAF subtypes in PCa Differential gene analysis was conducted using the FindAllMarkers function, employing filtering criteria of |logFC| > 1 and adjusted p value < 0.05, to identify genes that exhibited significant differences between CAF subtypes and thus warranted further investigation. Subsequently, the CellChat package [ 44 ] was utilized to infer and analyze intercellular communication. The netVisual_circle function was employed to visualize the strength of the cell‒cell communication networks from the APP-related CAF subtypes, as source cell clusters, to the various other cell clusters. Finally, the AddModuleScore function was applied to calculate feature scores for the APP-related CAF subtypes based on their characteristic genes. NMF clustering identification of subtypes of APPCAF-related genes The correlation between APPCAF-related genes and BCR was evaluated in TCGA-PRAD samples using univariate Cox regression analysis. Subsequently, the mRNA expression matrix of APPCAF-related genes was collected from the TCGA-PRAD dataset. The "NMF" package in R was utilized, and the Brunet method was applied to perform NMF clustering. We determined the optimal value of k, representing the number of clusters, by considering cophenetic correlation coefficients and silhouette scores. Analysis of the characteristics of clusters based on APPCAF-related genes Differential expression analysis was conducted on distinct clusters of APPCAF-related genes in TCGA-PRAD samples using the "limma" package. Subsequently, the ssGSEA function in the "GSVA" package was utilized to evaluate the disparities in immune cell infiltration among the clusters of APPCAF-related genes [ 45 ]. Additionally, the GSVA package was employed to perform GO and KEGG enrichment analysis on the DEGs identified between the clusters of APPCAF-related genes. Development and validation of a risk signature based on APPCAF-related genes To mitigate overfitting, we employed the "glmnet" package to perform LASSO regression analysis. Utilizing the APPCAF-related genes associated with prognosis, we constructed an APPCAF-related signature (APPCAFRS). Subsequently, we utilized the "predict" function in R to assign risk scores to each sample in the TCGA-PRAD cohort. Based on the median risk score, the samples were categorized into low-risk and high-risk subgroups. Moreover, employing the median risk score derived from the TCGA-PRAD cohort, we stratified PCa patients from the GSE116918 and GSE70769 cohorts into high-risk and low-risk subgroups. Finally, we conducted the following analyses in the three PCa cohorts: (1) Kaplan‒Meier analysis was performed to assess the survival differences between the high-risk and low-risk subgroups; (2) the "pheatmap" package was utilized to visualize the expression levels of genes in the APPCAF-related signature and the distribution of outcomes in the cohorts; and (3) univariate and multivariate Cox regression analyses were employed to evaluate the independence of the risk score and clinical features as prognostic factors. Establishment of a nomogram and comparison of clinical features between low- and high-risk patients To predict the 1-year, 3-year, and 5-year biochemical recurrence-free survival (BCRFS) in PCa patients, a nomogram was developed.
Subsequently, to assess the performance of the nomogram, a calibration curve was generated by comparing the predicted probabilities with the observed outcomes at 1 year, 3 years, and 5 years, thereby gauging its accuracy. Additionally, a chi-square test was conducted to examine the associations between the APPCAF-related signature (APPCAFRS) and clinical features such as T stage, Gleason grade, and PSA level. Investigation of immune-related differences based on the APPCAF-related signature To explore the immune-related differences associated with the APPCAFRS, we examined the variations in the tumor immune microenvironment between low-risk and high-risk subgroups. Initially, the CIBERSORT algorithm was applied to calculate the infiltration composition of 22 immune cell types in each PCa sample, allowing for an analysis of the changes in immune cell infiltration between the low-risk and high-risk subgroups [ 46 ]. Subsequently, the activity differences of immune-related pathways between the high-risk and low-risk subgroups were assessed using the ssGSEA function in the "GSVA" package, with the Wilcoxon rank-sum test employed for analysis. Exploring immunotherapeutic responsiveness and potential drug treatments To pinpoint patients who could potentially benefit more from immune checkpoint inhibitor (ICI) therapy, we conducted a correlation analysis between the risk score and immune checkpoint-related genes. Subsequently, the Tumor Immune Dysfunction and Exclusion (TIDE) tool was applied to assess the immune treatment response in prostate cancer patients, evaluating the responsiveness to immune therapy between the high-risk and low-risk subgroups using a chi-square test. Moreover, we utilized the "oncoPredict" package to predict drug sensitivity between the high-risk and low-risk subgroups. Cell culture The human benign prostate stromal cell line (WPMY-1) was obtained from the Cell Bank of the Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences. WPMY-1 cells were cultured in DMEM containing 5% fetal bovine serum and 1% penicillin/streptomycin. All cells were cultured at 37 °C in a 5% CO2 environment. Validation of feature gene expression by in vitro qRT‒PCR To further validate the APPCAFRS, we conducted qRT‒PCR analysis to assess the expression of characteristic genes in normal prostate stromal fibroblasts (WPMY-1), prostate cancer-associated fibroblasts (hTERT PF179T CAF), and pre-ADT and post-ADT CAFs. The ADT of CAFs was performed following a previously described method [ 47 ]. RNA was extracted from the cells using TRIzol reagent (Takara, Japan), and cDNA was synthesized using a reverse transcription kit (Vazyme, China). Real-time PCR was carried out using the SYBR Green method to quantify the expression of target genes. The primer sequences utilized in this study can be found in Table S2. CD4 + T-cell early activation assay After primary CAFs were sorted, 10 nM DHT and 10 nM ETOH were applied to simulate androgen intervention, and the cells were cultured in vitro. A total of 1250–2500 sorted CAFs were cultured with 25 μg/ml OVA peptide 323–339 or without a peptide in U-bottom 96-well plates and incubated at 37 °C and 5% CO2. PBMCs corresponding to primary CAFs were used to isolate and enrich CD4 + naïve T cells using the MojoSort Human CD4 Naïve T-Cell Isolation Kit (Biolegend #480041). The CAF plates were washed twice, and 2500 CD4 + T cells were cocultured in 10% FBS/DMEM per well for 17 h.
Cells were then washed, blocked, and stained with the following antibodies (all from Biolegend at 1:200): CD4 (Clone RPA-T4), CD25 (Clone PC61.5), and CD69 (Clone FN50) for 30 min at 4 °C.
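As a concrete companion to the risk-signature steps described above (LASSO Cox regression with glmnet, median-split risk groups, Kaplan‒Meier comparison), here is a minimal R sketch under stated assumptions: `expr_mat` (a samples × genes matrix of the prognostic APPCAF-related genes), `bcr_time`, and `bcr_status` are hypothetical inputs, not objects from the study.

```r
library(glmnet)
library(survival)

# Hypothetical inputs: expr_mat (samples x genes), bcr_time, bcr_status (0/1 event indicator)
y <- Surv(bcr_time, bcr_status)   # recent glmnet accepts a Surv response for family = "cox";
                                  # older versions need a two-column 'time'/'status' matrix

set.seed(123)
cvfit <- cv.glmnet(expr_mat, y, family = "cox", alpha = 1, nfolds = 10)  # LASSO, tenfold CV

# Genes with nonzero coefficients at the selected penalty form the signature
beta      <- as.matrix(coef(cvfit, s = "lambda.min"))
signature <- rownames(beta)[beta[, 1] != 0]

# Risk score = linear predictor; a median split defines the risk groups
risk  <- as.numeric(predict(cvfit, newx = expr_mat, s = "lambda.min", type = "link"))
group <- factor(ifelse(risk > median(risk), "high", "low"))

# Kaplan-Meier curves and log-rank test between the two groups
km <- survfit(Surv(bcr_time, bcr_status) ~ group)
lr <- survdiff(Surv(bcr_time, bcr_status) ~ group)
```

Note that choices such as lambda.min versus lambda.1se, and whether validation cohorts are split at their own median or at the training median (the authors apply the TCGA-PRAD median to GSE116918 and GSE70769), materially affect the resulting risk groups.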
Results Isolation and transcriptomic profiling of CAFs from patients with PCa Figure 1 depicts the framework and experimental procedures of the entire study. To elucidate the differential gene expression profiles between the pre-ADT and post-ADT groups, transcriptome sequencing was performed. Based on our transcriptome sequencing data, a total of 281 differentially expressed genes (DEGs) were identified between the pre-ADT and post-ADT groups, including 134 upregulated genes and 147 downregulated genes (Fig. 2 A). The heatmap of DEGs revealed distinct clustering of samples from the pre-ADT and post-ADT groups (Fig. 2 B). Subsequently, enrichment analysis was conducted on the identified DEGs. The DEGs were significantly enriched in GO-BP terms, such as antigen processing and presentation processes (Fig. 2 C); GO-CC terms, such as MHC class II protein complex and transport vesicle (Fig. 2 D); GO-MF terms, such as MHC class II protein complex binding (Fig. 2 E); and KEGG signaling pathways, such as intestinal immune network for IgA production (Fig. 2 F). These results indicated a significant decrease in CAFs implicated in APP following ADT treatment. Identification of APP-related CAFs contributing to the TME in PCa The PCa scRNA-seq dataset used in our study consisted of 36,424 cells from 13 samples of prostate cancer patients, with major cell types such as epithelial cells, T cells, myeloid cells, stromal cells, and B cells annotated (Fig. 3 A). CellChat analysis unveiled a myriad of interactions among these cell types (Fig. 3 B). The proportions of the six cell types across the 13 prostate cancer samples are shown in Fig. 3 C. Within the PCa dataset, stromal cells were classified into CAFs and endothelial cells (Fig. 3 D). Subsequently, NMF clustering of CAFs was performed using APPRGs, resulting in the identification of seven subtypes (Fig. 4 A). Next, pseudotime analysis displayed the trajectories of NMF-clustered CAF subtypes (Fig. 4 B). Further analysis of the feature genes for the seven NMF subtypes revealed APP-related CAFs, namely, CTSK + MRC2 + CAF-C1. CAFs that did not exhibit APP-related effects were designated NoneAPP-CAF-C2 (Fig. 4 C). In addition, CellChat analysis demonstrated that compared to NoneAPP-CAF-C2 cells, CTSK + MRC2 + CAF-C1 cells exhibited more ligand‒receptor connections with epithelial cells and T cells (Fig. 4 D, E). Additionally, by calculating the Pan-CAF score based on previously reported signatures [ 36 ], we found a strong association between CTSK + MRC2 + CAF-C1 cells and inflammatory CAFs (Fig. 4 F). As shown in Fig. 4 G, various genes related to the ECM, MMPs, and proinflammatory processes were significantly upregulated in CTSK + MRC2 + CAF-C1 cells. Identification of APPCAF-related genes using single-cell RNA-sequencing data Differential gene expression analysis of the CTSK + MRC2 + CAF-C1 and NoneAPP-CAF-C2 subgroups revealed 55 significantly differentially expressed genes, which were designated APPCAF-related genes (APPCAFRGs) (Additional file 3 : Table S3). Subsequently, through univariate Cox regression analysis, we identified 20 genes with prognostic value for BCR (Fig. 5 A). The correlation circos plot in Fig. 5 B depicts the relationships among these genes. Furthermore, analysis of copy number variation (CNV) rates revealed a high frequency of deletions in POSTN, COL10A1, and MARCKS in TCGA (Fig. 5 C). The genomic loci of these genes on human chromosomes are illustrated in Fig. 5 D.
NMF clustering analysis based on TCGA-PRAD patients Next, we performed NMF consensus clustering analysis using the expression profiles of the 20 APPCAFRGs. As depicted in Fig. 6 A and B, we successfully partitioned the TCGA-PRAD cohort into two clusters and optimized the grouping using the cophenetic coefficient as an evaluation metric. Subsequently, we assessed the prognostic disparities between the clusters using KM curve analysis and found a significant distinction in patient outcomes between the C1 and C2 subtypes (p = 0.011). Additionally, patients within the C2 cluster exhibited a markedly shorter median time to BCR (Fig. 6 C). Moreover, to further investigate the distinctive characteristics, we applied PCA, tSNE, and UMAP dimensionality reduction techniques, which clearly demonstrated significant discrepancies between the C1 and C2 subtypes (Fig. 6 D). Characteristics of APPCAF-related gene subtypes in PCa patients We performed gene differential expression analysis between the C1 and C2 subtypes, and the top 50 significant DEGs are presented in Fig. 7 A. Subsequently, we investigated the expression patterns of the 20 APPCAF-related genes in the C1 and C2 subtypes. Consistent with the findings from the univariate Cox analysis, genes with HR > 1 exhibited higher expression levels in the C2 subtype, while genes with HR < 1 showed higher expression levels in the C1 subtype (Fig. 7 B). The distribution of various clinical features in the C1 and C2 subtypes within the TCGA-PRAD samples is depicted in Fig. 7 C. Furthermore, we assessed the proportions of 23 tumor immune cell infiltrations in the C1 and C2 subtypes utilizing the ssGSEA method (Fig. 7 D). In addition, we performed GO and KEGG enrichment analyses to identify significant pathways and functions linked to the DEGs between the C1 and C2 subtypes. The KEGG enrichment analysis revealed significant enrichment of pathways such as tryptophan metabolism, propanoate metabolism, and valine, leucine, and isoleucine degradation in the C1 subtype. In contrast, the C2 subtype exhibited significant enrichment in pathways related to the cell cycle, homologous recombination, and DNA mismatch repair (Fig. 7 E). The GO enrichment analysis showed significant enrichment of the C1 subtype in various amino acid catabolic processes, while the C2 subtype exhibited significant enrichment in pathways related to STAT protein family binding, negative regulation of tyrosine kinase activity, and Fcγ receptor signaling (Fig. 7 F). Establishment and validation of the prognostic signatures of APPCAF-related genes To establish the APPCAF-related signatures, we initially employed LASSO regression and conducted tenfold cross-validation using the 20 APPCAF-related genes (Fig. 8 A, B). Four key genes, THBS2, DPT, COL5A1, and MARCKS, were identified and incorporated into the development of the prognostic model. The TCGA-PRAD cohort served as the training set, whereas the GSE116918 and GSE70769 cohorts served as the validation sets. Using the median signature risk score as a threshold, we classified patients in both the training and validation sets into high-risk and low-risk groups. The KM curve revealed a significant difference in prognosis between the high-risk and low-risk groups, with the former exhibiting worse outcomes (Fig. 8 C–E). Furthermore, Fig. 8 F–H revealed that higher risk scores were associated with an increased likelihood of BCR occurrence in prostate cancer patients.
Identification of independent prognostic factors and construction of the nomogram To further assess the predictive performance of our constructed prognostic model for BCR in PCa, we investigated the contribution of various indicators, including Gleason grade, PSA, T stage, and risk score, using univariate and multivariate Cox regression analyses in patients from the TCGA-PRAD, GSE116918, and GSE70769 cohorts. As depicted in Fig. 9 A, B, the risk score emerged as an independent prognostic factor for BCR in TCGA-PRAD patients (HR: 2.447, p = 0.002). Similarly, in the GSE116918 cohort, the risk score also demonstrated independent prognostic value for BCR (HR: 2.150, p = 0.048) (Fig. 9 C, D). Moreover, in PCa patients from the GSE70769 cohort, the risk score was identified as an independent prognostic factor for BCR (HR: 1.969, p = 0.010). Subsequently, we developed a nomogram based on Gleason grade, PSA, T stage, and risk score (Fig. 9 G). Furthermore, calibration curve analysis indicated a high level of concordance between the predicted BCRFS and the actual BCRFS (Fig. 9 H). Decision curve analysis revealed that the nomogram and risk score were more stable and accurate in predicting BCR than Gleason grade, PSA, and T stage (Fig. 9 I). Correlation between the signatures of APPCAF-related genes and clinical characteristics We further evaluated the differences in clinical characteristics between high-risk and low-risk prostate cancer patients. In the TCGA-PRAD cohort, the composition differences in clinical features between the high-risk and low-risk subgroups revealed significant disparities in Gleason grade, T stage, and N stage (Fig. 10 A). Figure 10 B illustrates a significant positive correlation between the risk score and PSA (R = 0.18, p = 0.00024). Furthermore, in the GSE116918 cohort, significant differences were observed between the high-risk and low-risk groups in terms of Gleason grade and T stage (Fig. 10 C). Additionally, there was a significant positive correlation between the risk score and PSA (R = 0.19, p = 0.0023) (Fig. 10 D). APPCAFRS-based immune-related discrepancies in PCa To characterize the disparities in the tumor immune microenvironment between the low-risk and high-risk subgroups, we conducted a comprehensive investigation of immune-related differences in the TCGA-PRAD and GSE116918 cohorts. Initially, a differential analysis of immune cell infiltration between the low-risk and high-risk subgroups in both cohorts revealed an increased abundance of resting memory CD4 + T cells, M1 macrophages, and resting dendritic cells in the high-risk subgroup, while plasma cells exhibited a lower infiltration abundance (Fig. 11 A, B). Subsequently, we calculated immune functional scores for patients in both cohorts. As illustrated in Fig. 11 C and D, the high-risk group displayed significantly elevated scores for inflammation-promoting functions, MHC class I (MHC I), and T helper cells. Additionally, an examination of the correlation between the risk score and immune checkpoint-related genes revealed a negative association between the risk score and multiple immune checkpoint-related genes (Fig. 11 E, F). In addition, the analysis of immune therapy sensitivity prediction in low-risk and high-risk patients using the TIDE database revealed that high-risk patients exhibited a diminished response to immune therapy and a greater likelihood of immune evasion (Additional file 4 : Figure S1A, B).
Furthermore, using the previously established pan-cancer genomic instability features [ 37 ], we analyzed immune subtypes within the low-risk and high-risk subgroups. As depicted in Additional file 4 : Figure S1C, high-risk patients demonstrated a higher prevalence of the C1 subtype, which scored higher in tumor mutation burden, noninteger copy number alterations, and homologous recombination deficiency. Finally, in the IMvigor210 cohort, we stratified patients based on the risk score, and in alignment with our previous findings, patients in the high-risk group exhibited a worse prognosis (Additional file 4 : Figure S1D). Prediction of chemotherapy sensitivity To further elucidate the differences in chemotherapy drug response between the low-risk and high-risk subgroups, we evaluated the predictive capacity of the risk model for chemotherapy drug sensitivity using the TCGA-PRAD dataset. Our analysis revealed that the high-risk group exhibited increased sensitivity to eight chemotherapy drugs, including SB505124 and JAK1_8709 (Fig. 12 A, B). Conversely, the low-risk group demonstrated higher sensitivity to drugs such as WIKI4 and WEHI-539 (Fig. 12 C, D). Validation of the feature genes and CD4 + T-cell early activation assay We conducted qRT‒PCR analysis to investigate the relative differential expression of these signature genes in vitro. The prostate CAFs exhibited significant upregulation of THBS2, COL5A1, and MARCKS, while DPT showed significant downregulation, when compared to the normal prostate stromal fibroblast line WPMY-1 (Fig. 13 A–D). Subsequently, we examined the expression of these signature genes in the pre-ADT and post-ADT groups by subjecting the CAF cell lines to in vitro androgen deprivation therapy. As depicted in Fig. 13 E, following ADT treatment, THBS2, COL5A1, and MARCKS were significantly upregulated, whereas DPT showed a nonsignificant downregulation. In an early activation assay, post-ADT CAFs, in contrast to pre-ADT CAFs, did not induce measurable OVA-specific T-cell activation in coculture with T cells, as indicated by early activation markers of TCR ligation (CD25 and CD69) (Fig. 13 F–H).
Discussion Growing evidence suggests that the malignant biological behaviors of tumor cells are dependent on the intercellular communication between tumor and stromal cells in a complex microenvironment [ 16 – 18 , 48 ]. As essential components of the TME, CAFs regulate tumor proliferation, angiogenesis, invasion, metastasis, and treatment resistance in numerous malignancies [ 49 , 50 ]. With recent advancements in cancer research, there are numerous approaches to overcome drug resistance, among which immunotherapy has revolutionized cancer treatment [ 51 , 52 ]. Evidence suggests that antigen-presenting cells are essential for T-cell activation and tumor immunity and that cancers can circumvent this immunity through immune editing, such as immune dominance, the absence of immune checkpoints, or downregulation of antigen-presenting cells [ 53 , 54 ]. Antigen-presenting cells are crucial for launching, programming, and regulating tumor-specific immune responses [ 55 , 56 ]. ApCAFs, a novel type of cancer-associated fibroblast capable of presenting MHC II-mediated antigens within the TME, were recently identified in pancreatic ductal adenocarcinoma and breast cancer [ 26 , 57 ]. In this study, we sorted CAFs from radical prostatectomy samples of patients who received neoadjuvant therapy and patients who did not receive neoadjuvant therapy within 3–6 months. We discovered that the expression level of MHC-II-related molecules (HLA-DQA1, HLA-DRB1, and HLA-DRA) was significantly reduced in CAFs after neoadjuvant treatment, as was the activity of the corresponding functional pathways (MHC class II protein complex assembly and antigen processing and presentation of peptide antigen via MHC class II). We focused on specific CAFs associated with antigen presentation and systematically characterized and classified CAFs in PRAD using scRNA-seq data. Ultimately, we identified the CTSK + MRC2 + CAF cluster as an APPCAF cluster that interacts strongly with T cells, which may help regulate different aspects of tumor immune microenvironment (TIME) biology. By capitalizing on the unique attributes of these CAFs, the tumor microenvironment could be precisely modulated, thus enhancing the post-ADT antitumor immune profile in PCa patients. Furthermore, we devised a predictive model for biochemical recurrence in PCa patients. This model not only holds substantial promise for biomedical applications but also facilitates accurate stratification of PCa patients at an early stage, consequently elevating long-term prognostic outcomes. Building on our preliminary findings, it is evident that the antigen presentation and processing functions of CAFs within the prostate cancer tissue microenvironment tend to wane following ADT treatment, potentially facilitating immune evasion. Consequently, directing interventions toward CAFs equipped with antigen presentation capabilities could hold significant promise in benefitting patients with prostate cancer. Against the backdrop of flourishing advancements in novel biomaterials and nanotechnology, more refined targeted nanoparticle systems have been devised for efficient drug delivery. Conventional inorganic nanomaterials, including metal nanoparticles and carbon-based counterparts, have been associated with inherent neurotoxicity [ 58 ]. Conversely, green nanomaterials are surfacing as a groundbreaking avenue, boasting reduced toxicity [ 59 ]. The research conducted by Mousavi et al.
substantiates that environmentally friendly synthesized silver nanoparticles induce apoptosis and unveil dose- and time-responsive cytotoxic as well as anticancer effects on gastric cancer cells [ 60 ]. Moreover, Patrascu et al. elucidated the efficacy of a hybrid nanosystem comprising biopolymeric membranes and silver nanoparticles, manifesting pronounced cytotoxicity against murine fibroblast L929 cells [ 61 ]. Our enthusiastic outlook is grounded in the convergence of biomaterials and biomedicine directed at CAFs and strategic intervention in the intricate tumor microenvironment, a burgeoning domain of research. Increasing evidence has confirmed the prognostic value or therapeutic prediction of CAF-related gene markers in PRAD [ 62 ]. Based on the prognostic value of the CTSK + MRC2 + CAF cluster, we identified the differentially expressed genes between the CTSK + MRC2 + CAF cluster and other CAF clusters and further developed an APPCAF-based risk signature with 4 genes; it was composed of one protective gene (DPT) and three risk genes (THBS2, COL5A1, and MARCKS). Among these four genes, MARCKS had the highest CNV loss frequency. CNV mutation burden affects gene expression level or activity, thereby influencing genetic modulation and causing PRAD progression [ 63 , 64 ]. We further clarified the discriminative ability of the APPCAF signature genes for prostate cancer via NMF clustering, which suggested that patients in APPRG Cluster C2 have poorer clinical outcomes. We also discovered that prostate cancer samples with varying APPRG expression levels were significantly correlated with various pathways. APPRG Cluster C2 samples were significantly associated with mismatch repair, DNA replication, and somatic diversification of immunoglobulins, while APPRG Cluster C1 samples were significantly associated with fatty acid metabolism, glutathione metabolism, and linoleic acid metabolism. Previous studies demonstrated a significant correlation between microsatellite instability or mismatch repair status and the efficacy of immune checkpoint inhibitors in prostate cancer [ 65 – 67 ]. Although such mutations are uncommon, immune checkpoint blockade is effective for patients with advanced prostate cancer and mismatch repair gene mutations [ 68 ]. CAFs interact intimately with immune T cells in the TME, thereby promoting the progression of the tumor [ 69 , 70 ]. In our signature, the risk score exhibited a significant positive correlation with CD4 + T cells and a significant negative correlation with CD8 + T cells. In addition, a negative association was found between a high-risk score and the expression level of immune checkpoint inhibitor target genes. The TIME is composed of numerous and diverse immune cells in tumor tissues and significantly influences the immune status of the TME, thereby influencing the immunotherapy efficacy of patients [ 71 – 73 ]. As essential components of the TME, CAFs can interact directly with immune infiltrates and remodel the immunosuppressive TME, allowing tumor cells to evade immune surveillance [ 74 – 76 ]. Despite the paucity of research on CAFs, Elyada et al. and Friedman et al. demonstrated the effect of CAFs on the TIME via antigen presentation [ 26 , 57 ]. In addition, apCAFs can trigger the local activation of CD4 + T cells and induce memory. Kerdidani et al. demonstrated that apCAFs were also capable of activating tumor-specific CD4 + T cells and recruiting nearby CD4 + T cells both in vivo and in vitro.
Concurrently, previous studies showed that CAFs activate tumor cells and deliver microRNAs or other substances to tumor cells after androgen deprivation [ 26 , 47 , 77 ]. Few studies have considered the changes in the immune characteristics of CAFs before and after castration to study their effect on tumor cells. Regarding novelty, we demonstrated for the first time that androgen withdrawal leads to a decline in the antigen presentation and processing-related phenotype of prostate CAFs. This observation is pivotal, as it holds the potential to amplify the effectiveness of ADT in patients with prostate cancer. Based on the above clues, we further identified a potential CAF subtype in the prostate associated with antigen presentation and processing at the single-cell level. Similar phenotypic features of CAFs have been previously reported in pancreatic cancer [ 35 ] but not in prostate cancer. Notably, no previous research on the prostate cancer immune microenvironment has singled out antigen presentation and processing-associated CAFs. Furthermore, a novel signature related to APPCAFs in prostate cancer was established. Compared with previously published CAF-associated prostate cancer signatures [ 61 , 78 ], our signature is more reliable and clinically instructive, being based on clinical specimens and derived from a group of potential CAF subtypes. Meanwhile, our findings revealed that the APPCAF-based signature had predictive potential for both prognosis and treatment response. These findings shed new light on the role of APPCAFs in the remodeling of tumor niches and the immune status of the TME. However, our study also has several limitations. The generation of APPCAF clusters and APPCAF-based risk signatures was accomplished using retrospective data obtained from a public database. To avoid selection bias and enhance the accuracy of the analysis, future validation of this signature will require more prospective and multicenter PRAD cohorts. In addition, we only evaluated the APPCAF-based risk signature in predicting prognosis. Therefore, our next objective is to conduct a comprehensive study aimed at elucidating the potential mechanisms underlying this signature, with the ultimate goal of clinical application.
Conclusions We used a comprehensive bioinformatics analysis to identify DEGs between patients who received or did not receive ADT and discovered that CAFs downregulate the activity of antigen presentation and processing-related pathways after castration. CTSK + MRC2 + CAF-C1 was identified as a CAF subtype associated with potential antigen presentation and processing. A signature based on four APPCAFRGs (THBS2, DPT, COL5A1, and MARCKS) was developed and validated, and the risk score derived from the signature demonstrated an inverse correlation with the infiltration of various immune cells, indicating that high risk was significantly correlated with poorer prognosis and clinical outcomes in PRAD. In vitro experiments were conducted to confirm the expression levels of the four APPCAFRGs. These findings contribute to a better comprehension of the causes of the poor efficacy of immunotherapy in prostate cancer. Meanwhile, investigation of this prostate cancer CAF subgroup clarifies the antigen presentation and processing characteristics of CAFs in PRAD and offers new avenues for exploring potential combination treatment strategies.
Background Cancer-associated fibroblasts (CAFs) are heterogeneous and can influence the progression of prostate cancer in multiple ways; however, their capacity to present and process antigens in prostate adenocarcinoma (PRAD) has not been investigated. In this study, antigen presentation and process-related CAFs (APPCAFs) were identified using bioinformatics, and the clinical implications of APPCAF-related signatures in PRAD were investigated. Methods SMART technology was used to sequence the transcriptome of primary CAFs isolated from patients undergoing different treatments. Differentially expressed gene (DEG) screening was conducted. A CD4 + T-cell early activation assay was used to assess the activation degree of CD4 + T cells. The datasets of PRAD were obtained from The Cancer Genome Atlas (TCGA) database and NCBI Gene Expression Omnibus (GEO), and the list of 431 antigen presentation and process-related genes was obtained from the InnateDB database. Subsequently, APP-related CAFs were identified by nonnegative matrix factorization (NMF) based on a single-cell RNA-seq (scRNA-seq) matrix. GSVA functional enrichment analyses were performed to depict the biological functions. A risk signature based on APPCAF-related genes (APPCAFRS) was developed by least absolute shrinkage and selection operator (LASSO) regression analysis, and the independence of the risk score as a prognostic factor was evaluated by univariate and multivariate Cox regression analyses. Furthermore, a biochemical recurrence-free survival (BCRFS)-related nomogram was established, and immune-related characteristics were assessed using the ssGSEA function. The immune treatment response in PRAD was further analyzed by the Tumor Immune Dysfunction and Exclusion (TIDE) tool. The expression levels of hub genes in APPCAFRS were verified in cell models. Results There were 134 upregulated and 147 downregulated genes, totaling 281 differentially expressed genes, among the primary CAFs. The functions and pathways of the 147 downregulated DEGs were significantly enriched in antigen processing and presentation processes, MHC class II protein complex and transport vesicle, MHC class II protein complex binding, and intestinal immune network for IgA production. Androgen withdrawal diminished the activation effect of CAFs on T cells. NMF clustering of CAFs was performed using APPRGs, and pseudotime analysis yielded the antigen presentation and process-related CAF subtype CTSK + MRC2 + CAF-C1. CTSK + MRC2 + CAF-C1 cells exhibited ligand‒receptor connections with epithelial cells and T cells. Additionally, we found a strong association between CTSK + MRC2 + CAF-C1 cells and inflammatory CAFs. Through differential gene expression analysis of the CTSK + MRC2 + CAF-C1 and NoneAPP-CAF-C2 subgroups, 55 significant DEGs were identified, namely, APPCAFRGs. Based on the expression profiles of APPCAFRGs, we divided the TCGA-PRAD cohort into two clusters using NMF consensus cluster analysis, with the cophenetic coefficient serving as the evaluation index. Four APPCAFRGs, THBS2, DPT, COL5A1, and MARCKS, were used to develop a prognostic signature capable of predicting BCR occurrence in PRAD patients. Subsequently, a nomogram with stability and accuracy in predicting BCR was constructed based on Gleason grade (p = n.s.), PSA (p < 0.001), T stage (p < 0.05), and risk score (p < 0.01). The analysis of immune infiltration showed a positive correlation between the abundance of resting memory CD4 + T cells, M1 macrophages, and resting dendritic cells and the risk score.
In addition, the mRNA expression levels of THBS2, DPT, COL5A1, and MARCKS in the cell models were consistent with the results of the bioinformatics analysis. Conclusions APPCAFRS based on four potential APPCAFRGs was developed, and their interaction with the immune microenvironment may play a crucial role in the progression to castration resistance of PRAD. This novel approach provides valuable insights into the pathogenesis of PRAD and offers unexplored targets for future research. Supplementary Information The online version contains supplementary material available at 10.1186/s12967-023-04807-y.
Acknowledgements We thank the other partners of our laboratory for their helpful suggestions. Author contributions Concept and design: WW, TL, ZX. Technical support: TL. Methodology and experiment: WW. Acquisition, analysis or interpretation of data: WW, TL, ZX. Software and statistical analysis: JZ, YZ. Pictures and tables: ZX. Drafting of the manuscript: YR. Language modification and guidance: BH. All authors approved the final version of the manuscript for submission. Funding This research was funded by the National Natural Science Foundation of China (Nos. 82072849 and 82102857) and the Shanghai Sailing Program (Nos. 21YF1437000 and 22YF1435100). Data availability The data used to support the findings of this study are available from the corresponding author upon request. Declarations Ethics approval and consent to participate The collection of clinical samples was approved by the Ethics Committee of Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine. We also obtained written informed consent from all subjects. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-16 23:45:34
J Transl Med. 2024 Jan 14; 22:57
oa_package/f6/69/PMC10789066.tar.gz
PMC10789067
38225657
Background Suboptimal diets, which contribute to malnutrition and dietary risks, are a leading cause of chronic disease and poor health globally [ 1 – 3 ]. As such, there is a need to prioritize achieving global nutrition security. Nutrition security refers to consistent access to food of sufficient quantity and quality in terms of variety, diversity, nutrient content, and safety to allow people to meet their dietary needs and food preferences for a healthy life [ 4 ]. Access to nutrient-dense foods is important for nutrition security and for consuming a diet that reduces the risk of chronic diseases, including Type 2 diabetes, cardiovascular disease, and certain cancers [ 5 ]. Nutrient-dense foods are those that provide vitamins, minerals and other health-promoting components with little to no added sugars, saturated fat, and sodium [ 5 ]. Unfortunately, access to nutrient-dense foods is threatened by climate change, as climate change and rising levels of carbon dioxide threaten crop yields and nutrient density [ 6 – 8 ]. Furthermore, suboptimal-diet-related health risks are expected to worsen as climate change progresses [ 6 ]. To reduce the risk of diet-related chronic diseases, and to protect human and planetary health, a global shift towards sustainable diets is imperative [ 6 , 7 , 9 – 11 ]. Sustainable diets are those with low environmental impacts which contribute to food and nutrition security, and to a healthy life for present and future generations [ 12 ]. Prioritizing this shift is important to ensure access to nutritious, health-supporting diets for a growing population within planetary bounds [ 6 , 7 , 9 – 11 ]. The food environment, or the physical, economic, policy and sociocultural surroundings [ 13 ] in which someone makes decisions about the foods they eat, can impact access to and consumption of healthy or nutrient-dense foods [ 14 – 16 ]. The food environment is also a critical place to implement initiatives aimed at supporting sustainable dietary patterns [ 17 ]. In the present study, we examine the consumer nutrition sub-environment, where consumers interact with food and its purchasing [ 14 ]. The consumer nutrition environment includes the availability of nutrient-dense food options, price, in-store marketing/promotion, placement of food items, and availability of nutrition information, all of which may impact what foods people select and eat [ 14 – 16 ]. Because the consumer nutrition environment is a place where consumers make decisions about which foods they will purchase and consume, these environments offer an opportunity to implement interventions to support sustainable, healthy diets [ 13 , 17 ]. Consumer nutrition environments hold a high potential for impact but, at present, tend to be less measured than some components of the food environment, as they have a potentially large number of variables to measure [ 14 ]. There is a need to optimize food environments, including consumer nutrition environments, to allow for greater nutrient-dense food access and opportunities to consume sustainable dietary patterns [ 9 ]. To inform research and policy interventions, it is important to establish rigorous, reliable and valid measures of consumer nutrition environments for assessment and planning, surveillance, research, evaluation and advocacy [ 18 , 19 ]. However, there is a lack of standard methods for assessing food environments, including consumer nutrition environments [ 20 , 21 ].
While many food environment measurements exist, very few consider sustainability [ 21 ]. Furthermore, there is a lack of validity and reliability data on many measures [ 22 ]. This review aimed to summarize literature on existing consumer nutrition environment measurements that assess nutrient-dense foods and food sustainability. It also aimed to summarize validity and reliability assessments of these measurements to characterize their rigor.
Methods The Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) was used for planning and presentation of results [ 23 ]. The PRISMA-ScR checklist contains 20 essential items plus 2 optional items for good reporting in scoping reviews [ 23 ]. Search strategy A systematic literature search was conducted using the PubMed, Web of Science, Scopus, PsycINFO and Cochrane Library electronic databases. Search strategy terms included "grocery", "supermarket", "retailer", "bodega", "corner store", "market", AND "nutrition environment", "food environment", AND "audit", "assess", "measure", AND "sustainable" or "climate". Specific search strategies used for each database searched can be found in the study protocol as Supplementary file 1 . This study included articles published in English from January 1, 2002 to the date of the search, June 4, 2022. The authors found very few research articles about consumer nutrition environment measurement prior to 2007, but selected a search start date of 2002 to ensure any relevant research articles published 20 years prior to the search date were included. Additionally, previous reviews and reference lists of included studies were manually searched, and relevant articles were added accordingly. Covidence software was used to manage abstract and full-text screening, and data extraction. Inclusion and exclusion criteria Eligibility criteria were developed to capture relevant peer-reviewed literature about auditing measures designed to assess consumer food environments in food stores, specifically those that measured the availability of nutrient-dense foods. They were also developed to capture measurements of in-store sustainability practices in select consumer nutrition environments (with an emphasis on supermarkets, grocery stores, or corner stores/bodegas). Studies that included a measure assessing nutrient-dense food availability and/or sustainable food practices in consumer nutrition environments, specifically food retail stores, were included, as a primary objective of the study was to summarize tools that assess these constructs. Financial and cultural inclusivity were included as constructs of interest because access to affordable and culturally acceptable foods is a key component of a sustainable dietary pattern [ 12 ]. Studies that focused on modifications, or establishment of reliability or validity, of existing consumer nutrition environment measures were also included to help provide context on the rigor of existing measurement tools. Exclusion criteria were also applied. Studies focused on measures designed for assessing food retailer types that are not supermarkets, grocery stores, or corner stores/bodegas were excluded (e.g., measurement tools that measured farmers markets, restaurants, etc.), as these tools are functionally different from those that measure grocery stores, supermarkets, and bodegas/corner stores. Measures designed for assessing nutrient-dense food availability or food sustainability via analysis of advertisements or using online resources (e.g., Yelp) were also excluded, as the focus of the present study was primarily on the in-store experience. Furthermore, measurement tools designed specifically for rural food environments were also excluded, as rural food retail stores may have different assessment needs, and to reduce scope, the present study opted to focus on urban or similar environments.
Studies that used geospatial (GIS) approaches to assessing community nutrition environments were excluded from the present study, as its focus is on consumer nutrition environments. Studies published before January 1, 2002 or after June 4, 2022 were not included. Finally, systematic reviews were excluded, as most reviews on similar topics were published more than 5 years ago, limiting their relevance given the volume of recent publications in this area. Thus, only original research studies were included. Screening The screening process followed the PRISMA extension for scoping reviews [ 24 ]. Two members of the research team first independently applied inclusion and exclusion criteria to the titles and abstracts to determine eligibility. Researchers then applied inclusion and exclusion criteria to full-text articles that were deemed eligible after title and abstract screening. To ensure reliability, the reviewers met to discuss and resolve discrepancies after the title and abstract, and full-text, screenings. All disagreements between researchers throughout the screening processes were resolved in a group discussion with at least two members of the research team. Data extraction Two researchers independently extracted data from each article related to: the country each study took place in, study aims, funding source, food retailer types measured, assessment tool formats, assessment tool name, whether or not each tool was a modification of an existing tool, constructs assessed by each tool, foods assessed by each tool, total number of items assessed by each tool, measurement of federal food assistance programs, and mentions of validity and reliability assessment. All extraction disagreements between the researchers were resolved in a group discussion. A detailed description of each construct extracted, and the rationale for extraction, can be found in Supplementary file 2 . Synthesis of data Two researchers independently extracted selected data from each manuscript using Covidence. For some categories, such as assessment tool type and constructs assessed, researchers could select from a list of common options for each data point. If available options were not reflective of data in a manuscript, researchers also had the opportunity to write in answers, verbatim. Researchers had space to fill in other data constructs, including country or countries of study origin and assessment tool name, verbatim from manuscripts. Other constructs assessed, such as validity and reliability, were answered as a binary (yes/no). Two researchers met to resolve any discrepancies using Covidence software. In the case of any data extraction discrepancy, the research team carefully reviewed the manuscript together and determined the most accurate representation of the data to complete the data extraction sheet. Once the final data extraction sheets were agreed upon by the team, the lead researcher reviewed each extraction sheet for completion and accuracy. The agreed-upon data were synthesized in a table (see Supplementary file 3 ).
Results
The search strategy yielded a total of 2459 studies, including 2 studies added from backward citation chasing. One thousand one hundred twenty-five duplicates identified by Covidence were removed, resulting in a total of 1334 articles for title and abstract screening. During title and abstract screening, researchers determined that 1244 articles did not meet inclusion criteria, and the 90 that did were next screened as full texts. The most common reasons for exclusion in the final review process included: (1) studies published on existing measures that were not modifications or adaptations of existing measures but rather utilized a tool already documented in the review without any original contribution, (2) the measurement did not examine consumer nutrition environments, but rather other parts of the food environment, (3) the measurement was made specifically to be used in rural contexts, or (4) the measurement was created to assess food outlets that were not food retailer types listed in the inclusion criteria (e.g., farmers market or restaurant assessments). A total of 58 articles were included for data abstraction. Figure 1 provides additional details on the study identification, screening, and inclusion process. A complete summary chart of data extracted from each manuscript can be found as Supplementary file 3.

Location
Instruments were developed primarily in the United States (US) (n = 37) [25–58], Australia (n = 4) [59–62], New Zealand (n = 3) [63–65], Canada (n = 3) [66–68], Brazil (n = 3) [69–71], the United Kingdom (n = 2) [72, 73], and Chile (n = 2) [74, 75]. Two studies (n = 2) were developed to be used in multiple countries [76, 77]. Additional countries examined included China (n = 1) [78], India (n = 1) [79], South Africa (n = 1) [80], and Spain (n = 1) [81].

Assessment method
The most common assessment method was a checklist or similar format (n = 36) [25, 26, 28, 32–37, 39–41, 44–49, 51, 52, 56–59, 61, 66–70, 72–74, 78, 79, 81, 82]. Additional assessment methods included the use of a market basket approach, which aims to measure foods commonly consumed (n = 1) [50], use of an observational form or tool [29, 71], and assessment of shelf space (n = 5) [29, 31, 53, 64, 65, 83]. Other studies used technology, including an electronic store survey [42], a mobile app (n = 1) [60], photo assessments (n = 1) [75], wearable cameras (n = 1) [63], and a combination of photo and voice assessment of food environments (n = 1) [27]. Some measures used a combination of methods [54, 77].

Constructs assessed
The majority of measures assessed food availability (n = 53) [25, 26, 28–37, 47, 48, 50–56, 58–72, 74, 76–83] and food prices (n = 36) [25, 26, 29, 32–34, 39, 41, 42, 46–48, 51, 53, 56, 58–60, 66–70, 72–74, 76, 78–83]. Seven studies examined advertisements [34, 55, 69, 70, 76, 83] and 13 examined promotion [39, 43, 53, 59, 60, 62, 63, 65, 70, 71, 73, 76]. Other constructs assessed included variety (n = 16) [25, 26, 33, 37, 44, 59, 64–67, 71–74, 81, 83], comparison of healthier vs. less healthy options (n = 7) [26, 33, 40, 67, 72, 82], placement (n = 9) [39, 43, 55, 59, 60, 62, 63, 69, 73], and accessibility (n = 5) [27, 31, 39, 61]. Few studies (n = 2) assessed food sustainability [47, 68].
Foods assessed
Among foods assessed, the most common food categories included fruits (n = 45) [25–30, 32–37, 40, 42–47, 49–54, 56–60, 63, 66–71, 73, 75, 77, 78, 80, 82, 83], vegetables (n = 44) [25–30, 32–37, 40, 42–47, 49–54, 56–60, 63, 66–69, 71, 73, 75, 77, 78, 80, 82, 83], cow's milk/dairy (n = 32) [25, 26, 28, 30, 33–35, 37, 39–42, 44, 45, 47, 48, 50–53, 56, 59, 60, 63, 66, 67, 69, 75, 78, 80–82], grains or grain products, such as bread or cereal (n = 24) [25, 33, 35, 39, 42, 44, 46–48, 59, 60, 63, 67, 69, 75, 78, 81, 82], and meat (n = 23) [28, 36, 40, 41, 44, 46, 50–53, 56, 59, 60, 63, 66, 68, 69, 75, 78, 80–82]. Other food categories commonly assessed included snack foods (n = 17) [29, 40, 45, 49, 53–55, 59, 60, 63, 66, 68, 71, 77, 80], candies (n = 5) [29, 39, 43, 54, 71], ultra-processed foods (n = 4) [69, 74, 79, 83], sugary beverages or sugar-sweetened drinks (n = 7) [29, 44, 63, 71, 78, 80], and meat alternatives (n = 4) [40, 56, 66, 67]. Several studies (n = 6) broadly compared healthier or 'minimally processed' foods to those that were less healthy or processed [31, 45, 62, 64, 76, 82]. Some studies focused on one or two categories of foods, such as junk foods [65] or fruits and vegetables [57, 58]. Fruits and vegetables were among the most commonly assessed food items or food categories. Eighteen studies measured both fresh fruits and fresh vegetables [25, 26, 28, 30, 35–37, 40, 42, 45, 46, 51, 53, 54, 56, 67, 81, 82]. Twenty-two studies assessed fruits and vegetables without specifying the type (fresh, frozen, canned, etc.) [27, 29, 32–34, 39, 43, 47, 50, 52, 57–60, 66, 68, 71, 73, 77, 78, 80, 83]. Frozen fruits (n = 10) [30, 35, 36, 40, 44, 51, 56, 67, 68, 82], frozen vegetables (n = 15) [25, 30, 35, 36, 40, 44, 51, 56, 67, 68, 82], canned fruits (n = 13) [25, 26, 30, 35, 36, 40, 44, 51, 54, 67, 68, 75, 82], and canned vegetables (n = 13) [25, 26, 30, 35, 36, 40, 44, 51, 54, 67, 68, 75, 82] were also frequently assessed. One study measured "single" fruits and vegetables [44], one measured dried fruit [50], and another "all fruits and vegetables" [49]. Four additional studies mentioned assessing "produce" [35, 41, 48, 55].

Access
Among studies conducted in the US (n = 37) [25–30, 32, 34–37, 39–42, 44, 46, 47, 51–53, 56, 59, 62, 65–69, 71–73, 78, 79, 81–83], eight studies collected information on whether or not stores accepted the Supplemental Nutrition Assistance Program (SNAP) [25, 26, 28, 32, 40, 42, 48, 50, 55], and seven collected information on whether or not stores accepted the Special Supplemental Nutrition Program for Women, Infants and Children (WIC) [25, 26, 28, 32, 40, 48, 50]. Few studies (n = 2) examined other aspects of accessibility, such as physical accessibility [27, 39].

Measure development and adaptations
Thirty-six studies in the review presented measures that were adaptations of other, existing measures [25–30, 32, 34–37, 39–42, 44, 46, 47, 51–53, 56, 59, 62, 65–69, 71–73, 78, 79, 81–83].
For example, some studies modified or adapted an existing measure to fit a new geographic context or food retailer store type. The most commonly adapted measure was the Nutrition Environment Measures Survey for stores (NEMS-S) developed by Glanz et al. (2007) (n = 20) [25, 26, 28, 34, 35, 37, 40, 46, 52, 53, 56, 66–68, 71, 78, 79, 81–83]. Some measures modified or combined several measures; for example, the FoodNest measure created by Glickman et al. (2021) [34] modified the Nutrition Environment Measures Survey in Corner Stores (NEMS-CS) and the Bridging the Gap Community Obesity Measures Program. The number of food items assessed ranged from 7 to 196 in studies that reported the items measured.

Validity and reliability assessment
Among the 58 studies included in the final review, 24 mentioned assessing validity of the food environment measures [30–33, 35, 39, 40, 42, 43, 49, 50, 59, 60, 63, 66–71, 73, 78, 81, 82]. Five additional studies mentioned basing their measures on existing validated measures [28, 41, 52, 68]. Specifically, seven examined construct validity [33, 43, 59, 65, 69–71], four examined face validity [32, 35, 53, 81], and one examined criterion validity [39]. Thirty-one studies mentioned assessing reliability [25, 26, 28, 30–33, 35, 39–43, 53, 54, 58–60, 62, 65–73, 78, 81, 82]. The most common means of reliability assessment was inter-rater, inter-observer, or inter-coder reliability (n = 24) [25, 26, 30, 32, 33, 35, 39, 40, 42, 43, 49, 50, 59, 60, 63, 66–68, 70–73, 78, 82].
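The study-flow and rigor figures reported above can be checked with simple arithmetic. The sketch below uses only numbers stated in this Results section; it is a verification aid, not part of the review methodology. Note that 31/58 and 24/58 correspond to the reliability and validity proportions discussed later.

```python
# Arithmetic check of the PRISMA flow counts and rigor proportions
# reported in this Results section.

total_records = 2459          # search yield, including 2 from citation chasing
duplicates_removed = 1125
title_abstract_screened = total_records - duplicates_removed            # 1334
excluded_at_title_abstract = 1244
full_text_screened = title_abstract_screened - excluded_at_title_abstract  # 90
included = 58                 # remaining after full-text exclusions

assert title_abstract_screened == 1334
assert full_text_screened == 90

validity_n, reliability_n = 24, 31
print(f"Validity assessed: {validity_n / included:.1%}")       # ~41.4%
print(f"Reliability assessed: {reliability_n / included:.1%}")  # ~53.4%
```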
Discussion
Given that food represents a key opportunity to protect human and planetary health, and that the consumer nutrition environment represents an important opportunity to improve access to nutrient-dense and sustainable foods [17], robust and comprehensive measures of these environments are important [84, 85]. This review aimed to summarize the literature on existing consumer nutrition environment measures, including their assessment of access to nutrient-dense foods and food sustainability practices, and their reliability and validity. Regarding study aims, many studies included in this review aimed to develop new tools or validate existing ones, or to assess, describe, document, or compare food environments. Other aims included measurement of specific foods (e.g., fruits and vegetables) or assessment of the healthiness of food environments. Many measures exist, including many checklist or similar formats (e.g., questionnaires), shelf space assessments, market basket approaches, and some technology-enhanced methods (e.g., mobile apps). Constructs frequently measured include availability, price, quality, variety, placement, accessibility, and comparison of healthy vs. less healthy food choice options. Only two studies included any assessment of environmental sustainability. Regarding foods assessed, almost all studies included in this review measured fruits and vegetables. Other foods assessed included cow's milk/dairy, grains or grain products, meats, snack foods, sugar-sweetened drinks, and candies or ultra-processed foods. Thirty-six measures were adaptations or modifications of other measures. The most commonly assessed food retailer types included convenience stores (n = 31), supermarkets (n = 29), and grocery stores (n = 28), with other food retailer types including corner stores (n = 5) and dollar stores (n = 4). Of the 58 studies included in the review, 24 assessed validity and 31 assessed reliability. Many studies measured "healthy" foods or food items. The definition of healthy varied by study, and some did not specify or clarify what criteria were used to define foods as healthy. This makes comparison of findings across various consumer nutrition environment assessments challenging. The authors hence suggest that future consumer nutrition environment measures clearly define the criteria for categorizing foods as healthy or nutrient dense. For example, the definition of nutrient-dense could follow the definition in the most recent Dietary Guidelines for Americans (DGAs) [5], or the Food and Drug Administration's definition of "healthy" for food labeling could be used [86]. While these two definitions differ slightly, they exemplify potential means of systematically categorizing foods according to objective metrics, such as sodium and added sugars content. While many studies measured availability of healthy foods, only two studies included any assessment of environmental sustainability. Specifically, a study by Lupolt and colleagues examined the availability of sustainable food choices, food waste, packaging reduction, availability of organic foods, milk produced without hormones or antibiotics, grass-fed milk, and plant-based milk [47]. Mollaei and colleagues developed a measure assessing the availability of foods needed to achieve a low-carbon dietary pattern, based on what is available to Ontario residents [68].
The latter is more in line with major efforts to shift toward sustainable diet patterns that emphasize plant-based diets and move away from high meat consumption, particularly ruminant meats such as beef and lamb, for overall food sustainability [9]. While these measures acknowledge the importance of examining food sustainability as part of food environment assessment, research gaps remain regarding how much certain production practices, such as organic agriculture, matter for the overall sustainability of a food product [87], which may limit current attempts to quantify food sustainability capacity in consumer nutrition environment settings. In addition to increasing standardization for measuring nutrient-dense foods and sustainable foods, there is also an opportunity to determine the appropriate scope and method for consumer nutrition environment measures. The large range of items or varieties of food types assessed (7 to 196) indicates marked differences in the scope of existing measures. Given that a seven-item measure was found to have comparable validity with the original NEMS-S [52], it may be worth exploring how many measurement items are adequate to measure consumer nutrition environments, as the optimal scope of assessment that meets research needs while maintaining logistical feasibility remains unclear. Regarding methodology, the results of the current review show that many studies measured entire food categories using a single food or a few specific foods as proxies. For example, NEMS-S and other studies assess the availability of fruits and vegetables based on whether a store stocks a checklist of items, such as apples and carrots [33]. While measuring selected foods may be useful in some food retail stores, surveying specific foods as proxies for larger food categories has the potential to miss other foods that may be available to build nutrient-dense and sustainable food patterns, especially across geographic and cultural contexts. There are several measures that assess foods relevant to certain geographical or cultural contexts, including specific cities, states, or cultural food patterns [26, 30, 35, 36, 67, 81]. Future measures may build upon these tools and aim to broadly assess overall nutrient-dense food availability and food sustainability capacity across cultural contexts. Lastly, regarding rigor, the present study found that 53.45% (31/58) of the included studies assessed reliability and 41.38% (24/58) assessed validity. A 2017 systematic review focused on food environment assessment reported that 25.9% of tools measuring the food environment assessed reliability and 28.2% reported validity [84]. Establishing validity and reliability is important for ensuring data are replicable and results are accurate [88]. There is thus an overall need for improved reliability and validity assessment of food environment tools, including consumer nutrition environment tools [89], to improve measurement capacity and rigor [18, 19, 90]. Overall, the results of the current review suggest a wide range of consumer nutrition environment measures. They also highlight opportunities to improve systematic measurement of both nutrient-dense foods and sustainability capacity, and the importance of considering cultural context and inclusivity. These findings align with those of a 2012 systematic review of consumer nutrition audit tools [84], suggesting continued room for improvement in the consumer nutrition assessment space.
Measures in this review were also heterogeneous, making it difficult to draw conclusions across studies. This limitation is not unique to consumer nutrition environment measures, but exists across food environment assessment as a whole [19, 84]. Enhanced reliability and validity may help to increase the rigor of existing and future measurements. Furthermore, while the field of food environment measurement has collected copious amounts of data, there is still no consensus on the best ways to manage or utilize the data [82]. Future efforts may establish best practices for managing, analyzing, and interpreting consumer nutrition environment data. This study has several limitations. First, as a scoping review, critical appraisal of evidence quality is not a requirement and was not conducted [91]. This is a limitation, as it does not identify gaps in the literature that may exist due to low-quality evidence [91]. Additionally, as a scoping review, it is not exhaustive or comprehensive, but rather assesses an area of inquiry, in our case, consumer nutrition environment assessment of nutrient-dense food availability and food sustainability capacity [91]. This research is intended to map key concepts to inform future systematic reviews and/or research [91]. Thus, this review alone is not a complete and representative example of all aspects of consumer nutrition assessment. It excluded literature about tools that measure rural consumer nutrition environments, or other aspects of the consumer nutrition environment, including those tailored to other food retailer types, such as farmers markets or restaurants, which may play important roles in the lives and diets of consumers. Lastly, studies outside of the electronic databases used in this review may have been missed. Despite these limitations, this review offers a systematic search strategy completed in five databases that cover a range of health and public health-related subject areas, including those related to nutrition and sustainability. Furthermore, the application of the PRISMA-ScR guidelines to the planning and dissemination of the review adds rigor to the scoping review methodology adopted [24]. Finally, this study contributes to the existing consumer nutrition environment literature by adding a sustainability component, which is critical to support efforts on nutrition and food security, as well as planetary health, going forward.
Conclusions
Many consumer nutrition environment measures exist, with a wide range in scope and constructs assessed. Most commonly, consumer nutrition environment measures assessed availability, price, quality, variety, placement, accessibility, and comparison of healthy vs. less healthy food choice options; only two measures included any mention of environmental sustainability. Furthermore, many studies lack reliability and validity assessment. There is an opportunity to improve consumer nutrition environment assessment with validated, reliable measures that utilize recent data on nutrient-dense foods and food sustainability capacity. Such measures will help public health researchers, practitioners, and policymakers with research, planning, evaluation, and advocacy targeting improved nutrient-dense food availability and food sustainability capacity in consumer food environments.
Consumer nutrition environments are defined as places in which consumers interact with the food they eat; these food choices can impact human and planetary health. Assessment measures for consumer nutrition environments are numerous and vary widely in what, and how, they assess the food environment. The objective of this scoping review was to synthesize existing evidence on nutrition environment measurements and their capacity to assess nutrient-dense food access and food sustainability capacity. Eligibility criteria were developed to capture relevant peer-reviewed literature about auditing measures designed to assess nutrient-dense foods and food sustainability capacity in the consumer nutrition environment. A search strategy was conducted to collect articles published between January 1, 2002 and June 4, 2022, using the PubMed, Web of Science, Scopus, PsycINFO and Cochrane Library electronic databases. Of 2457 screened manuscripts, 58 met inclusion criteria. Study aims, funding source(s), types of retailers assessed, assessment format and name, constructs measured, food categories measured, assessment of validity and/or reliability, and other relevant data were extracted from each manuscript. Results showed that most measures use checklists, surveys, questionnaires, or similar methods to assess availability, quality, and price of select food items as assessment constructs. Most do not assess nutrient-dense food availability, and even fewer assess food sustainability. Development of comprehensive, reliable, and valid consumer nutrition environment measures that assess nutrient-dense food availability and food sustainability is important for research, planning, evaluation, and advocacy aimed at improving consumer food environments for human and planetary health. Supplementary Information The online version contains supplementary material available at 10.1186/s13690-023-01231-y. Keywords
Supplementary Information
Abbreviations
Preferred Reporting Items for Systematic Reviews; Preferred Reporting Items for Systematic Reviews and Meta-Analyses; Federal food assistance program; Nutrition Environment Measures Survey; Supermarkets; Supplemental Nutrition Assistance Program; Women, Infants and Children

Acknowledgements
The authors would like to thank Matthew Kibbee, PhD, for his assistance with evidence synthesis methods and best practices. The authors also thank Rachel Kuzmishin, Milagro Lara, Sydney Nhambiu and Ellie Ji for their contributions to the data extraction process.

Authors' contributions
KB wrote and ran the search strategy. RF was involved in conceptualization. KB and LB were involved in the screening and data extraction process. KB wrote the main manuscript text and made the tables and figures. All authors reviewed the manuscript.

Funding
Research was funded by a Cornell Center for Health Equity pilot project grant.

Availability of data and materials
Not applicable.

Declarations
Ethics approval and consent to participate: Not applicable. Consent for publication: Not applicable. Competing interests: The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
Arch Public Health. 2024 Jan 15; 82:7
oa_package/67/29/PMC10789067.tar.gz
PMC10789068
0
Background
Diabetic retinopathy (DR) is a microvascular disorder resulting from prolonged exposure to diabetes mellitus. It is a grave complication of diabetes and a leading cause of blindness globally. As per estimates from the International Diabetes Federation (IDF), the worldwide population suffering from diabetes mellitus (DM) was 463 million in 2019, and it is projected to increase to 700 million by 2045 [1]. This upsurge in diabetes prevalence is accompanied by a heightened risk of DR and its associated complications. The Global Burden of Disease Study identifies DR as the fifth most common cause of blindness and moderate to severe vision impairment in adults aged 50 and above, and the main cause of vision impairment in adults of working age [2]. Diabetic maculopathy and complications of proliferative DR, such as vitreous hemorrhage, tractional retinal detachment, and neovascular glaucoma, account for the majority of vision loss [3]. Furthermore, the increasing number of diabetics also increases the incidence of DR. According to the American Academy of Ophthalmology's most recent epidemiological data, the number of adults worldwide with DR is estimated to be 103.12 million, with a projected increase to 160.50 million (55.6%) by 2045 [4]. In low- and middle-income countries (LMICs), and especially in Indonesia, DR prevalence was 30.7% among the diabetic population, with 7.6% suffering from vision-threatening DR. Age 50 years and older, diabetes duration of five to ten years or more than ten years, and postprandial blood glucose of 200 mg/dL or higher were all associated with a higher prevalence of any DR [5, 6]. As the number of diabetes patients grows, so does the demand for ophthalmic care and medical specialists (e.g., examinations, screening, treatments, and ophthalmologists). The screening target for DR cases is expected to reach 80% in 2030, as determined by the WHO [7, 8]. Therefore, a cost-effective and efficient program is needed to achieve this target. The development of optimal screening programs using accessible ophthalmic infrastructure resources is essential, considering it has been proven more cost-effective than no screening or opportunistic screening [9, 10]. Furthermore, ophthalmologists have been occupied with a massive, growing population of people with diabetes (PwD); it is not realistic to rely on ophthalmologists to examine the whole population, so they need help from community representatives to achieve health service targets, particularly in remote areas. Another issue regarding the screening process is awareness of DR as a complication of diabetes, owing to the lack of health literature in society. Most diabetic patients do not seek an eye examination until they develop symptoms. As a result, patients are frequently identified with DR at a severe, vision-threatening stage, making treatment challenging [11, 12]. Nevertheless, cost-effective screening necessitates high coverage, which is problematic in LMICs, where accessible eye health services are concentrated in cities and are difficult to reach for remote and rural populations [13]. In addition, the insufficiency of ophthalmologists in LMICs, particularly in Indonesia, is a significant barrier to tackling the rising prevalence of DR.
The ratio of ophthalmologists to population is 1:155,618, against the target of 1:250,000 set by the Ministry of Health in "Peta Jalan Penanggulangan Gangguan Penglihatan di Indonesia Tahun 2017–2030" [14]. The uneven distribution of ophthalmologists exacerbates the shortage, so people in remote villages struggle to access adequate eye care services [14]. Furthermore, screening at the primary level by non-medical personnel, as suggested by the WHO, would help cover a larger population. In response to anticipated interest among policymakers in many countries in resolving the health worker shortage, the World Health Organization (WHO) supported task shifting by cadres as a community-based intervention [15]. Cadres are non-medical individuals who undergo specific medical training to serve in designated roles within the community [16]. They encompass various titles adapted to specific countries, such as social workers, lady health workers, community health workers, and village health workers [11, 17]. The WHO defines task shifting as the delegation of responsibilities either to existing healthcare professionals with less training and credentials or to newly established cadres who receive competency-based training for particular activities [18]. Cadres should be part of the healthcare system, helping doctors and nurses with eye screening and health promotion tasks [16, 19]. In this way, they are seen as 'another pair of hands', as they contribute to providing care to underserved communities and increasing the health system's capacity to deal with financial and human shortages in a resource-poor situation [18]. Cadres have been regarded as social and cultural intermediaries and supporters, enhancing the link between the current health system and the community [11]. Therefore, their job should be facilitating community engagement and taking the necessary steps to overcome the social and cultural barriers contributing to poor health [20]. Community-based interventions involving non-physician cadres have been proposed to improve the management of DR. Screening at the primary level performed by cadres would help cover a larger population. In addition, community-based interventions can reactivate the healthcare pyramid to achieve universal coverage. However, the extent and nature of the literature on the role of cadres in community-based DR management, and the challenges associated with implementing such interventions, are poorly understood. This scoping review aims to map the available literature on the role of cadres in the community in DR management, with a particular focus on the challenges associated with implementing such interventions. By identifying gaps in the literature and highlighting areas for further research, this scoping review can inform the development of community-based interventions to improve the management of DR in LMICs.
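To make the scale of the screening problem concrete, the prevalence figures cited above (30.7% of the diabetic population with any DR, 7.6% with vision-threatening DR) can be turned into a rough burden estimate. The sketch below is a back-of-the-envelope illustration: the diabetic population size and the annual screening capacity per trained worker are hypothetical placeholders, not figures from this review.

```python
# Back-of-the-envelope estimate of DR screening burden using the
# prevalences cited above. Population and capacity inputs are
# hypothetical placeholders for illustration only.

ANY_DR_PREVALENCE = 0.307   # share of diabetics with any DR (cited above)
VTDR_PREVALENCE = 0.076     # share with vision-threatening DR (cited above)

def dr_burden(diabetic_population: int) -> dict:
    """Estimate expected DR case counts for a diabetic population."""
    return {
        "any_dr": diabetic_population * ANY_DR_PREVALENCE,
        "vision_threatening_dr": diabetic_population * VTDR_PREVALENCE,
    }

pwd = 10_000_000                     # hypothetical diabetic population
screens_per_worker_per_year = 2_000  # hypothetical capacity per trained screener

burden = dr_burden(pwd)
print(f"Expected any-DR cases: {burden['any_dr']:,.0f}")
print(f"Expected vision-threatening DR cases: {burden['vision_threatening_dr']:,.0f}")
print(f"Screeners needed for annual full coverage: {pwd / screens_per_worker_per_year:,.0f}")
```

Under these illustrative assumptions, screening every person with diabetes once a year would require thousands of trained screeners, which is the arithmetic behind the task-shifting argument developed in this review.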
Methods
Literature searching and concept of the scoping review
A scoping review was conducted using the Arksey and O'Malley methodological framework. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) criteria were also followed in conducting and reporting the scoping review [21, 22]. The 'PCC' mnemonic is recommended as a guide to constructing a clear and meaningful title for a scoping review; it stands for population, concept, and context. The population describes the essential characteristics of participants, which should be detailed, including age and other qualifying criteria that make them appropriate for the objectives of the scoping review and the review question. Next, the concept should be clearly articulated to guide the scope and breadth of the inquiry; this may include details on elements of a formal systematic review, such as the 'interventions' and/or 'phenomena of interest' and/or 'outcomes'. Last, the context may include cultural factors such as geographic location and/or specific racial or gender-based interests, and in some cases details about the particular setting. The population, concept, and context were "cadre", "role of cadre in the management of DR", and "LMICs", respectively.

Data sources and search strategy
The search for this scoping review was as comprehensive as possible, with no time limit on the data. The primary sources used were PubMed/MEDLINE, Embase, and the Cochrane Database of Systematic Reviews (CDSR) and Cochrane Central Register of Controlled Trials (CENTRAL) in the Cochrane Library. Manual searching and grey literature were obtained from reference lists of included articles. Details of the keywords used, with or without MeSH terms, are listed in Supplementary file 1. In addition, a manual search was performed by checking the reference lists of all retrieved studies to identify studies not yet included in computerized databases.

Study selection
Studies were eligible if they (1) involved non-medical and non-paramedic personnel (cadres), regardless of the term used in the research; (2) explained the cadre's role; (3) involved cadres in screening for DR detection, that is, assessment of the target population for the presence of DR by individuals other than ophthalmologists through history-taking; (4) were conducted in LMICs, to generate evidence to inform the development of national- or subnational-level DR screening and treatment programs; and (5) were published in English. Titles and abstracts that did not meet the eligibility criteria were excluded, and full-text articles were retrieved for those that did. The selected studies were rated using the Oxford Centre for Evidence-Based Medicine Levels of Evidence [23].

Data extraction
Data were gathered from each study that met the inclusion and exclusion criteria: the author(s), year of publication, study location in low- and middle-income countries, study populations (cadres/community health workers/lady health workers/village health workers), aims of the study, methodology, characteristics of cadres (including age, numbers, the scope of the task, and training process), the role of cadres in society, society's response towards cadres, and challenges/barriers.

Operational definitions
The operational definitions of the terminology used in this study are summarized in Supplementary file 2 [6, 18–23].
Results
Search results
The search yielded 2777 articles, including fifteen entries discovered from other sources (websites and bibliographies). Mendeley software was then used to remove duplicates (1845/2777; 66%). An additional 20 records were identified through other sources or manual hand searching. Next, 918 articles were excluded because they did not include DM, DR, or cadres in the title, or because the studies had not been completed. Finally, 34 articles were assessed for eligibility with full text, of which six met our inclusion criteria. Figure 1 summarizes the search and recruitment process for this study.

General characteristics of included studies
The included studies were published between 2005 and 2020 and comprised six published studies: interventional studies (n = 4), including one randomized clinical trial, and qualitative studies (n = 2), with levels of evidence of II–III. All included documents were in English. All studies came from LMICs, with India the most represented country; the other countries were Fiji, Pakistan, and Kenya. The distribution of the included studies is shown in Fig. 2.

Cadre terms and their purpose
A variety of terms for cadres is used across the included studies, tailored to the country and the conditions the cadres handle. Rani et al., for instance, refer to them as social workers, since they are voluntarily involved in the social community to assist medical workers in connecting with the community [16]. Fiji, on the other hand, defined cadres as community health workers (CHWs) or village health workers (VHWs), terms with the same meaning: health care providers who live in the community they serve and receive lower levels of formal education and training than professional health care workers such as nurses and doctors. In Pakistan, cadres are known as lady health workers (LHWs), who must have at least 8 years of schooling, be recommended by their community, and undergo extensive training [11, 24]. Every program health worker is allocated to a specific government health institution, where they receive training, a limited subsistence allowance, and medical supplies. Provincial and district coordinators supervise the Lady Health Worker Program, which holds quarterly review meetings and provides analytical input on health records from LHWs [25, 26]. In India, cadres are addressed as ASHAs, or accredited social health activists: women village residents who are married, widowed, or divorced, preferably between the ages of 25 and 45, selected and trained to act as mediators between the community and the public health system [27]. The majority of cadres involved in this research share several primary tasks. These individuals play a crucial role in serving local communities, particularly those residing in remote areas and historically marginalized populations. Firstly, they aim to raise awareness about diabetes and diabetic retinopathy within society. This awareness encourages individuals to willingly undergo screenings for early detection and prompt treatment of sight-threatening diabetic retinopathy. Secondly, the screening process helps initiate the referral process, categorizing patients based on their need for further evaluation, direct treatment, and even surgery.
While some cadres have relatively straightforward responsibilities, others, like India's ASHAs, have more complex duties. ASHAs, for instance, are required to familiarize themselves with the health status of villagers. They visit every family and conduct sample surveys of the village's population to assess their health [27]. Additionally, ASHAs perform basic tasks such as glucose screening and rudimentary visual testing, which significantly assist medical officers in their duties [16, 17]. Task shifting proves to be highly beneficial in redistributing the duties and responsibilities of ophthalmologists within the community. These cadres play a crucial role in maximizing the screening of eye problems at the grassroots level. The purpose and diversity of cadres, as well as their roles, are summarized in Table 1 in the comprehensive review of related studies.

Study results
The importance of cadres in the local community. DM is increasing around the world: the IDF estimates that the number of people with DM will reach 700 million in 2045 [1], and the increasing number of diabetics also increases the incidence of DR [4], an important cause of vision impairment and blindness [2]. On the other hand, there is strong evidence that good control of DM and associated systemic conditions reduces the incidence of sight-threatening retinopathy and improves prognosis after standard treatment of DR [7]. However, human resources are a constraint in LMICs: an acute shortage of retina specialists creates an overwhelming workload [14, 28], and health worker shortages can impede access to quality healthcare services, an impact exacerbated when such shortages are accompanied by unequal worker distribution [29, 30]. Screening at the primary level using non-ophthalmic, trained technicians would help cover a larger population, and, as noted in the Background, the WHO supports task shifting by cadres as a community-based intervention for this purpose [15].

Characteristics of cadres
Cadres in the included studies were generally over 18 years old, and one study included ages from 25 to 65 years. The studies were set in rural areas and included community settings. All the cadres were trained to perform their community tasks. The training included health education, such as screening, and inviting the community to participate in screening events and eye examinations. In addition, cadres supported the community in obtaining treatment, served as reminders for patients to adhere to treatment, and helped refer patients to higher-level health facilities for further examination by an ophthalmologist. One study, by Shah et al., found that cadres could perform simple vision checks for patients.
This skill is beneficial for task shifting and can be utilized as an initial vision screening stage. The role of the cadre can optimize screening of eye issues at the most basic level by sharing the responsibilities and functions of ophthalmologists in the community through task shifting. Furthermore, the educational media used by cadres to educate the community can take the form of leaflets, posters, education sheets, peer-to-peer discussions, annual meetings, and mobile phone groups tailored to the needs of and events provided to patients. These media are considered helpful for conducting health promotion in the community. Use of the local language is also critical for community health promotion; therefore, almost all cadres were required to speak local languages to participate in the research projects [31, 32]. Information regarding the cadres' characteristics is summarized in Table 2.

Requirements to be a cadre
All cadres in these studies were given an orientation to the anatomy and physiology of the eye, diabetes, and DR. They also received training in peer support, retinal screening, and communication [16]. Rani et al., in their randomized study conducted in India, stated that all cadres underwent intensive training for one week, eight hours a day. This training prepared them for the two most prominent events, World Diabetes Day and World Sight Day, at which they encouraged society to attend camps for screening [16]. The ages of cadres recruited across the studies ranged between 18 and 65 years. Older cadres, over 50, were still included because they had voluntarily served in the community for more than 15 years and were therefore considered to have a good understanding of community conditions [33]. The interventional studies by Singh et al. in India and Mwangi et al. in Kenya, and the non-randomized controlled study by Chariwala et al. in India, employed cadres with experience in diabetic care and training who were already performing several health promotion activities [11, 31, 32].

The role of cadres in the local community
These studies show that well-trained cadres, recruited through a selective process, could create societal awareness. Ram et al. conducted a survey of cadres evaluating the impact of training and discovered that the cadres felt the activity had increased their knowledge of diabetes, its signs and symptoms, prevention, care and management, and DR as a diabetic complication, and had also improved their communication skills [34]. As a result, people were motivated to attend screening events by the cadres' encouragement, and uptake numbers for screening increased, supporting effective diabetic retinopathy screening [33]. In addition, cadres could help health workers remind community members with high-risk diabetes, by telephone, to have regular monthly screenings. They also collaborated with primary health care workers to build a recall system to monitor and follow up on treated patients with diabetes and DR. Other activities of cadres in society included giving group health talks as peer supporters, holding informal discussions among PwD, planning for advocacy, and awareness-raising activities. Table 3 summarizes the role of cadres and the community's response to them.

Challenges
The first and foremost challenge found in the studies is inadequate infrastructure.
Access to primary health care is difficult and expensive for people living in rural areas [11, 34], for example, in regions of India's Maharashtra district far from primary care health facilities [11]; people will attend a facility if attendance is facilitated. As Singh et al. reported, uptake was significantly higher at primary health care (PHC) facilities when transport from villages to the PHC was provided [11]. Therefore, delivering care closer to the people is equally important. Their study showed greater acceptance of DR screening at PHCs closer to the residence; however, this is possible only with an adequate increase in infrastructure and skilled workforce [11, 12]. Moreover, in rural areas, health workers are sometimes not provided with adequate equipment: diagnostic tools and medicines are outdated, incomplete, and limited, and health facilities are distant. In contrast, in city and suburban areas, health equipment is fairly complete, medicines are easily found, and health facilities are close by [11, 35]. Lastly, awareness regarding the disease and its risk is low owing to the scarcity of literature [11, 12, 34]. Most people believe that a screening examination is essential only when eye problems appear [33]. Patients are therefore frequently identified with DR at a severe, vision-threatening stage, making treatment challenging [3, 5]. In addition, other challenges found in the studies originate from patients and cadres themselves [26]. First, high poverty rates in rural areas leave people unable to access health care because they cannot afford transportation to facilities that are often far away and expensive to reach [12, 33, 34]. Second, the lack of awareness within communities makes the screening process difficult. Third, trust issues, of society towards cadres and of cadres towards local medical health workers, aggravate the health care process. For instance, when cadres wanted to promote health, the community distrusted the cadres' identity as non-medical workers and required them to show identification. Cadres also encountered problems with other health workers in primary health care (nurses), whose sudden cancellation of meetings with the community resulted in community disappointment [33]. Last, the educational attainment issue in LMICs is central to this vicious cycle of health problems. A recent survey study revealed that most people of productive age (15–49 years old) had not completed primary education [36]. This problem also compounds the language barrier, which limits the community's ability to understand health promotion and the importance of eye examinations [33, 36]. A summary of the challenges to the role of cadres is provided in Table 4.
Discussion
DR can lead to potentially blinding problems, which can be avoided by early detection through routine dilated fundus checks and referral as necessary [37]. In diabetic care facilities, the necessity of early diagnosis and screening is highlighted [38]. The screening target for DR cases is expected to reach 80% in 2030 [7, 8]. It is challenging for governments, ophthalmologists, and eye care providers to carry out their societal duties. Previous studies have identified several barriers to implementing good DR screening, including the lack of ophthalmologists, uneven distribution of ophthalmologists, lack of awareness in society, accessibility and affordability of health care facilities, poor infrastructure, a lack of skilled workforce, and outdated technology [12, 34, 39]. Indeed, owing to the uneven distribution of ophthalmologists, most ophthalmologists are overwhelmed by a rapidly growing population of diabetics, making it impossible to expect them to examine the whole population. As a result, ophthalmologists require assistance from community members in achieving targeted goals, particularly in rural areas. The role of cadres in Indonesia has been recognized since 2019 through the regulation of the Minister of Health of the Republic of Indonesia number 8 of 2019 regarding community development in the health sector, in which a cadre is anyone chosen by the community and trained to mobilize the community to participate in community empowerment in the health sector [40]. Cadres play a role in improving the knowledge and ability of the community to recognize and address health problems. These community services include maternal health, infants, school children, productive age, the elderly, community nutrition, communicable and non-communicable diseases, and mental health [40]. Based on these functions and roles, task sharing is a solution to the growing shortage of eye care personnel to manage eye care services for patients with diabetes [17]. The WHO defines task shifting as the transfer of tasks to existing cadres of healthcare professionals with less training and credentials, or to newly developed cadres who obtain competency-based training for the specific activity [18]. With this task-shifting method, the objective of universal coverage can be reached, provided the screening is cost-effective, covers the target group, and is accepted by the community, particularly in the context of controlling blindness due to DR [41, 42]. Furthermore, one of the possible areas for task sharing or shifting is health education, and available cadres might be assigned new duties related to DR education and awareness. Shah et al. demonstrated the importance of cadres and diabetes educators in educating people with diabetes about DR and associated risk factors in Pakistan [17]. Similarly, cadres can help detect high-risk diabetics as part of their regular house-to-house visits for maternal and child health care [17]. The recommendations of various cadres for health education were similar to the role of cadres in mother and child health education in Pakistan [43, 44], as well as the Aravind Eye Hospital and LV Prasad Eye Institute models, in which health education is provided by trained community health workers chosen from the same community [45, 46]. Therefore, task sharing should not only be considered a solution for the ophthalmologist shortage but also part of an overall strategy for a successful health system.
Another study, by Singh, that emphasized the role of cadres (ASHAs) revealed that ASHA participation in giving health education to people with diabetes could enhance DR screening [11]. ASHAs may share tasks as local change agents, role models, and mentors [17]. Performing DR screening closer to health facilities, with transportation and health education, was more successful, resulting in increased DR screening uptake among patients known to have diabetes in remote areas [11, 17]. In India, the ASHA is the cadre that serves as a connection between the community and the health system, as an agent of social change for health promotion, and as the primary pillar for achieving government policy goals at the grassroots level [11]. DR is the sixth-leading cause of blindness and visual impairment in India [47]. Given the scale of the problem in India, DR screening gains great relevance; to be effective in a resource-constrained nation like India, such a service must be simple and low-cost. According to the Diabetic Retinopathy Study and the Early Treatment Diabetic Retinopathy Study, laser photocoagulation can prevent up to 90% of severe vision loss when administered on time [48, 49]. Therefore, ASHAs might bridge this knowledge gap and work as motivators to encourage more patients to seek DR screening at the primary care level at Community Health Centers [50, 51]. For the same reason, this can be a reference for Indonesia, where DR is also the fifth leading cause of blindness [14], so the cost-effective screening system can be studied and applied there. In addition, the recruitment of cadres (ASHAs) in India is inspiring and worth noting. Besides a cost-effective screening system, the ASHA model in India also provides an effective and efficient model. First, a cadre in India is part of the community: as described above, ASHAs are women village residents chosen and trained to mediate between the community and the public health system. Second, the selection process is rigorous and serious: ASHAs are chosen through a stringent selection process and receive 23 training days divided into five episodes, with training held continuously to equip them with essential skills and competence through on-the-job training [52, 53]. Third, ASHAs are educated in DR screening and knowledge, along with other useful understanding of frequently occurring health issues. After six months of working in the community, the ASHA receives several competencies, including HIV/AIDS topics such as sexually transmitted infections, respiratory tract infections, prevention, referrals, and newborn care. This characterizes cadres in India, since they are authorized under government laws and procedures [27]. Lastly, the ASHA gains thorough knowledge of the villagers' health information. After ASHAs are selected, the next phase familiarizes them with the villagers' health status and helps them adapt to rural settings. Even though an ASHA is from the same village, she may not be aware of or have information about the village's current health. Toward that purpose, she should be advised to visit every family and conduct a sample survey of the village's population to ascertain their health [50, 52].
In this way, she will have the opportunity to know the villagers, the common diseases that affect them, the number of pregnant women, the number of newborns, the educational and socioeconomic status of various groups of people, the health status of the weaker sections, particularly scheduled castes and scheduled tribes, and so on. ASHAs can also be given a basic structure for conducting surveys [27, 50, 52]. A study by Shukla et al. evaluated a training program to engage ASHAs in delivering primary eye care to vulnerable urban populations [51]. The ASHAs were given training on the basic structure and function of the eye using an eye model. They learned the definitions of blindness and visual impairment and their causes, and were given an overview of common eye conditions and their referral pathways [51]. Afterward, ASHAs were given hands-on training in screening the vision of individuals aged ≥ 40 using two "E" charts of 6/60 and 6/18 optotypes. They were provided with a training kit comprising a measuring tape, screening cards, referral slips, and educational material, and were given 3–4 months to screen vision. Apart from vision screening, ASHAs could indicate on referral slips if the person had diabetes, diagnosed glaucoma, symptoms of presbyopia (near vision difficulties after 40 years of age), or any other eye conditions [51]. As a result, ASHAs showed a significant increase in knowledge immediately after training, sustained even after a year. They enhanced their knowledge about common eye diseases such as cataracts, glaucoma, the effects of diabetes on the eyes, presbyopia, and conjunctivitis, and could talk confidently in the community about eye care. Referrals by ASHAs then increased by more than four times [51]. In conclusion, investing in scalable approaches such as cadre training is a critical first step in managing diabetes and DR in communities, particularly at the grassroots level in low-resource settings, by improving community awareness of DR and improving access to screening, diagnosis, and treatment [33]. Furthermore, a good-quality cadre will have an impact on society. People are motivated to attend screening events by the cadres' encouragement: Mwangi et al. revealed that eighty percent of members attended at least two-thirds of the meetings annually, and uptake numbers for screening increased, supporting effective DR screening [32]. Rani et al. reported detecting new cases of diabetes mellitus among the 4.5 percent who might be at risk for DR, and emphasized that community participation is the key to success for any awareness or screening model [16]. In addition to increasing awareness, cadres help make the screening process more cost-effective, because of the potential to increase screening uptake in a relatively short period (3 months), with a striking uptake in the first two weeks [11]. The incidence rate of eye examination after the start of the intervention was about six times higher than in the control group. Cadres could begin referring not only existing diabetes patients with vision loss or blindness for DR screening, but also those who were 40 years of age or older and had other symptoms, such as vision loss, blindness, or increased thirst, to visit a health facility for diabetes screening [33]. In contrast, providing incentives to ASHAs was of no extra advantage.
While incentives could increase screening uptake, their effect was more selective, particularly among people with uncontrolled diabetes, low literacy, and a longer duration of diabetes, compared with education-only groups [31]. Cadres can also be used effectively to refer PwD for DR screening, especially when a DR screening program is introduced in a population with low awareness and poor accessibility. This positive response from society is aided by the cadres' ability to speak the local language, so that the community can more easily understand the invitation and motivation of the cadre [33]. Cadres can be good supporters and excellent reminders for society as they gain respect and acknowledgment for their work, resulting in a positive response from the community [32, 33]. This becomes a lasting positive foundation in the community, and cadres come to feel like an essential part of the health system, creating a link between the community and diabetes healthcare professionals. The role of good cadres in the community is, however, not free from challenges in the field. As mentioned, there are several challenges in providing eye health services to reduce blindness due to DR. In addition to the unbalanced distribution of ophthalmologists and the overworked capacity of doctors to meet targets, other issues also affect this community-based health service. First, accessibility to primary health care remains difficult and unaffordable for people in rural areas [11]. Second, the infrastructure to support diagnostics and services is outdated compared with healthcare facilities in urban areas. As a result, the number of DR referrals appears low or goes unreported because of barriers to diagnosing and accessing referrals [32]. Third, awareness of the disease is low owing to the scarcity of literature and limited understanding of the importance of early screening to prevent complications of diabetes. Society believes that a screening exam is only necessary once ocular symptoms develop [32]. Consequently, people are often diagnosed with DR at a severe, sight-threatening phase, making management difficult [3, 5]. Furthermore, communities face additional challenges of their own. According to Ram et al., people living in poverty cannot afford a balanced diet for a healthy life to prevent diabetes, and transportation costs to health facilities and other expenses required for treatment are expensive and unaffordable for the community [33]. Sometimes, even if they can attend a health facility, they face long waiting times, which is frustrating for elderly patients who travel long distances [12]. This is compounded by the challenges cadres face when doing their tasks in the communities: individuals may distrust or question the cadre's role as a medical helper, so cadres must show their identity to the community. The language barrier of some cadres who do not speak the local language limits the community's ability to understand the context of health promotion and the importance of eye examinations [12, 33]. Consequently, people are sometimes late to present, or do not present at all, at a health facility despite being advised to do so by the cadre, even though the dangers of the disease have been explained to them.
When community members do manage to come to a PHC to receive counseling and basic examinations by health workers, the session is sometimes canceled at the last minute by the nurse; this erodes community trust, because people are disappointed that their attendance was a waste of time [ 12 , 33 ]. These problems have emerged in the field and are experienced by both cadres and the community. In addition, government supervision and appreciation of cadres are inadequate: many cadres who serve in the community are not given proper incentives and resources to fulfill their role [ 12 , 31 ]. In India, despite the cadres' excellent training and recruitment process, they receive small and irregular incentives, which can demotivate ASHAs. However, Chariwala noted that incentivizing cadres based on how much a program can afford to spend on health education, health promotion, and motivating people to get checked, attend PHC facilities, and obtain treatment at secondary and tertiary health services is sometimes not sustainable [ 31 ]. The provision of incentives may be helpful in the short term, but in the long run it needs to be phased out, with education and self-care through non-incentivized mechanisms driving PwD toward DR screening [ 31 , 51 ]. Strengths This is the first scoping review to explore the role of cadres in the community in DR management and its challenges in LMICs. This review has identified gaps in the existing literature regarding the role of the cadre in the management of DR and its challenges in LMIC settings, which is particularly important because it highlights the role of the cadre and the barriers to reducing blindness due to DR through community-based intervention. This study successfully explored task shifting by cadres in DR management. We also evaluated the outcomes of cadres' efforts to increase uptake of diabetic eye screening, which increases referrals, and found that cadres create relationships between the community and diabetes health professionals. In addition, this study describes the community's behavior toward cadres and the barriers that occur in the community, not only from the community's perspective but also from that of the cadres themselves. The findings of this study are useful for program planners and policymakers implementing task shifting by healthcare providers in their respective countries. Limitations The lack of good-quality publications on the cadre's role in DR in LMICs was a significant challenge in this scoping review; despite the authors' best efforts, relevant evidence may inevitably have been missed. Moreover, it is difficult to obtain good-quality journals meeting Scopus standards because most are behind paywalls. The variety of terms used for cadres also created obstacles, requiring time-consuming full-text review. Implications and recommendations This study offers a potential recommendation for LMICs as a model for implementing a task-sharing system carried out by non-health workers and as a guideline for community-based interventions. The DR screening program in each country could be maximized by drawing on the mapping presented in this research. This community-based intervention can also reactivate the healthcare pyramid so that universal coverage can be achieved.
The recruitment, training, and management of cadres described in this study can inform efforts to involve cadres in managing DR. In this study, the role of cadres elicited a good response in the community. The community's enthusiasm for eye screening, which resulted in a high referral coverage rate, could be one way to achieve the DR screening coverage target of 80% by 2030. Nevertheless, the role of this cadre should not be forgotten by the government and health authorities in the processes of supervision, training, protection, and welfare; these need to be outlined in detail and with clarity in laws and policies governing the operational standards of the cadres' work. The role of cadres is therefore expected to be not only to assist health workers but also to act as a bridge between the community, health workers, and the government to reduce the country's blindness rate.
Conclusion The current study highlighted significant gaps in the literature on the cadre's role as a community-based intervention in managing DR in LMICs. From this study, we find that cadres can motivate people to attend diabetic eye screening events: the incidence rate of eye examinations was about six times higher than before the start of the intervention. Cadres could begin referring not only existing diabetes patients with vision loss or blindness for DR screening but also people 40 years of age or older with other symptoms, such as vision loss, blindness, or increased thirst, to a health facility for diabetes screening [ 33 ]. Education is a possible area for task sharing; moreover, 70% reported that the cadre could perform the task of vision testing [ 17 ]. Additionally, the cadre can be a good peer supporter and an excellent reminder for society, as cadres gain respect and acknowledgment for their work, resulting in a positive response from the community. Hence, developing a targeted research agenda based on recent results is a crucial next phase in generating the essential improvements within and across LMICs to address the existing and upcoming issues of DM and DR. Further research is needed to develop a body of evidence adequate to support cost-effective screening services and cadre-related policy development in LMICs. Accordingly, the national government and district governments should take a leadership role in developing and implementing comprehensive policies that make DR prevention a national policy.
Introduction Diabetes is a serious public health problem, with low- and middle-income countries (LMICs) bearing over 80% of the burden. Diabetic retinopathy (DR) is one of the most prevalent diabetic microvascular complications, and early diagnosis through eye screening programs for people with diabetes is critical to prevent vision impairment and blindness. Community-based interventions, including non-physician cadres, have been recommended to enhance DR care. Methods A review protocol was defined and a scoping review was conducted. The population, concept, and context were “cadre”, “role of cadre in the management of DR”, and “LMICs”. Data were collected from database searches, including grey literature. Results Cadres can motivate people to attend diabetic eye screening events: the rate of eye examinations was about six times higher than before the start of the intervention. Health education is a possible area for task sharing, and cadres were also reported to be able to perform vision testing. The cadre can be a good supporter and a good reminder for society. However, several challenges were identified, of which inadequate infrastructure was the foremost. Other challenges encountered in the studies include poverty, lack of community awareness, trust issues, and low education levels contributing to poor health. Conclusion The current study highlighted significant gaps in the literature on the role of cadres as a community-based intervention in managing DR in LMICs. Further research is needed to develop evidence to support cost-effective screening services and cadre-related policy development in LMICs. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-024-17652-5. Keywords
Supplementary Information
Abbreviations ASHA: Accredited social health activist; CHC: Community health centers; CHW: Community health workers; DM: Diabetes mellitus; DR: Diabetic retinopathy; DSG: Diabetes support groups; IDF: International Diabetes Federation; IEC: Information, education, and communication; LHW: Lady health worker; LMICs: Low- and middle-income countries; PCC: Population, concept, context; PHC: Primary health care; PwD: People with diabetes; RCT: Randomized controlled trial; VHW: Village health workers. Acknowledgements None. Dual publication The results, data, and figures presented in this manuscript have not been previously published and are not currently under review or consideration with another publisher. Third party material All of the material is owned by the authors and/or no permissions are required. Authors’ contributions I.S.S, Y.D.L, and A.A.V made substantial contributions to the conception and design of the work. I.S.S acquired the data, prepared the tables and figures, and drafted the manuscript. I.S.S and Y.D.L analyzed, interpreted, critically reviewed, and revised the work. A.A.V contributed to the discussion. All authors read and approved the final manuscript. Funding The named authors received no financial support for the research, authorship, and/or publication of this article. Availability of data and materials All data generated or analyzed during this study are included in this published article and its supplementary information files. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Public Health. 2024 Jan 15; 24:177
oa_package/d5/31/PMC10789068.tar.gz
PMC10789069
0
BMC Infectious Diseases (2023) 23:520 10.1186/s12879-023-08496-2 The original publication of this article contained an incorrect author name. The incorrect and correct information is listed in this correction article. The original article has been updated. Incorrect: Abozer Y Eldedery Correct: Abozer Y Elderdery
CC BY
no
2024-01-16 23:45:34
BMC Infect Dis. 2024 Jan 15; 24:85
oa_package/ce/db/PMC10789069.tar.gz
PMC10789070
0
Introduction Parasitic infestation of ocular structures can cause significant intraocular inflammation with subsequent complications. Presumed trematode-induced uveitis is a distinct clinical entity, most frequently observed in children. Granulomatous inflammation, usually presenting as one or more pearl-like nodules in the anterior chamber (AC), is the hallmark of this condition. Less frequently, subconjunctival or corneal lesions can also occur [ 1 – 4 ]. In rare instances, intermediate uveitis with snow banking and posterior uveitis can be the presenting feature of this disease [ 5 ]. Children who swim in rivers harboring trematode-infected snails are at high risk of infestation by several waterborne trematodes [ 6 ]. Ocular inflammation may be caused by tissue damage from the parasite and its toxic products or may be secondary to the host immune response [ 7 ]. Procerovum varium, a trematode, was identified as the causative agent responsible for AC granuloma in children in South India [ 8 , 9 ]. Schistosomiasis, also known as bilharziasis, is an endemic disease in Egypt caused by trematodes of the genus Schistosoma. Ocular involvement in bilharziasis is relatively rare, occurring mainly as granulomatous reactions induced by ova or adult worms in different ocular tissues. The first case of bilharzial conjunctival granuloma was reported in Egypt by Sobhy in 1928. A myriad of ocular lesions, including keratitis, chronic conjunctivitis, chronic uveitis with complicated cataract, vitreous opacities, posterior uveitis in the form of chorioretinitis, subretinal granulomas, and pre-retinal hemorrhage, has been reported in the literature [ 10 , 11 ]. Schistosoma mansoni has also been observed and isolated from the AC angle [ 12 ]. There is a significant association between patients' residence close to ponds and snail habitats and these granulomatous reactions [ 13 ]. The cercariae (infective stage) of the trematode can reach maturity and lay ova directly in the veins of the richly vascularized conjunctiva, subsequently leading to the development of subconjunctival granuloma. In some cases, cercariae can penetrate limbal structures and gain access to the anterior chamber, leading to the development of AC granuloma with or without granulomatous uveitis. This is the most widely accepted theory explaining how the ova of the trematode or adult worms reach the eye [ 10 , 14 , 15 ]. Many treatment modalities for AC granulomas have been reported; topical corticosteroids are considered the standard line of treatment. Resistant cases may be treated with oral prednisone starting at a daily dose of 1 mg/kg, gradually tapered over 3–6 weeks [ 16 ]. Consequently, many cases may develop steroid-related complications such as cataract and glaucoma [ 17 , 18 ]. In addition, persistent intraocular inflammation may contribute to the development of cataract in these cases [ 18 ]. This study aims to evaluate the outcome of cataract surgery in children with presumed trematode-induced granulomatous anterior uveitis.
Patients and methods All study procedures adhered to the tenets of the Declaration of Helsinki. Approval of our review was obtained from the IRB of the Faculty of Medicine, Assiut University (IRB No. 17200420). The parents of study subjects provided written informed consent before acquisition of the data. A retrospective chart review was performed of patients who presented to the uveitis service at the outpatient clinic of the Department of Ophthalmology, Assiut University Hospital, Egypt, between December 2020 and December 2021 with cataract secondary to presumed trematode-induced AC granuloma and who underwent cataract surgery. The inclusion criterion was children less than 15 years old with controlled uveitis for at least three months before cataract surgery. Exclusion criteria were active anterior uveitis within the three months before enrollment in the study and evidence of trauma. Patients with other ocular pathology that could influence the final visual acuity (VA), such as macular scars, glaucomatous optic nerve changes, and central corneal opacity, were also excluded. Data collected from the patients included demographic characteristics, medical and ocular history, and the results of a complete baseline ocular examination, including measurement of best-corrected visual acuity (BCVA) using Snellen's chart, which was converted to the logarithm of the minimum angle of resolution (logMAR) for statistical analysis, slit lamp anterior segment examination (including cornea, conjunctiva, eyelids, and AC), intraocular pressure measurement, fundus examination, and optical coherence tomography (OCT) of the macular area, performed before surgery when the ocular media were sufficiently clear to obtain an adequate OCT signal. Preoperative control of uveitis for at least three months was achieved using topical, peri-ocular, or systemic steroids, or a combination of these. Intraocular lens (IOL) power calculation was performed using the IOL Master with multiple IOL calculation formulas. Specular microscopy was performed to evaluate the corneal endothelial status. The surgical technique started with the creation of a 2.4-mm clear corneal incision using a microkeratome and two side-port incisions using an MVR blade. Subsequently, the anterior chamber (AC) was filled with an ophthalmic viscoelastic device (OVD). This was followed by manual stretching and dilatation of the pupil using two push–pull instruments and peeling of the iris and pupillary membranes with capsulorhexis forceps. In cases with dense white cataracts, trypan blue dye was injected and the excess dye washed off to allow visualization of the anterior lens capsule during creation of the continuous curvilinear capsulorhexis (CCC), which was started and completed using the capsulorhexis forceps. Irrigation-aspiration of the soft lens matter and implantation of a hydrophobic or hydrophilic IOL into the capsular bag were performed (Fig. 1 ). A subconjunctival injection of dexamethasone was given at the conclusion of surgery. Postoperative medications included topical moxifloxacin five times a day, a topical cycloplegic mydriatic three times a day, and difluprednate five times a day for one week with gradual tapering. Topical fluorometholone was administered twice daily for one month, and a systemic steroid (1 mg/kg) was administered for one week followed by gradual tapering.
Postoperative follow-up included measurement of BCVA, slit lamp examination including fundus examination, and intraocular pressure measurement using a handheld rebound tonometer. Assessment was repeated at one week, one month, three months, and six months postoperatively (Fig. 2 ). Statistical analysis Data were collected in an Excel spreadsheet and analyzed using SPSS statistical software version 22 (IBM, Chicago, IL). Changes in the variables were compared with baseline values using paired t-tests.
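For readers who want to reproduce this kind of analysis, the minimal sketch below illustrates the Snellen-to-logMAR conversion and paired t-test described above. It uses Python with SciPy rather than SPSS, and all acuity values are invented placeholders, not the patient data reported in this study.

```python
import math
from scipy import stats

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    # logMAR = log10(denominator / numerator); e.g., Snellen 6/60 -> 1.0
    return math.log10(denominator / numerator)

# Hypothetical pre-/post-operative BCVA for five eyes (illustrative values
# only; 2.0 and 3.0 are commonly used placeholders for very low acuities).
pre_op = [snellen_to_logmar(6, 60), snellen_to_logmar(6, 36),
          snellen_to_logmar(6, 60), 2.0, 3.0]
post_op = [snellen_to_logmar(6, 9), snellen_to_logmar(6, 12),
           snellen_to_logmar(6, 6), 0.3, 0.5]

# Paired t-test comparing follow-up acuity against baseline, as in the
# statistical analysis described above.
t_stat, p_value = stats.ttest_rel(pre_op, post_op)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```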
Results Demographic and baseline characteristics Study subjects had a mean age of 12 ± 1.58 years (range, 10–14 years); all patients were boys with unilateral involvement, and all resided in rural villages around Assiut city. All participants had a positive history of swimming in local ponds and canals and developed uveitis after exposure. The duration of symptoms was 22 ± 13 days, and the interval between control of uveitis and cataract surgery was 5 ± 4.3 months. The demographic and baseline characteristics are summarized in Table 1 . Visual acuity The mean preoperative BCVA was 2.4 ± 0.894 logMAR, and the mean 6-month postoperative BCVA was 0.22 ± 0.192 logMAR. A statistically significant improvement in VA was observed in the sixth postoperative month compared with the baseline measurements ( p = 0.004), as shown in Tables 2 and 3 . The mean postoperative spherical equivalent was 0.2 ± 0.64 D. Endothelial cell count The mean endothelial cell count at baseline was 2928 ± 214 cells/mm², and the 3-month postoperative endothelial cell count, recorded in 4 patients, was 2817 ± 458 cells/mm². No statistically significant difference was observed between the preoperative and postoperative endothelial cell counts ( p = 0.696). Changes in endothelial cell count are summarized in Table 4 . Central macular thickness (CMT) The mean postoperative CMT was 259.75 ± 32.26 μm, indicating that cystoid macular edema did not occur as a postoperative complication. Posterior capsular opacification (PCO) PCO was observed in 2 patients. Reactivation of uveitis was reported in 2 patients one month postoperatively: trace cells were observed in one patient, and mild flare with 1+ cells in the other. Details of all five patients are summarized in Table 5 .
Discussion Granulomatous anterior uveitis in children is presumed to be caused by trematode infection, especially in rural areas. There is a significant risk associated with swimming in unsanitary local ponds or rivers, as reported by several studies [ 1 , 3 , 5 , 7 , 19 ]. Cataract is the main complication in patients with uveitis and the leading cause of vision loss, and intraocular inflammation is considered a significant risk factor in cataract surgery [ 20 ]. All patients in our analysis were males for cultural reasons, as girls are rarely allowed to swim in rivers or in public. All patients were below 15 years of age, and all cases showed unilateral involvement with only a single nodule at the lower portion of the AC (at the 6 o'clock position). This condition is reported to have a high incidence in Egypt, accounting for 22.2% of anterior uveitis cases in one Egyptian study [ 4 ]. All cases in this study showed significant diminution of vision secondary to the presence of complicated cataract (mean preoperative VA, 2.4 ± 0.894 logMAR); in other studies, however, diminution of vision was secondary to increased corneal thickness, flare, and cells (VA 0.3 logMAR) [ 21 ]. This could be attributed to different inclusion criteria, as we included patients only after control of active uveitis. Objectively, all patients showed significant improvement in postoperative VA (0.22 ± 0.647 logMAR) ( P < 0.001). Other studies have shown results similar to the present study, with a significant difference between preoperative and postoperative BCVA ( P < 0.001) [ 22 ]. Strict control of preoperative inflammation has been shown to significantly improve the visual outcomes of cataract surgery with IOL implantation [ 23 ]. In this study, patients with trematode-induced anterior uveitis showed better visual results than patients with other forms of uveitis because strict preoperative control of inflammation is possible, with less liability to recurrence after control of the disease compared with other uveitic entities (especially autoimmune uveitis), less severe pre-existing vision-limiting pathology, and less postoperative inflammation. Patients with complicated cataracts are expected to experience a higher incidence of postoperative complications than patients with age-related cataracts. Moreover, reactivation of uveitis after surgery carries a higher risk of complications and may contribute to unfavourable outcomes [ 24 ]. In our study, the most common postoperative complications were PCO (observed in two patients) and reactivation of uveitis; there was no elevation of intraocular pressure (IOP) and no CME. A previous study reviewed 17 patients who underwent coaxial micro-incision cataract surgery and reported that postoperative complications were rare; IOP elevation and uveitis exacerbation were each observed in one eye [ 25 ]. In the current study, no statistically significant difference was observed between preoperative and postoperative endothelial cell counts. Our results suggest that perioperative systemic therapies and recurrent uveitis are important risk factors for postoperative complications such as PCO, CME, and elevated IOP. In contrast, previous studies have indicated that the administration of prophylactic corticosteroids is associated with decreased incidences of Nd:YAG capsulotomy and CME [ 26 ]. There are some limitations to our study, namely the small sample size, the retrospective design, and the short follow-up period.
In conclusion, this study showed that most patients with trematode-induced anterior uveitis experienced significant improvement in visual acuity after cataract surgery compared with patients with other uveitis entities. Patients with relapsing inflammation were at increased risk of developing postoperative complications, including PCO and CME. Extensive randomized, double-blind prospective studies are warranted to identify the factors predisposing to the development of surgical complications.
Purpose To examine the 6-month visual outcomes and complications following cataract surgery in patients with presumed trematode-induced granulomatous anterior uveitis. Setting Assiut University Hospital, Assiut, Egypt. Design Retrospective, non-comparative case series. Methods Patients presenting with significant cataract secondary to uveitis caused by trematode-induced anterior chamber granuloma were included in this study. Cases with active anterior uveitis within the 3 months preceding surgery and those with a history of trauma were excluded. Data collected included demographic characteristics, history of the condition (including when uveitis started and treatment received), and history of other health conditions that may be relevant to uveitis. A complete ophthalmologic examination, including assessment of best-corrected visual acuity (BCVA) and macular OCT where possible, was performed; these assessments were repeated 1 week, 1 month, 3 months, and 6 months after surgery. Specular microscopy was performed preoperatively and 3 months after surgery. Patients underwent cataract surgery with posterior chamber intraocular lens implantation, and statistical analysis was performed to compare preoperative and postoperative BCVA and corneal endothelial cell counts. Postoperative complications were recorded. Results Five eyes of 5 patients were included in the study. All study eyes showed improvement in postoperative visual acuity. A statistically significant improvement was observed in VA in the sixth postoperative month compared to the baseline measurements ( p = 0.004). No statistically significant difference was observed between the preoperative and postoperative endothelial cell counts ( p = 0.696). Cystoid macular edema did not occur as a postoperative complication. Conclusion Visual outcomes of cataract surgery in eyes with presumed trematode-induced granulomatous anterior uveitis are favorable. No sight-threatening complication was observed in our series. Keywords Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB).
Abbreviations BCVA: Best-corrected visual acuity; AC: Anterior chamber; CCC: Continuous curvilinear capsulorhexis; OVD: Ophthalmic viscoelastic device; IOL: Intraocular lens; HM: Hand motion (good projection); OCT: Optical coherence tomography; CMT: Central macular thickness; CME: Cystoid macular edema; PCO: Posterior capsular opacification; ECC: Endothelial cell count. Acknowledgements Not applicable. Authors’ contributions Mona Abdallah: data acquisition, drafting the manuscript, statistical analysis, and data interpretation. Ashraf K Al-Hussaini: study idea, critical revision of the manuscript, performing surgery, and approval of the final version. Wael Soliman: critical revision of the manuscript, study design. Mohamed G.A. Saleh: conceptual design of the study, performing surgery, and writing the manuscript. Funding Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB). This is to certify that: (1) the article has not been presented in a meeting; (2) the authors did not receive any financial support from any public or private sources; (3) the authors have no financial or proprietary interest in any product, method, or material described here. Availability of data and materials All data generated or analyzed during this study are included in this article. Further inquiries can be directed to the corresponding author. Declarations Ethics approval and consent to participate This study was performed in accordance with the tenets of the Declaration of Helsinki and approved by the Institutional Review Board of the Faculty of Medicine, Assiut University (IRB No. 17200420). Parents of all patients provided written informed consent. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:34
BMC Ophthalmol. 2024 Jan 15; 24:21
oa_package/7d/3e/PMC10789070.tar.gz
PMC10789071
0
Background Identifying the genetic determinants of complex traits is challenging because their contributions are often diluted across many variants of small effect. Single variants of large effect are simpler to identify and have been well-characterized [ 1 – 3 ], but genome-wide association studies (GWAS), which test millions of variants for statistical association with a trait, have demonstrated that these large-effect loci are rare. Moreover, the vast majority of trait-associated variants are located in non-coding regions [ 4 – 7 ]. For the < 10% of GWAS hits in protein-coding regions, inferences about their evolutionary history and mechanisms of action are often readily available thanks to studies that have focused on these regions. The remaining > 90% of GWAS hits in non-coding regions are thought to affect traits by altering gene expression levels, but causal mechanisms are obscured by a combination of linkage disequilibrium (LD), a genome-wide phenomenon in which nearby variants tend to be inherited together leading to a correlation of their effects [ 8 ], and the paucity of information about non-coding relative to coding regions. Even before the GWAS era, variants with highly divergent allele frequencies between populations, measured by estimates of Wright’s fixation index ( F ST ) [ 9 ], were found to be enriched in disease-associated genes [ 10 ]. Since then, genome-wide scans using associations of allele frequencies with environmental variables as evidence of natural selection have shown signals of positive selection to be somewhat enriched in coding regions [ 11 – 15 ], and even more enriched in cis-regulatory elements [ 16 ]. Overall, the preponderance of non-coding variants implicated in human GWAS is paralleled by a similar trend among human genetic variants involved in local environmental adaptation [ 16 ]. Intersecting non-coding GWAS hits with information from assays measuring regulatory activity, such as quantitative trait loci (QTL) for molecular-level traits (mol-QTL), has been effective at pinpointing causal variants and molecular mechanisms underlying complex trait variation [ 17 – 22 ]. QTL studies using gene expression as the trait (eQTL) test all variants within a predefined distance (usually one megabase (Mb)) of a gene for an association with that gene’s expression, so each eQTL is linked to a target gene [ 20 ]. Since transcription factor (TF) proteins bind gene regulatory elements such as enhancers in a sequence-dependent manner to regulate transcription, eQTL can act by altering a TF’s binding affinity (i.e., one allele has higher binding affinity than the other, termed a bQTL) [ 18 ]. In most cases, increased TF binding is associated with decompaction of chromatin, the DNA-protein complex that packages meters of linear DNA into a nucleus a few microns wide. This opening of the chromatin allows more TFs to bind to previously inaccessible stretches of DNA and to each other in a positive feedback loop of chromatin accessibility. Thus, chromatin accessibility can be used as a proxy for regulatory activity to identify enhancers and their relative activity levels, as is accomplished with the Assay for Transposase-Accessible Chromatin (ATAC-seq) [ 23 ]. Since enhancers operate in three-dimensional space and can contact target gene promoters ( cis -regulation) several Mb away, ATAC-seq and high-throughput methods based on Chromatin Conformation Capture (HiC) [ 24 – 27 ] can be combined to identify enhancer-promoter interactions [ 18 , 19 , 22 , 28 ].
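As a concrete illustration of the F ST statistic introduced above, the following minimal sketch computes a simple two-population Wright's F ST from allele frequencies. The Weir and Cockerham estimator used in this study's analyses (see Methods) additionally corrects for sample size, so treat this only as a conceptual sketch.

```python
def wright_fst(p1: float, p2: float) -> float:
    """Simple two-population Wright's FST from allele frequencies.

    FST = (HT - HS) / HT, where HT is the expected heterozygosity of the
    pooled population and HS the mean within-population heterozygosity.
    """
    p_bar = (p1 + p2) / 2
    h_t = 2 * p_bar * (1 - p_bar)                       # total heterozygosity
    h_s = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2   # mean within-pop
    return 0.0 if h_t == 0 else (h_t - h_s) / h_t

# A highly divergent site (hypothetical allele frequencies in two groups):
print(wright_fst(0.9, 0.1))  # ~0.64
```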
The activity-by-contact (ABC) model was recently developed to predict enhancer-target gene pairs in a given cell type under the premise that the extent to which an element regulates a gene’s expression depends on its strength as an enhancer (activity level), scaled by how often it is near that gene’s promoter in 3D space (contact frequency) [ 29 ]. HiChIP, which combines HiC with chromatin immunoprecipitation (ChIP) on a protein of interest, is well-suited to generate input for this model, particularly when performed on the histone modification H3K27ac, a hallmark of active chromatin. Since the end product is paired-end reads from H3K27ac-associated long-range interactions, H3K27ac HiChIP provides a simultaneous measure of activity level and contact frequency without the high sequencing depth and cell number required to generate the all-by-all interaction maps of HiC [ 25 , 30 ]. The ABC model has been shown to attain peak performance with chromatin accessibility and HiChIP data as input and outperforms other enhancer target gene prediction methods [ 29 ], making it a powerful metric for hypothesis generation about the mechanisms of non-coding GWAS hits [ 31 ]. Additional support for the mechanisms and causality of these hits can come from intersecting molecular-level QTL with putative locally adaptive variants [ 16 ]. However, since selection acts on fitness, its impact may be more directly observable at the level of chromatin activity than at the level of DNA sequence, where it is relatively more diluted (Fig. 1 a, left). For example, chromatin activity is a better predictor of TF binding than DNA sequence since we do not fully understand the cis -regulatory “code” that governs TF binding [ 32 ]. This can lead to cases where sequence-level changes, even those disrupting TF binding sites, do not correspond to changes in regulatory function and gene expression when regulatory activity is buffered by the binding of multiple TFs. Studies in primates have suggested that directional selection may have contributed to differences in chromatin activity that distinguish each species [ 33 ]. For example, sites with decreased chromatin accessibility in human relative to chimpanzee and rhesus macaque white adipose tissue tend to be cis- regulatory elements for lipid metabolism-related genes, consistent with humans’ greater body fat percentage [ 34 ]. Such analysis of chromatin activity divergence has not been conducted on more recent evolutionary timescales within the human lineage, where mechanistic insights could aid understanding of ancestry-dependent disease prevalence [ 35 – 37 ]. Here, we use ATAC-seq and H3K27ac HiChIP, a combined measure of activity and contact frequency [ 25 ], to generate ABC scores linking candidate cis- regulatory elements (CREs) to candidate target genes (hereinafter “target genes”) in eight populations of African or European ancestry. We then decompose these scores into their activity and contact components to identify differential CREs (diff-CREs) for each score between individuals of African and European ancestry (Fig. 1 a). Intersecting our diff-CREs with bQTL reveals three transcription factors (NF-κB, JunD, and PU.1) whose binding sites show signs of lineage-specific selection for differences in binding between the African and European ancestry populations. Our findings illustrate the utility of ABC scores to identify previously unappreciated population-specific activity of CREs, their target genes, and potential mechanisms of gene regulation.
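A toy numeric illustration of the ABC premise may help before the Methods: each element's contribution to a gene is its activity multiplied by its contact frequency with the gene's promoter, normalized over all nearby elements. All values below are invented for illustration.

```python
# Toy illustration of the activity-by-contact (ABC) premise.
# Values are invented; element names are hypothetical.
elements = {
    "enhancer_1": {"activity": 12.0, "contact": 0.8},
    "enhancer_2": {"activity": 30.0, "contact": 0.1},
    "promoter":   {"activity": 25.0, "contact": 1.0},
}
total = sum(e["activity"] * e["contact"] for e in elements.values())
for name, e in elements.items():
    print(name, round(e["activity"] * e["contact"] / total, 3))
# A weaker enhancer in frequent promoter contact (enhancer_1) can
# out-contribute a stronger but rarely contacting one (enhancer_2).
```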
Methods Cell culture and ATAC-seq For detailed methods on cell culture conditions and processing, see our previous study [ 19 ]. Briefly, 2×10³ cells from each LCL were collected and pooled by population after growth to 6–8×10⁵ cells/mL. To prevent disproportionate cell line growth within pools throughout the collection and pooling process, sub-pools were frozen in liquid nitrogen at −180 °C. After collection of all LCLs, sub-pools were combined by population, and cells from each of the 10 pools were purified, isolated, and split into two replicates of 10⁵ cells each and pelleted according to [ 19 ] for a total of 20 samples. ATAC-seq was performed using the protocol from [ 23 ] in which each sample was resuspended in 100 μl of transposition mix containing 5 μl of Tn5 Transposase and incubated in a ThermoMixer for 30 min at 37 °C and 750 rpm. Transposed DNA fragments were then eluted and PCR-amplified with total cycles determined according to [ 23 ]. Following two PCR cleanup steps, purified ATAC-seq libraries were sequenced on an Illumina HiSeq 4000 to generate 2×150 bp paired-end reads. HiChIP We thawed each −180°C-stored sub-pool described above and in [ 19 ] on ice, combined them by population, and removed dead cells. As for ATAC-seq, to avoid disproportionate cell line growth we did not passage the cells before or after combining sub-pools. We then split each population pool into 2 replicates for crosslinking and HiChIP. For more detailed HiChIP methods, see [ 30 ]. Briefly, cells from each pool were pelleted and resuspended in 1% formaldehyde (Thermo Fisher) for crosslinking at a volume of 1 ml per million cells with incubation at room temperature for 10 min with rotation. Formaldehyde was then quenched with glycine at a 125-mM final concentration with 5 min room temperature incubation with rotation. Cells were then pelleted, PBS-washed, re-pelleted, and either used immediately in the HiChIP protocol or stored at −80 °C for HiChIP later. HiChIP was performed as described in [ 25 ] with H3K27ac antibody (Abcam, ab4729) and the following modifications. We used a 2 min sonication time, 2 μg of antibody, and 34 μl of Protein A beads (Thermo Fisher) for chromatin-antibody complex capture. Post-ChIP Qubit quantification was performed to determine the amount of Tn5 used and the number of PCR cycles performed for library generation, accounting for varying amounts of starting material. We performed size selection by PAGE purification (300–700 bp) to remove primer contamination and sequenced all libraries on an Illumina HiSeq 4000. ATAC-seq read mapping For the complete mapping pipeline see https://github.com/kadepettie/popABC/tree/master/mapping , which contains a nextflow implementation of the steps described in [ 19 ]. Cutadapt was used to remove sequencing adapters (arguments: -e 0.20 -a CTGTCTCTTATACACATCT -A CTGTCTCTTATACACATCT -m 5). PCR duplicate reads generated during library preparation were removed using picard MarkDuplicates (v2.18.20) ( http://broadinstitute.github.io/picard/ ) (arguments: SORTING_COLLECTION_SIZE_RATIO=.05 MAX_FILE_HANDLES_FOR_READ_ENDS_MAP=1000 MAX_RECORDS_IN_RAM=2500000 OPTICAL_DUPLICATE_PIXEL_DISTANCE=100 REMOVE_DUPLICATES=true DUPLICATE_SCORING_STRATEGY=RANDOM). To minimize allelic mapping bias, a modified version ( https://github.com/TheFraserLab/WASP/tree/atac-seq-analysis/mapping ) of the WASP pipeline [ 38 ] was used for read mapping.
Reads were aligned to the hg19 genome using bowtie2 [ 51 ] (arguments: -N 1 -L 20 -X 2000 --end-to-end --np 0 --n-ceil L,0,0.15) and filtered to a minimum mapping quality of 5 using samtools (v1.8) [ 52 ]. HiChIP read mapping HiChIP reads were mapped using the nf-core [ 53 ] HiC-Pro [ 54 ] mapping pipeline ( https://github.com/nf-core/hic ) modified to include the same version of the WASP pipeline as was used for ATAC-seq to minimize allelic mapping bias ( https://github.com/kadepettie/popABC/tree/master/hicpro ). In this version, however, allele-swapped remapping was performed separately on each read end, after which reads were re-paired, to accommodate the long-range nature of the paired-end reads as in the original HiC-Pro pipeline. After filtering reads down to valid cis interaction pairs, we took the raw 5-Kb resolution contact maps (the “.matrix” and corresponding “.bed” file output from process “build_contact_maps”) as input to our differential activity-by-contact pipeline. Differential activity-by-contact Candidate element definition We used the ABC model ( https://github.com/broadinstitute/ABC-Enhancer-Gene-Prediction ) to predict enhancer-gene connections in each pooled LCL population replicate (sample), with modifications to facilitate comparison of AFR and EUR population samples. For the complete differential activity-by-contact (diff-ABC) pipeline see https://github.com/kadepettie/popABC/tree/master/selection_1000G (“ABC_pipeline.nf”). We used Genrich (v0.5_dev, available at https://github.com/jsh58/Genrich ) to call AFR and EUR ATAC-seq peaks jointly on the 8 samples from each with default parameters except for the following: -y -j -d 151. We then summed the reads in each peak across the corresponding 8 samples, kept the top 150,000 by read count, and resized them to 500 bp centered on the peak summit. To ensure equal contribution from peaks called separately in AFR and EUR to our candidate element input to the ABC model, we again made separate rankings by read count for each, then interleaved the two lists evenly by ranking, merging any overlaps, and taking the top 150,000 elements. We next added 500 bp gene TSS-centered regions and removed any from the resulting list that overlapped regions of the genome with known signal artifacts ( https://sites.google.com/site/anshulkundaje/projects/blacklists ) [ 55 , 56 ]. Overlapping regions resulting from summit extensions and/or TSS additions were merged immediately following each of these steps. We defined promoter elements as those within 500 bp of an annotated TSS and the rest as enhancer elements. Score component normalization To ensure comparability of ABC scores between populations and replicates, and particularly samples with differing signal-to-noise ratios, we quantile normalized ATAC-seq reads per million sequenced reads (RPM), HiChIP valid cis interaction pair counts from 5 kb bins overlapping CREs, and each of these bins’ total count (for ChIP score computation) to the mean of their respective distributions across samples separately for enhancers and promoters. Since there is a lack of consensus on an appropriate library size normalization method for HiChIP data, due to the violation of the assumption of equal visibility between interacting regions often used in HiC normalization [ 54 , 57 , 58 ], we relied on the combination of quantile normalization and subsequent score normalization steps to control for library size and other technical artifacts.
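The quantile normalization step described above can be sketched as follows. This is a generic implementation normalizing each sample to the mean distribution, not the exact code from the linked pipeline, and the sample names are invented.

```python
import numpy as np
import pandas as pd

def quantile_normalize_to_mean(df: pd.DataFrame) -> pd.DataFrame:
    """Quantile normalize each column (sample) to the mean distribution.

    Each value is replaced by the mean, across samples, of the values that
    share its within-sample rank (ties broken by order of appearance).
    """
    ranks = df.rank(method="first").astype(int) - 1       # 0-based rank per sample
    mean_dist = np.sort(df.values, axis=0).mean(axis=1)   # reference distribution
    out = {col: pd.Series(mean_dist[ranks[col].values], index=df.index)
           for col in df.columns}
    return pd.DataFrame(out)

# Toy ATAC RPM matrix: rows are CREs, columns are pooled population samples.
rpm = pd.DataFrame({"YRI_rep1": [5.0, 1.0, 3.0],
                    "FIN_rep1": [10.0, 2.0, 6.0]})
print(quantile_normalize_to_mean(rpm))  # both columns become [7.5, 1.5, 4.5]
```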
Quantitative HiChIP signals were computed using the quantile normalized HiChIP contact counts according to [ 29 ]. Briefly, for each gene TSS, all contact counts in CREs within 5 Mb were normalized to sum to one, then divided by the maximum of these values to normalize for comparison across genes. Score computation As in [ 29 ], we computed ABC scores using H3K27ac HiChIP, with the fraction of regulatory input to gene $G$ contributed by element $E$ given by:

$$\mathrm{ABC\ score}_{E,G} = \frac{A_E \times \widetilde{H}_{E,G}}{\sum_{e\,\in\,5\,\mathrm{Mb\ of\ }G} A_e \times \widetilde{H}_{e,G}}$$

Here, the activity component ( $A_E$ ) is quantile normalized ATAC-seq RPM, as in the original ABC score formula, but we have replaced the HiC contact component with the quantitative HiChIP signal ( $\widetilde{H}_{E,G}$ ) described above. We computed ATAC scores as follows:

$$\mathrm{ATAC\ score}_{E,G} = \frac{A_E}{\sum_{e\,\in\,5\,\mathrm{Mb\ of\ }G} A_e}$$

We computed ChIP scores as follows, using the geometric mean of the quantile normalized HiChIP bin totals overlapping each element (vanilla coverage square root (VC-sqrt)) to estimate the aggregate H3K27ac signal at both elements:

$$\mathrm{ChIP\ score}_{E,G} = \frac{\sqrt{T_E \times T_G}}{\sum_{e\,\in\,5\,\mathrm{Mb\ of\ }G} \sqrt{T_e \times T_G}}$$

where $T_E$ (or $T_G$) is the total quantile normalized valid cis interaction pair count from the HiChIP bin overlapping element $E$ or the promoter of gene $G$. VC-sqrt normalization is commonly applied to HiC data for comparison of contact frequencies across samples since the assumption of equal visibility is reasonable when considering data generated from proximity-based ligation alone (i.e., without ChIP). When applied to HiChIP, the VC-sqrt measures the difference in visibility between interacting regions relative to one another within a sample that is due to the levels of H3K27ac present at each region. Thus, when normalized by the sum of this signal across all elements within 5 Mb of the target gene, the resulting ChIP score reflects the contribution of H3K27ac levels to an ABC score. We can then use VC-sqrt normalization to estimate the contact frequency between each element and promoter independent of H3K27ac levels and extend this to compute the HiC component of an ABC score as follows:

$$\mathrm{HiC\ score}_{E,G} = \frac{N_{E,G}\,/\,\sqrt{T_E \times T_G}}{\sum_{e\,\in\,5\,\mathrm{Mb\ of\ }G} N_{e,G}\,/\,\sqrt{T_e \times T_G}}$$

where $N_{E,G}$ is the quantile normalized number of valid cis interaction pairs connecting the HiChIP bin overlapping element $E$ and the promoter of gene $G$. E-G pair definition To perform differential ABC score analysis across ancestries, we took predictions from the ABC model for each sample (population and replicate) and processed them according to the following steps. First, we excluded pairs with ABC score < 0.015 in all samples to avoid testing pairs unlikely to be true regulatory connections in any population [ 31 ]. Second, we excluded promoter-gene pairs with ABC scores below a stringent threshold of 0.1 because experimental data have shown the ABC model has poorer performance for this class of interactions, likely due to transcriptional interference, trans effects, and/or promoter competition [ 29 ]. Third, we required each enhancer-gene pair to be supported by non-zero quantile-normalized HiChIP contacts and ATAC values at the CRE in all samples, to avoid testing pairs where low ABC scores could be driven by mapping biases or low sequencing depth. Due to the difference in sequencing depths between samples, this final filtering step reduced the number of enhancer-gene pairs under consideration from 580,474 to our final set of 52,454 after removing CEU from the enhancer-gene pair-calling pipeline. Clustering analysis For each score type and enhancer-gene pair, values were z-score normalized across samples for comparison and visualization of enhancer-gene pairs with large differences in mean score. PCA was performed with “prcomp” and heatmaps were generated using the pheatmap package (v1.0.12) in R (v4.1.0).
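To make the score decomposition in the section above concrete, here is a minimal sketch of the four per-gene scores for a set of candidate elements. It is not the published pipeline (see the GitHub links above); the function and variable names are ours, and the per-gene max-scaling of the quantitative HiChIP signal is omitted for brevity.

```python
import numpy as np

def component_scores(atac, contacts, bin_totals, t_promoter):
    """Sketch of the per-gene ABC score decomposition described above.

    atac       : quantile-normalized ATAC RPM per candidate element (A_e)
    contacts   : quantile-normalized HiChIP pairs linking each element to
                 the gene promoter (N_{e,G})
    bin_totals : total HiChIP counts in each element's 5-kb bin (T_e)
    t_promoter : total HiChIP counts in the gene's promoter bin (T_G)

    Each score is normalized to sum to one over the candidate elements
    within 5 Mb of the gene.
    """
    def norm(x):
        return x / x.sum()

    vc_sqrt = np.sqrt(bin_totals * t_promoter)  # VC-sqrt: aggregate H3K27ac
    return {
        "ABC":  norm(atac * contacts),          # activity x contact
        "ATAC": norm(atac),                     # accessibility alone
        "ChIP": norm(vc_sqrt),                  # H3K27ac levels alone
        "HiC":  norm(contacts / vc_sqrt),       # contact net of H3K27ac
    }

# Toy example: three candidate elements for one gene (values invented).
print(component_scores(np.array([4.0, 1.0, 8.0]),
                       np.array([20.0, 5.0, 2.0]),
                       np.array([100.0, 25.0, 16.0]), t_promoter=64.0))
```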
Differential analysis We called diff-CREs using unpaired, two-sample t-tests on each score type in AFR versus EUR samples. Log2 fold change effect sizes were estimated as the log2-ratio of the mean EUR score over the mean AFR score. We estimated a false discovery rate (FDR) for each score type at t-test P < 0.05 as the ratio of expected over observed enhancer-gene pairs with P < 0.05, where the null P-value distribution was derived from unpaired, two-sample t-tests on one set of replicates from each population versus the other. Replicate number was randomized for each enhancer-gene pair. To estimate the null P-value distribution for tests with eight AFR and six EUR samples after CEU removal while maintaining the eight versus six sample structure of each test, one AFR population was held out at random from replicate shuffling for each enhancer-gene pair and both replicates from this population were used in the group of eight (as opposed to the seven versus seven structure that would result from splitting by replicate across all populations). Since ChIP score signal is derived from HiChIP contact count bins at 5 Kb resolution, we counted diff-ChIP and HiC for CREs from the same HiChIP bin as one in each diff-score enrichment test described below. DE enrichments We used hypergeometric tests (i.e., one-sided Fisher’s exact tests) to determine enrichments for DE target genes among diff-CREs and matching ancestry directionality among DE genes with a diff-CRE. For the former across score types, we took the most differential CRE (top diff-CRE) by the corresponding metric (i.e., ABC, ATAC, ChIP, or HiC score) per gene, defining diff-CREs at nominal t-test P-values < 0.05, non-diff-CREs at t-test P-values ≥ 0.5, and DE genes at LFSRs < 0.05 [ 40 , 59 , 60 ]. Then, counting each CRE only once, we classified diff-CRE hits as any with at least one DE target gene, diff-CRE non-hits as any with no DE target genes, non-diff-CRE hits as any with no DE target genes, and non-diff-CRE non-hits as any with at least one DE target gene. For the promoter test, we took the subset of promoter CREs and additionally required diff-CRE hits and non-diff-CRE non-hits to be promoters for at least one of their DE target genes. For the enhancer test, we allowed promoter CREs to be classified as enhancers if they were not promoters for the relevant gene(s) (e.g., a distal promoter for another gene contacting the promoter of the DE gene under consideration). That is, we required diff-CRE hits and non-diff-CRE non-hits not to be promoters for any of their DE target genes. For the matching direction tests, we took the subset of top diff-CREs with DE target genes where all DE target genes were in the same direction (AFR- or EUR-biased) and, again counting each CRE only once, classified hits as diff-CREs with higher scores in the same ancestry as that with higher expression in their DE target gene(s). For the promoter and enhancer tests, we required diff-CREs to be promoters for at least one of their DE target genes and none of their DE target genes, respectively. For each set of tests, we only report P-values in the main text that pass Bonferroni-corrected thresholds. TF bQTL and H3K4me3 QTL enrichment analysis We used hypergeometric tests to determine enrichments for each QTL type among diff-CREs relative to non-diff-CREs and matching ancestry directionality among diff-CREs with a QTL, using the same definitions for diff- and non-diff-CREs as in our DE enrichment analyses.
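A simplified sketch of the differential testing and empirical FDR estimate described above follows. The permutation shown shuffles all replicate labels together, which is a simplification of the held-out-population scheme in the text, and all data are simulated.

```python
import numpy as np
from scipy import stats

def diff_pvals(afr: np.ndarray, eur: np.ndarray) -> np.ndarray:
    """Unpaired two-sample t-tests per enhancer-gene pair (rows)."""
    return stats.ttest_ind(afr, eur, axis=1).pvalue

def empirical_fdr(p_real: np.ndarray, p_null: np.ndarray, alpha=0.05) -> float:
    """FDR at alpha: expected (null) hits over observed hits."""
    expected = (p_null < alpha).mean() * len(p_real)
    observed = max((p_real < alpha).sum(), 1)
    return expected / observed

rng = np.random.default_rng(0)
afr = rng.normal(0.0, 1.0, size=(1000, 8))   # 8 AFR samples per pair
eur = rng.normal(0.1, 1.0, size=(1000, 6))   # 6 EUR samples per pair
p_real = diff_pvals(afr, eur)

# Null distribution: recompute p-values after permuting sample labels.
combined = np.hstack([afr, eur])
perm = rng.permutation(combined.shape[1])
p_null = diff_pvals(combined[:, perm[:8]], combined[:, perm[8:]])
print(f"FDR at P < 0.05: {empirical_fdr(p_real, p_null):.3f}")
```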
In each test, we considered the QTL with the lowest P-value per CRE for CREs with multiple QTL of the given type. For the directional analyses, we defined bQTL directionality as AFR if the high-affinity allele was present in AFR at a greater frequency than in EUR and vice versa. For CREs with multiple bQTL, we additionally required that they all match the direction for inclusion in each test. For binomial sign tests (see Additional file 1 : Fig. S23a-b), we performed two-sided binomial tests on the number of QTL matching directionality in diff-CREs in the AFR direction out of the total number matching direction in diff-CREs, with a null probability of this proportion across all CREs. GO analysis We used the R package fgsea (v1.20.0) [ 61 ] to perform gene set enrichment analysis on genes ranked by the value of their most differential CRE according to the following t-test statistic [ 62 ]:

$$t = \frac{\bar{x}_{\mathrm{EUR}} - \bar{x}_{\mathrm{AFR}}}{\sqrt{\dfrac{s_{\mathrm{EUR}}^{2}}{n_{\mathrm{EUR}}} + \dfrac{s_{\mathrm{AFR}}^{2}}{n_{\mathrm{AFR}}}}}$$

where $\bar{x}$ is the mean score, $s$ is the standard deviation, and $n$ is the number of samples for each ancestry. fgsea was run on these ranked lists for each score type using the C5 GO biological processes and MSigDB Hallmark gene sets with default arguments except: minSize = 15, maxSize = 500. F ST analysis F ST for all variants was obtained using VCFtools’ calculation of Weir and Cockerham F ST [ 9 ] between individuals from the African (ESN, GWD, LWK, and YRI) and European (CEU, FIN, IBS, and TSI) populations in our ATAC-seq and HiChIP data on a per-site basis. Variants with NA values were removed and negative estimates were set to zero. For diff- versus non-diff CRE F ST Wilcoxon tests independent of their containing bQTL or H3K4me3 QTL, we took the maximum F ST value per CRE. To control for possible allele frequency differences in our diff- versus non-diff CRE bQTL F ST Wilcoxon tests, we took the combined set of diff- and non-diff CRE bQTL in each test, split them by mean allele frequency across AFR and EUR populations into 10 decile bins, and performed separate tests within each of these bins. iHS analysis iHS values for all populations were obtained from Johnson and Voight (2018) [ 63 ] and overlapped with bQTL in our CREs. For Wilcoxon tests analogous to those in our F ST analysis, we used the maximum iHS observed across all populations.
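The allele-frequency-controlled F ST comparison described above can be sketched as follows. The Wilcoxon rank-sum test for two independent samples is equivalent to the Mann-Whitney U test used here; the column names are hypothetical, the one-sided alternative (higher F ST in diff-CREs) is our assumption, and the data are simulated.

```python
import numpy as np
import pandas as pd
from scipy import stats

def fst_wilcoxon_by_af_decile(df: pd.DataFrame) -> pd.Series:
    """Rank-sum tests of FST in diff- vs non-diff-CRE bQTL within
    allele-frequency decile bins, controlling for frequency differences.

    Expected (hypothetical) columns: 'fst', 'mean_af' (mean allele
    frequency across AFR and EUR populations), 'is_diff' (bool).
    """
    df = df.assign(af_bin=pd.qcut(df["mean_af"], 10, labels=False,
                                  duplicates="drop"))
    pvals = {}
    for b, grp in df.groupby("af_bin"):
        diff = grp.loc[grp["is_diff"], "fst"]
        nondiff = grp.loc[~grp["is_diff"], "fst"]
        if len(diff) and len(nondiff):
            # One-sided test of higher FST in diff-CRE bQTL (assumed).
            pvals[b] = stats.mannwhitneyu(diff, nondiff,
                                          alternative="greater").pvalue
    return pd.Series(pvals, name="p_value")

# Simulated data: diff-CRE bQTL drawn with slightly higher FST.
rng = np.random.default_rng(2)
n = 2000
toy = pd.DataFrame({"mean_af": rng.uniform(0.05, 0.95, n),
                    "is_diff": rng.random(n) < 0.2})
toy["fst"] = rng.beta(1, 20, n) + 0.02 * toy["is_diff"]
print(fst_wilcoxon_by_af_decile(toy))
```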
Results Differential CRE activity is linked to differential expression between ancestries We previously performed ATAC-seq in lymphoblastoid cell lines (LCLs) from ten different global populations sequenced by the 1000 Genomes Project [ 19 ]. This was carried out in a pooled study design, with each population represented by a single pool of ~100 unrelated individuals. We selected the four African (ESN, GWD, LWK, and YRI) and four European (CEU, FIN, IBS, and TSI) ancestry (hereinafter AFR and EUR, respectively) populations for comparison to isolate the effects of any lineage-specific selection on gene regulatory elements that have occurred since the divergence of human populations native to these two continents. The AFR and EUR ancestries were represented by 418 and 413 individuals, respectively. We first identified a common set of CREs by (1) calling peaks on ATAC-seq data combined across the four population pools of each ancestry, then (2) resizing them to 500 bp centered on each peak summit to avoid any potential peak width bias, and (3) retaining the top 150,000 by read count ranking. We ensured equal peak contributions between ancestries (see Methods ) to balance statistical power and for consistency with how the ABC model was developed [ 29 ]. To obtain the additional activity component and the contact component necessary for computing ABC scores, we performed H3K27ac HiChIP, which enriches first on the level of H3K27ac and second on HiC contact frequency of the two interacting regions [ 25 ], in two replicates per population of the same pooled LCLs (see the “ Methods ” section). H3K27ac HiChIP was shown to perform at least as well as H3K27ac ChIP-seq and HiC assayed separately when used in ABC scores adapted for this data type [ 29 ]. We mapped reads from each replicate to a common reference. To minimize allelic mapping bias, we retained only reads overlapping variants that mapped to the same unique location after swapping out one allele for the other [ 38 ]. Subsequent filtering to reads in valid cis interaction pairs yielded ~540 million paired-end reads qualified for use in ABC score computation (see Additional file 1 : Fig. S1; Additional file 2 ). To calculate ABC scores for each population, we jointly estimated activity level and contact frequency as the product of normalized ATAC-seq reads overlapping a given element and normalized HiChIP reads overlapping that element and the promoter of a given gene at 5 Kb resolution (see the “ Methods ” section). We identified 50,478 CRE-target gene (enhancer-gene) pairs with nonzero ATAC and HiChIP signal in all samples that passed enhancer-gene pair candidacy thresholds (see the “ Methods ” section) in at least one sample. Since ABC scores are designed to identify enhancer-gene pairs, but not the relative expression levels of target genes, we reasoned that decomposing each score into three independent components—ATAC, H3K27ac ChIP, and HiC scores (Fig. 1 a–b)—could allow us to search for evidence of selection on each as a distinct mechanism of differential gene expression regulation. Thus, for each enhancer-gene pair defined using ATAC-seq data in combination with our newly generated HiChIP data, ATAC scores represent the chromatin accessibility at the enhancer (Fig. 1 b, teal gradients). ChIP scores estimate the enhancer-gene pair’s collective H3K27ac signal as the geometric mean of total HiChIP signal at the enhancer and gene promoter (also known as the vanilla coverage square root (VC-sqrt)) (Fig. 1 b purple gradients). 
HiC scores estimate the contact frequency of the enhancer and promoter independent of H3K27ac levels by dividing the HiChIP signal from read pairs specifically connecting the enhancer and promoter (Fig. 1 b, magenta gradient) by the VC-sqrt (see the “ Methods ” section). To assess how each of these scores captures differences between populations and replicates, we performed principal component analysis (PCA) and hierarchical clustering across samples on all enhancer-gene pairs for each score type. Since both CEU replicates were outliers (see Additional file 1 : Supplemental text, Fig. S5–8; Additional file 3 ) [ 39 ], we removed this population, redefined enhancer-gene pairs, and computed scores for downstream analyses using the remaining 14 samples. Although FIN rep1 and ESN rep1 are also outliers for HiC scores, and thus also for ABC scores (Additional file 1 : Fig. S9–10), this is likely driven by low coverage HiC contacts since these are the two samples with the lowest number of valid HiChIP cis interaction pairs (Additional file 1 : Fig. S1). For ChIP scores, which quantify the total H3K27ac signal at a CRE (not only that contributed by reads explicitly defining an E-G pair, as in HiC and ABC scores where the aforementioned low coverage effects manifest), these replicates are not outliers, so it is unlikely that coverage or batch effects contribute to any signal differences in this chromatin activity metric. We then reanalyzed the differences between populations and replicates captured by our score types and quantified their ancestry-associated differential signals. PCA and hierarchical clustering on these scores show that ChIP scores are highly similar between replicates when considering either all enhancer-gene pairs (Fig. 1 c) or the 5000 most variable pairs. Clustering by ancestry is apparent when considering the 5000 most variable enhancer CREs, but not promoter CREs (see Additional file 1 : Fig. S9; Additional files 4 and 5 ). To assess the ancestry-associated differential regulatory activity of each ABC score component, we identified differential score (diff-score) enhancer-gene pairs (diff-score P < 0.05, see Methods ; Additional file 1 : Fig. S4). We found little or no differential signal between ancestries in the diff-HiC scores (FDR = 0.87 at diff P < 0.05, relative to FDR = 0.093 and 0.057 for diff-ATAC and diff-ChIP, respectively; see Methods ; Additional file 1 : Fig. S11; Additional file 6 ), or in downstream functional analyses. To determine the extent to which diff-scores are associated with differential gene expression (DE) between African and European ancestry individuals, we analyzed gene expression data from two previous studies. Lea et al. (2022) measured gene expression across 12 cellular conditions (11 exposures and one unexposed control) in many of the same LCLs from African and European populations used in our study. Randolph et al. [ 40 ] measured gene expression in non-infected (NI) and IAV-infected (flu) peripheral blood mononuclear cells (PBMCs) at single-cell resolution from a panel of donors with varying degrees of African versus European ancestry. Both studies identified ancestry-associated DE (ancestry DE) genes, Lea et al. by modeling expression as a function of the African or European ancestry of each population, and Randolph et al. by modeling expression as a function of the proportion of African ancestry estimated from whole-genome sequencing.
Although the context we assayed our LCLs in to generate ABC scores was closest to Lea et al.’s baseline/unexposed condition, by comparing diff-scores in a baseline (unstimulated) context to DE in other contexts we were able to ask if CREs could be poised for DE regulation upon stimulation and/or in another cell type. We then asked if chromatin accessibility, H3K27ac levels, HiC contact frequency, and/or the combination of these components in ABC scores were associated with ancestry DE across these 22 combinations of cell type and stimulation conditions. We found six enrichments for ancestry DE in diff-ATAC and five in diff-ChIP genes among the 22 tested contexts (hypergeometric P < 1.13 × 10 −3 , Fig. 2 b). For example, target genes of diff-ChIP CREs were overrepresented among ancestry DE genes in LCLs after four hours of exposure to B-cell-activating factor (BAFF, odds ratio (OR) = 1.94, P = 6.1 × 10 −8 ), a strong B cell activator and tumor necrosis factor family cytokine. As expected based on the lack of signal in our initial FDR analysis, no contexts were enriched for ancestry DE in diff-HiC genes, and only DE genes in LCLs after four hours of exposure to ethanol (labeled “ETOH”) were enriched in diff-ABC genes at the same Bonferroni-corrected P-value threshold used for diff-ATAC and ChIP (see Additional file 1 : Fig. S12, S13b). This indicates that the inclusion of the contact frequency component in ABC scores weakens the association of the activity components with DE. Overall, the strength of the associations of differential chromatin accessibility (diff-ATAC) and H3K27ac levels (diff-ChIP) with DE across several contexts suggests CREs could be poised for DE regulation upon stimulation or differentiation to another cell type. Importantly, although we observe little-to-no differential HiC signal between ancestries, this component was critical in defining enhancer-gene pairs to test, as each pair must have at least one HiChIP read connecting the two elements to have a non-zero ABC score. Although gene expression can be predicted by promoter activity [ 41 ], the contribution of promoter or enhancer activity to ancestry-associated DE remains unknown. Thus, we asked if the associations between differential activity scores and differential expression were driven by genes whose top diff-CRE is a DE promoter. We found some evidence of this among diff-ATAC promoters across the non-infected and flu PBMC cell types [ 40 ] (Additional file 1 : Fig. S14b), but no enrichments passed correction for multiple tests. We observed similar strengths of enrichment across the remaining contexts and score types for top diff-CRE enhancers and promoters (Additional file 1 : Fig. S14a–b); however, given only eleven significant enrichments when testing all CREs together we were likely underpowered to address this question. Since the activity levels of enhancers and promoters usually increase and decrease with the expression levels of their target genes, we hypothesized that true enhancer-gene pairs would have higher expression in the same ancestry as that of the populations with higher ATAC and ChIP scores (Fig. 2 c) and that this matching directionality would hold for pairs that are poised for DE in other contexts. 
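Both the overlap tests above and the directionality test that follows are standard hypergeometric enrichment tests. As a rough illustration of the setup, with toy gene sets rather than the study's data, the odds ratio and one-sided P-value can be computed as follows.

from scipy.stats import fisher_exact, hypergeom

# Toy gene sets standing in for one cell type/stimulation context.
universe = {f"gene{i}" for i in range(1000)}      # all testable genes
diff_genes = {f"gene{i}" for i in range(120)}     # targets of diff-CREs
de_genes = {f"gene{i}" for i in range(60, 260)}   # ancestry DE genes

a = len(diff_genes & de_genes)                    # diff and DE
b = len(diff_genes - de_genes)                    # diff only
c = len(de_genes - diff_genes)                    # DE only
d = len(universe - diff_genes - de_genes)         # neither

odds_ratio, _ = fisher_exact([[a, b], [c, d]], alternative="greater")
# P(overlap >= a) when drawing len(diff_genes) genes from the universe,
# of which len(de_genes) are "successes".
p_value = hypergeom.sf(a - 1, len(universe), len(de_genes), len(diff_genes))
print(odds_ratio, p_value)

# The direction-matching test applies the same machinery, counting a gene as
# a "success" only if the ancestry direction of its top diff-CRE matches the
# sign of its differential expression.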
To test this, we asked if among differential genes (diff-score FDRs = 0.093 and 0.057 for ATAC and ChIP, respectively, with differential expression local false sign rate (LFSR) < 0.05) the ancestry direction of the top diff-CRE matched the DE direction of its target gene more often than expected by chance (see Methods ). For example, is a gene with higher AFR ancestry expression also likely to have higher ATAC scores in AFR populations? We found that differential gene directionality matched more often than expected by chance in the same contexts in which differential activity and DE genes overlapped more often than expected by chance, as well as in three additional contexts for diff-ATAC (hypergeometric OR = 1.71–2.54, P < 3.3 × 10 −4 ) and five for diff-ChIP (OR = 2.90–3.70, P < 4.8 × 10 −4 ). Five PBMC contexts [ 40 ] were nominally enriched for diff-ChIP matching DE (OR = 1.63–2.12, P < 0.05), though not significantly after multiple test correction. This was in contrast to genes identified by diff-ATAC CREs, which were only nominally enriched in four PBMC contexts [ 40 ] at lower odds ratios (OR = 1.64–1.82, P < 0.038, Fig. 2 d; see also Additional file 1 : Fig. S13d). We found much weaker enrichment for matching DE directionality again among diff-ABC and HiC genes (see Additional file 1 : Fig. S13d, S15). To better ascertain the relative capacities of diff-ATAC and diff-ChIP (H3K27ac) to identify DE genes and their directionality, we compared the odds ratios across all contexts for all CREs and partitioned by promoter and enhancer top diff-CRE status. We found diff-ATAC to enrich better for DE (Wilcoxon P = 0.0022), whereas diff-ChIP enriched substantially better for matching DE direction (Wilcoxon P = 3.5 × 10 −6 ) (see Additional file 1 : Fig. S16, left). This difference held when considering enrichments derived from genes whose most differential CRE was a promoter but not an enhancer (see Additional file 1 : Fig. S16, right and middle). While the diff-ChIP score incorporates H3K27ac levels from each target gene’s promoter for enhancer CREs (Fig. 1 b, see the “ Methods ” section), diff-ATAC enhancers, which do not explicitly incorporate promoter accessibility information, performed similarly to diff-ChIP enhancers at identifying DE direction (see Additional file 1 : Fig. S14c–d, S16). These results indicate that of the two types of chromatin activity assayed in our study, accessibility is the better indicator of which genes are DE, while H3K27ac levels better identify which ancestry has higher expression of these genes across numerous cellular contexts. Differential CRE activity is associated with ancestry-divergent variants that affect binding of specific TFs Although diff-ATAC and diff-ChIP CREs are associated with DE of their target genes, the mechanism behind this association is unclear. To investigate this we sought to link potentially causal genetic variants to the activity of our CREs by intersecting them with QTL for the binding affinity of five transcription factors (bQTL, Fig. 3 a) and H3K4me3 levels (H3K4me3 QTL) previously mapped in the same YRI LCLs used in our study [ 18 ]. If differential CRE activity were driven in cis by any of these QTL types, as opposed to in trans by a difference in transcription factor expression level, we would expect strong associations between those QTL and differential activity CREs. We followed the same approach as in our DE analysis, first testing if our diff-CREs were enriched for any of these QTL relative to non-diff-CREs (Fig. 
3 a, see the “ Methods ” section). Any diff-CRE was counted as a “success” overlap in hypergeometric enrichment tests if it contained a bQTL for the TF being tested (or H3K4me3 QTL). We found several significant bQTL enrichments across diff-ATAC and diff-ChIP CREs (Fig. 3 b). We further asked if these enrichments were driven by bQTL in enhancers or promoters by performing separate tests on these two CRE types. Interestingly, enrichments for bQTL became even stronger when considering diff-ATAC and diff-ChIP enhancers, while diff-promoters showed no enrichments for any TF across all score types (Fig. 3 c). This was despite greater coverage at promoters than at enhancers in both ATAC and HiChIP data (Additional file 1 : Fig. S2a–b). These results suggest that many ancestry differences in CRE activity could be associated with differences in binding of specific TFs in cis . To investigate the extent to which higher TF binding affinity corresponds to an increase in CRE activity, we asked if the high-affinity bQTL allele was at higher frequency in the ancestry with higher CRE activity (Fig. 3 d, “matching direction diff-CRE bQTL”). We also included bQTL for CTCF [ 42 ], a protein that mediates chromosomal looping and chromatin organization, in these tests. The same TFs (JunD, NF-κB, and PU.1) were enriched for bQTL matching diff-CRE directionality as were enriched in diff-CREs overall, with the addition of STAT1 for matching diff-ATAC direction. Interestingly, PU.1 bQTL were enriched for matching diff-ATAC direction (Fig. 3 e, left), but not diff-ChIP direction (Fig. 3 e, right). This was in contrast with this TF’s overall bQTL enrichment in diff-ChIP CREs over non-diff (Fig. 3 b–c), suggesting that this TF’s activity could be linked to context-dependent increases and decreases in H3K27ac levels, but is associated with increased chromatin accessibility in both cases. If increased TF binding at bQTL is associated with an increase in CRE activity in cis , we should see greater correspondence between the ancestry with a higher frequency of the high-affinity bQTL allele and the ancestry with higher CRE activity the more extreme the difference in allele frequencies between ancestries. To test this, we asked if enrichments for matching directionality between bQTL and diff-CREs increase when considering only bQTL in the top 5% of F ST among variants in CREs (corresponding to F ST > 0.1813). Indeed, for all TFs with direction matching-enriched bQTL under no F ST thresholding we observed an average 2.36-fold increase in odds ratios when applying this F ST threshold (Fig. 3 e). These enrichments were again driven by enhancers, as evidenced by the average 3.41-fold increase in odds ratios for the same comparison restricted to this CRE type (see Additional file 1 : Fig. S17, top) and lack of directionality matching enrichment for any TF’s bQTL in promoters (see Additional file 1 : Fig. S17, bottom). As expected, none of the above bQTL enrichment tests were significant for nominally diff-HiC CREs (Additional file 1 : Fig. S18). Having established ancestry-dependent cis differences in TF binding as a possible mechanism for ancestry-associated differential CRE activity, specifically in enhancers, we sought to assess the likelihood that these TF bQTL have been under directional selection.
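For readers wanting to reproduce the flavor of the F ST thresholding used above, the sketch below computes per-variant F ST with the Hudson estimator (one common choice; the exact estimator used in the study is described in the Methods) and compares direction-matching rates for all bQTL versus the top 5% of F ST. All inputs are randomly generated placeholders rather than the study's data.

import numpy as np

def hudson_fst(p1, p2, n1, n2):
    # Hudson-style per-site FST from allele frequencies p1, p2 and haploid
    # sample sizes n1, n2 (Bhatia et al. 2013 formulation).
    num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num / den

rng = np.random.default_rng(0)
p_afr, p_eur = rng.uniform(0.05, 0.95, size=(2, 500))   # toy frequencies of
                                                        # the high-affinity allele
fst = hudson_fst(p_afr, p_eur, n1=2 * 418, n2=2 * 413)  # chromosomes sampled
top5 = fst >= np.quantile(fst, 0.95)                    # highly divergent bQTL

# Toy activity directions: True if the CRE containing the bQTL has higher
# activity in the AFR populations.
higher_activity_afr = rng.random(500) < 0.5
matching = (p_afr > p_eur) == higher_activity_afr
print(matching.mean(), matching[top5].mean())           # all vs top-5% FST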
We found that JunD, NF-κB, PU.1, and Oct1 bQTL have higher F ST in diff-ATAC than in non-diff-ATAC enhancers (Wilcoxon P = 2.4 × 10 −9 , 5.0 × 10 −4 , 2.2 × 10 −4 , and 6.0 × 10 −4 , respectively), consistent with differential binding of these TFs as drivers of differential enhancer activity, as well as the possibility that their binding specifically in differential activity CREs has been subject to selection. While none reached significance in diff-ChIP enhancers after multiple test correction, all of the QTL types except CTCF bQTL were nominally significant (Fig. 4 a, top). This is likely due in part to power reduction in the diff-ChIP test resulting from the combination of the lower resolution of HiChIP cis interaction pairs relative to ATAC-seq peaks (see the “ Methods ” section) and only counting the most significant bQTL per diff-score CRE (see Methods ). Notably, there were no significant differences between bQTL in diff- versus non-diff promoters after multiple test correction (Fig. 4 a, bottom; see Additional file 1 : Supplemental text, Fig. S19). To assess evidence for selection on bQTL in diff-enhancers over those in diff-promoters more directly, we performed the same test within diff-CREs between enhancers and promoters. Nearly all QTL types had higher median F ST in diff-enhancers than in diff-promoters for ATAC and ChIP, although none were significant after multiple test correction (Fig. 4 b). Again as expected, there was no difference in F ST between nominally diff- and non-diff-CREs or diff-CRE enhancers and promoters defined by HiC scores (Additional file 1 : Fig. S20). Since F ST can be correlated with allele frequency (i.e., rare alleles introduced by recent mutation have low F ST ), we sought to assess whether higher F ST for diff-enhancer bQTL was driven by differences in allele frequencies between CRE types. Performing the same tests in each of ten allele frequency decile bins, we find more enhancer bins than promoter bins with mean F ST greater in diff- versus non-diff CREs (see Additional file 1 : Fig. S21–22). Additionally, although binning reduces the power of each test, more of these bins have nominally significant differences in F ST between diff- and non-diff enhancers. These results suggest that greater allele frequency divergence in differential activity enhancers is not dependent on allele frequency differences between the tested CRE types. Overall, these higher F ST values for select bQTL in diff-enhancers are consistent with selection on TF binding sites in our diff-ATAC and diff-ChIP CREs. Differential CRE activity could be a result of directional selection and/or genetic drift While these results could reflect directional selection, the underlying divergence in allele frequencies and corresponding ancestry-associated differential CRE activity could still be explained by genetic drift. More convincing evidence of directional selection could result from applying the sign test framework [ 43 , 44 ] to ask if the high-affinity alleles for bQTL that match diff-CRE direction are at higher frequency in one ancestry over the other more often than expected by chance. The sign test leverages the expectation that under neutrality, where genetic drift is the dominant force operating on allele frequency in populations of both ancestries, the high-affinity alleles matching diff-CRE direction will not be biased toward higher frequencies in either population (Additional file 1 : Fig. S23a).
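A minimal sketch of this sign test logic follows, with all counts hypothetical: under neutrality, the number of matching-direction bQTL whose high-affinity allele is at higher frequency in AFR should follow a binomial distribution around the background rate observed for all of that TF's bQTL.

from scipy.stats import binomtest

n_matching = 180       # hypothetical matching-direction bQTL for one TF
n_higher_afr = 96      # of those, high-affinity allele at higher AFR frequency

# Background rate estimated from all of this TF's bQTL in CREs, rather than
# 0.5, to absorb genome-wide frequency asymmetries between the ancestries.
background_rate = 0.52  # hypothetical

result = binomtest(n_higher_afr, n_matching, background_rate,
                   alternative="two-sided")
print(result.pvalue)    # a large P is consistent with drift alone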
Among bQTL matching diff-CRE direction, we found no more population-specific allele frequency bias than expected relative to the background of each TF’s bQTL in all CREs (Additional file 1 : Fig. S23b). Thus, genetic drift could be responsible for the association with increased ancestry divergence in diff-CREs matching bQTL directionality. Moving from genotype toward phenotype (Fig. 1 a, left), we next sought to identify the functional pathways most closely associated with our diff-score enhancer-gene pairs and their ancestry directionality. If variants in any subset of diff-CREs linked to target genes associated with a particular pathway have been subject to lineage-specific selection, these may not have been detected in our previous analyses. To address this possibility, we used gene set enrichment analysis with the gene ontology (GO) biological processes and MSigDB Hallmark gene sets on genes ranked by the difference in means between ancestries in ABC component scores of their top diff-CREs scaled by a measure of score variance (i.e., ranked from high EUR activity to high AFR activity, see the “ Methods ” section). Again, under neutrality, we would not expect diff-CREs with target genes in a particular pathway to have higher activity in one ancestry over the other. We did not find any significant enrichments among these gene sets after multiple test correction; however, some immune-related gene sets including interferon gamma (IFNG) response and TNF-α signaling via NF-κB were among the top nominal enrichments for genes with top diff-ChIP and/or diff-ATAC CREs in the AFR high activity direction (Additional file 1 : Fig. S24). These nominal enrichments are consistent with diff-ATAC and diff-ChIP target gene enrichments for DE genes and matching DE directionality in IFNG-exposed LCLs (Fig. 2 b,d, left), and Randolph et al.’s [ 40 ] finding of TNF-α signaling via NF-κB enrichment among genes with higher AFR expression in monocytes both before and after flu infection. Thus, although we do not find strong evidence for lineage-specific selection on diff-CREs in aggregate, the possibility that selection has more subtly affected gene regulatory architecture remains.
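The ranking metric and a simplified, permutation-based stand-in for the gene set enrichment test are sketched below. The study used standard GSEA on GO biological process and MSigDB Hallmark sets; this toy version only illustrates the ranking construction and a competitive null, and all inputs are simulated.

import numpy as np

def ranking_metric(scores_afr, scores_eur):
    # Difference in ancestry means of each gene's top diff-CRE component
    # score, scaled by a simple spread measure (signal-to-noise style):
    # positive = higher AFR activity, negative = higher EUR activity.
    diff = scores_afr.mean(axis=1) - scores_eur.mean(axis=1)
    spread = scores_afr.std(axis=1) + scores_eur.std(axis=1) + 1e-6
    return diff / spread

def set_enrichment_p(metric, in_set, n_perm=10000, seed=1):
    # Competitive permutation test: is the mean metric of genes in the set
    # more extreme than that of random gene sets of the same size?
    rng = np.random.default_rng(seed)
    observed = metric[in_set].mean()
    null = np.array([metric[rng.permutation(in_set)].mean()
                     for _ in range(n_perm)])
    return float((np.abs(null) >= abs(observed)).mean())

rng = np.random.default_rng(0)
metric = ranking_metric(rng.normal(size=(2000, 8)), rng.normal(size=(2000, 8)))
in_toy_set = rng.random(2000) < 0.02   # toy ~40-gene "pathway" membership mask
print(set_enrichment_p(metric, in_toy_set))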
Discussion We have presented results from the first genome-wide comparison of chromatin activity and contact frequency between human populations with the goal of identifying CREs under recent selection. Since recent evidence points toward gene expression changes as the dominant force shaping recent human adaptation relative to protein sequence changes [ 16 , 45 ], this approach has the potential advantage of directly identifying CREs responsible for adaptive gene expression differences. Using ABC scores to link CREs to target genes and decomposing these scores into their components allowed us to identify genes whose ancestry-associated expression differences across multiple contexts could be identified by the differential activity of their enhancers in the context of LCLs at baseline. This was particularly true for identifying the ancestry-associated direction of DE. Although H3K27ac alone is not required to maintain CRE activity [ 46 ], it seems to be a more reliable indicator of expression direction than chromatin accessibility as measured by ATAC-seq in the context of our study. For example, one of many models capable of explaining this difference would be the binding of a transcriptional repressor to a promoter that yields an increase in chromatin accessibility but not in H3K27ac levels. About 25% of ABC-predicted and validated enhancer-gene pairs were found to have repressive effects via CRISPRi-flowFISH [ 29 ] and any such effects within the matching DE directionality enrichments from our study could have contributed to the 39% of differential activity pairs that “opposed” DE direction. More generally, the strength of these cross-context enrichments for DE and its direction is consistent with the maintenance of ancestry-associated regulatory differences in contexts beyond those where the target genes are DE. Matching differential CRE activity in LCLs at baseline and DE in many other contexts suggests CRE poising for DE regulation upon stimulation or differentiation to another cell type, or footprints of regulatory activity from a previous cell state remaining after the transition from that state. Although our bQTL enrichment results suggest that differential activity is a result of cis-regulatory activity, it is possible that transcription factor differential expression in trans partially accounts for this. Indeed, JunD and NFKB2 (NF-κB subunit 2 of 2) show AFR-biased expression in LCLs at baseline (ancestry effect β = − 0.26, LFSR = 0.0026 and ancestry effect β = − 0.18, LFSR = 0.073, respectively); however, given the high odds ratios for bQTL in the top 5% of F ST (Fig. 3 e, Additional file 1 : Fig. S17), differential CRE activity would likely persist even under constant trans conditions. Moreover, the lack of enrichments among diff-ATAC and diff-ChIP promoters for bQTL over non-diff (Fig. 3 c), matching bQTL directionality irrespective of F ST (Fig. S17, bottom), and high bQTL F ST over non-diff (Fig. 4 a), all relative to the positive enrichments found for tests on enhancers (including Fig. 4 b) are consistent with greater evolutionary constraint on promoters and the distinct roles of enhancers in cell types that may be subject to different selection pressures [ 47 ]. Notably, while diff-ChIP enhancers and promoters both identified DE direction (Additional file 1 : Fig. S14c), these results suggest that if JunD and/or NF-κB are responsible for any of these expression differences, it is due to differences in their binding at enhancers, rather than at promoters.
Moreover, we find similar proportions of diff-ATAC and diff-ChIP enhancers versus promoters (20% versus 18%, and 10% versus 9%, respectively), indicating similar levels of differential signal present in each across both methods. This genotype-level evidence restricted to differential enhancers indicates that our method of using chromatin as a spotlight on genetic variation effectively reveals otherwise hidden patterns consistent with selection (Fig. 1 a, left). While our tests for greater transcription factor binding in one ancestry over the other did not show evidence of lineage-specific selection, the most enriched pathways among genes linked to higher activity CREs in AFR suggest more subtle effects of directional selection. For example, if the IFNG response pathway was under selection in one ancestry and this selection acted on a fraction of differential activity CREs regulated by transcription factor complexes more tissue- and/or response-specific than JunD or NF-κB, this could remain undetected when aggregating over many more CREs. Importantly, any ancestry-associated differences that may exist in the regulation of these pathways as a result of selection or drift do not imply differences in underlying cellular and physiological mechanisms. Independent of these considerations, our study is limited by any changes to genome architecture introduced by Epstein-Barr virus in transforming B cells into LCLs, which further mask the effects of any selection that has acted on B cells or even more relevant cell types, and by the noise introduced by combined analysis of multiple datasets generated by different people and/or labs. Future studies generating ABC score component data from diverse donors in cellular contexts more like those in which lineage-specific selection could have acted may find stronger evidence of it, especially if bQTL are mapped for more context-specific transcription factors. The demographic processes that shape human genetic variation (e.g., population history, migration, and drift) can obscure the influence of selection on variants that underlie adaptive phenotypes [ 48 ]. Moreover, false signals of selection can result from under-controlled population stratification [ 49 , 50 ]. These confounders along with the prevalence of adaptive variants in non-coding regions with subtle effects [ 16 ] demonstrate the need for complementary methods to identify CREs that have been subjects of selection. We anticipate that extending the application of the method presented here to more populations and cell types will elucidate the molecular underpinnings of recent human evolution with implications for understanding modern disease prevalence.
Conclusions In generating the first population-level maps of candidate enhancer-target gene pairs in humans, we suggest cis -regulatory elements are poised for ancestry-dependent differential expression regulation upon stimulation or differentiation to another cell type. Mechanistically, this poising could be maintained by variants affecting the binding of transcription factors NF-κB, JunD, and PU.1 that show signs of lineage-specific selection in enhancers but not promoters. The potential effects of directional selection on immune-related pathways identified here suggest the promise of applying our chromatin-level selection test in additional cell types with roles in these pathways.
Background Current evidence suggests that cis -regulatory elements controlling gene expression may be the predominant target of natural selection in humans and other species. Detecting selection acting on these elements is critical to understanding evolution but remains challenging because we do not know which mutations will affect gene regulation. Results To address this, we devise an approach to search for lineage-specific selection on three critical steps in transcriptional regulation: chromatin activity, transcription factor binding, and chromosomal looping. Applying this approach to lymphoblastoid cells from 831 individuals of either European or African descent, we find strong signals of differential chromatin activity linked to gene expression differences between ancestries in numerous contexts, but no evidence of functional differences in chromosomal looping. Moreover, we show that enhancers rather than promoters display the strongest signs of selection associated with sites of differential transcription factor binding. Conclusions Overall, our study indicates that some cis -regulatory adaptation may be more easily detected at the level of chromatin than DNA sequence. This work provides a vast resource of genomic interaction data from diverse human populations and establishes a novel selection test that will benefit future study of regulatory evolution in humans and other species. Supplementary Information The online version contains supplementary material available at 10.1186/s13059-024-03165-2. Keywords
Supplementary Information
Acknowledgements We thank members of the Fraser lab for helpful conversations, advice, and feedback on the manuscript; and Joseph Nasser, Kristy Mualim, and Jesse Engreitz for help with Activity-by-Contact scores. Review history The review history is available as Additional file 7 . Peer review information Tim Sands was the primary editor of this article and managed its editorial process and peer review in collaboration with the rest of the editorial team. Authors’ contributions HBF conceived the study. KPP and HBF conceived analysis methods. HiChIP experiments were performed by MM, MK, and KPP and funded by HYC. KPP performed all analyses and designed all graphics. AJL and JA provided unpublished data. KPP wrote the manuscript with input from all authors. HBF supervised all aspects of the work. All authors read and approved the final manuscript. Funding This work was funded by NIH grant R01GM134228. KPP was supported by NIH training grant T32GM007276 and the NSF Graduate Research Fellowship Program. HiChIP data generation was supported by NIH grant RM1-HG007735 to HYC. Availability of data and materials Genotype data for all individuals from populations used in our ATAC-seq and F ST analyses is from 1000 Genomes Project Phase 3 release [ 19 ] ( ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20130502/ ). All HiChIP reads are available as fastq files at NCBI SRA, project ID PRJNA898623 [ 64 ]. Ancestry associated differential expression data from RNA-seq in LCLs after four-hour exposure to twelve cellular environments is from Supplemental Table S8 of Lea et al. [ 59 ] with additional files and analysis code at https://github.com/AmandaJLea/LCLs_gene_exp . Ancestry associated differential expression data from single cell RNA-seq in PBMCs is from Randolph et al. [ 40 ], available at NCBI GEO, Accession no. GSE162632 [ 65 ]. bQTL and H3K4me3 QTL are from supplemental Table S 1 of Tehranchi et al. [ 18 ]. CTCF QTL are from Ding et al [ 42 ]. All pipelines and code used for analyses in this paper are available on Zenodo at https://zenodo.org/records/10396417 and on github at https://github.com/kadepettie/popABC/tree/master [ 66 ]. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-16 23:45:34
Genome Biol. 2024 Jan 15; 25:21
oa_package/61/70/PMC10789071.tar.gz
PMC10789072
38225660
Following publication of the original article [ 1 ], we have been notified that the authors' first and last names have been swapped. Originally published names: Mikola Katriina, Rebane Katariina, Kautiainen Hannu and Aalto Kristiina. Correct name order: Katriina Mikola, Katariina Rebane, Hannu Kautiainen and Kristiina Aalto. The original article has been corrected.
CC BY
no
2024-01-16 23:45:34
Pediatr Rheumatol Online J. 2024 Jan 15; 22:14
oa_package/6d/20/PMC10789072.tar.gz
PMC10789073
38225584
In recent years, extracellular vesicles (EVs) have attracted significant attention as carriers in intercellular communication. The vast array of information contained within EVs is critical for various cellular activities, such as proliferation and differentiation of multiple cell types. Moreover, EVs are being employed in disease diagnostics, implicated in disease etiology, and have shown promise in tissue repair. Recently, a phenomenon has been discovered in which cellular phenotypes, including the progression of differentiation, are synchronized among cells via EVs. This synchronization could be prevalent in many different situations in embryogenesis and in tissue organization and maintenance. Given the increasing research on multi-cellular tissues and organoids, the role of EV-mediated intercellular communication has become increasingly crucial. This review begins with fundamental knowledge of EVs and then discusses recent findings, various modes of information transfer via EVs, and synchronization of cellular phenotypes. Keywords
Subgroups and markers of EVs Extracellular vesicles (EVs) are membranous structures released by cells and categorized into several subgroups with their distinct formation mechanisms; exosomes are formed from the budding of endosomes, microvesicles are directly budded from the cell membrane [ 1 ], and apoptotic bodies are produced by the breakdown of apoptotic cells [ 2 , 3 ] (Fig. 1 ). Zhang and colleagues further defined three distinct subpopulations of exosomes: small exosomes (Exo-S, 60–80 nm), large exosomes (Exo-L, 90–120 nm), and exomeres (< 50 nm). Unlike other vesicles, exomeres are not surrounded by a lipid bilayer and are not enriched with ESCRT-related molecules, making their generation mechanism elusive [ 4 ]. It is important to note that in this study, the Exo-L fraction contains a significant amount of Annexin A1, which has been reported as a characteristic marker of microvesicles [ 5 ]. This suggests the possibility that microvesicles may also be present in the Exo-L fraction. Therefore, the risk of relying solely on size for exosome fractionation should be considered. Exosomes are generated through both ESCRT (endosomal sorting complexes required for transport)-dependent and ESCRT-independent mechanisms (Fig. 2 ). In the ESCRT-dependent mechanisms, the ESCRT complexes catalyze the formation of multivesicular bodies (MVBs) by invagination of the endosomal limiting membrane [ 6 ]. Some ESCRT components are suggested to selectively act on subpopulations of MVBs or intraluminal vesicles (ILVs) destined to be secreted as exosomes [ 7 ]. ESCRT-independent exosome generation requires the production of ceramides by the neutral sphingomyelinase 2 (nSMase2), which hydrolyzes sphingomyelin into ceramides. These ceramides then trigger the budding of exosome vesicles into MVBs [ 8 ]. Furthermore, the metabolic product of ceramide, sphingosine-1-phosphate (S1P), is implicated in the cargo sorting and the maturation of MVBs [ 9 ]. Tetraspanins, Rab proteins, and flotillin-1 are shared between both ESCRT-dependent and independent pathways [ 10 ]. On the other hand, within the Rab family, Rab31 is involved in cargo sorting, supporting ESCRT-independent exosome biogenesis. Additionally, Rab31 promotes exosome secretion by inhibiting the fusion of MVBs with lysosomes through Rab7 inhibition [ 11 ]. Once released from cells, distinguishing among the various subgroups of EVs, such as exosomes, microvesicles, and apoptotic bodies, becomes difficult. The International Society for Extracellular Vesicles (ISEV) has proposed categorizing them by size into small EVs (typically less than 100 nm or 200 nm), and medium/large EVs (greater than 200 nm), according to the Minimal Information for Studies of Extracellular Vesicles (MISEV2018) guidelines [ 17 ]. In this review, we generally use the term ‘EV’ unless specifically referring to a particular subgroup, especially exosomes. EV membranes are composed of various lipids, and various proteins and glycans are expressed on EV membranes. Many lipids and proteins are glycosylated. These glycosylation modifications are altered by cancer (Fig. 3 a) [ 18 , 19 ]. The expression profiles of such components vary depending on the cell types even in commonly used EV biomarkers like CD9, CD63, and CD81 (Fig. 3 b) [ 20 ]. Furthermore, heterogeneity has been suggested even within EVs derived from the same cell type or source, namely the presence of EVs that are single positive, double positive, and triple positive for CD9, CD63, and CD81 has been reported (Fig. 
3 c, d) [ 21 , 22 ]. Recently, syntenin-1 was found to be the protein most commonly and consistently present in the proteomes of exosomes derived from different cell lines; it was also identified in exosomes recovered from various species and in exosomes from plasma, urine, breast milk, and saliva [ 23 ]. These results suggest that syntenin-1 could be used as a unique biomarker to distinguish exosomes purified from human biofluids from other EVs. Specific integrins expressed on exosomes recognize specific distant tissues/cells, and tumor-derived exosomes taken up by organ-specific cells prepare the pre-metastatic niche. Exosomes expressing integrins α6β4 and α6β1 bind to fibroblasts and epithelial cells in the lungs, governing tumor metastasis to the lung, while exosomes expressing integrin αvβ5 specifically bind to Kupffer cells, mediating liver metastasis [ 24 ]. Contents of EVs EVs were originally considered to serve the role of expelling unwanted cellular components. Indeed, it has been reported that they dispose of defective proteins, unnecessary proteins, and harmful DNA, thereby maintaining cellular homeostasis [ 25 , 26 ]. With the recent realization that EVs play a role in intercellular communication [ 27 ], there has been an explosive increase in reports on their roles in various disease states including cancer, analyses of the cargo of EVs released from various cells, and their reparative effects on damaged tissues. EVs contain mRNA, microRNA (miRNA, miR), non-coding RNA (ncRNA), proteins, and lipids [ 28 ], with some reports suggesting the inclusion of mitochondria [ 29 , 30 ]. While numerous studies report the presence of DNA in EVs, it has been noted that the DNA content within EVs is relatively low, and most detected DNA may be adhering to the surface of EVs or embedded in non-vesicular structures [ 5 , 31 ]. EVs contain only a small amount of miRNA, and even the most abundant miRNA is detected at an average of only one copy per 121 EVs [ 32 ]. In another study, specific viral miRNA in EVs from virus-infected cells was found at a frequency of only one copy per 300 to 16,000 EVs [ 33 ]. Even EVs derived from the same cell do not have constant contents but are heterogeneous. A proteomic analysis of EVs has revealed the diversity of contents across various subpopulations of EVs [ 34 ]. The heterogeneity of EV contents should be further explored and discussed in future single EV analyses. EV uptake and information transmission mechanisms EVs communicate with recipient cells through three primary mechanisms: (1) uptake of EVs by endocytosis, (2) signal transduction by receptor-ligand binding on the cell membrane, and (3) fusion of EVs with the recipient cell (Fig. 4 ). EVs that have reached the surface of the recipient cell membrane are taken up by clathrin-dependent endocytosis, caveolin-dependent endocytosis, lipid raft-dependent endocytosis, macropinocytosis, or phagocytosis [ 35 ]. After EVs are taken up into endosomes, little is known about how their contents are released into the cytoplasm. A recent study by Joshi et al. demonstrated that the EV membrane fuses with the endosome/lysosome membrane under acidic conditions and releases its contents into the cytoplasm [ 36 ]. While this study did not observe EVs fusing with the cell membrane and releasing their contents into the cytoplasm, such a mechanism cannot be ruled out. As another example, Polanco et al.
demonstrated a mechanism by which EVs containing tau protein, thought to be involved in Alzheimer’s disease, escape from endosomes. After EVs were taken up into endosomes, the degradation of endolysosomes increased the permeability of the endolysosomes, causing tau to leak into the cytoplasm and inducing tau aggregation [ 37 ]. Conversely, there are viewpoints challenging the efficiency of EVs in delivering cargo to the cytoplasm of recipient cells, suggesting that the cargo might not be functional or that the process is highly inefficient [ 33 , 38 ]. Approximately 50,000 EVs per cell were incubated for 4 h, but the fusion of EVs with the plasma membrane or endosomal membrane of recipient cells was either extremely low or not detected [ 33 ]. In experiments using EVs incorporated with the virus-derived fusion protein VSV-G (vesicular stomatitis virus glycoprotein), which dramatically increases the efficiency of cargo transport to the cytoplasm, approximately 100,000 EVs per cell were incubated for 24 h, yet no function of miRNA was observed [ 33 ]. This inefficiency might be attributed to the low copy number of miRNA present in EVs. Proteins such as β-lactamase reporter and tetracycline transactivator, which were overexpressed, were clearly observed to function only when transported by EVs incorporated with VSV-G [ 33 , 38 ]. These results suggest that a significantly larger amount of EVs or engineered EVs with improved membrane fusion capability may be required for effective cargo delivery. Additionally, instances of signal transduction independent of EV uptake have been documented. One of the earliest examples of signal transduction by receptor-ligand binding on the cell membrane involves EVs derived from B cells or dendritic cells that could present antigens to T cells and induce a specific antigenic response [ 39 , 40 ]. It was discovered that angiopoietin-2 on the surface of EVs binds to the Tie2 receptor on recipient cells and activates downstream signaling [ 41 ]. Laminin and fibronectin on EVs released from the inner cell mass (ICM) interact with integrins on the surface of the trophoblast, promoting trophoblast migration and embryo implantation [ 42 ]. The multifaceted roles of EVs in molecular dynamics and signaling There are several instances in which the dynamics of molecules change when they are transported by EVs, compared to when soluble factors or ligands exist individually. Regarding the distribution of morphogens, the widely accepted model of gradient formation by passive diffusion cannot explain the specificity to certain target cells, the dynamics of long-range distribution, and the formation of intracellular and extracellular gradients [ 43 ]. It has been shown that Hedgehog (Hh) is secreted in an ESCRT-dependent manner within EVs moving along cytonemes (a type of filopodia) to create a gradient within Drosophila tissues [ 44 , 45 ]. Additionally, it has been reported that Hh is transported long distances by EVs through cytonemes [ 44 ]. This mechanism produces a distribution of Hh different from that by passive diffusion. Some Wnts and Hhs undergo lipid modifications (palmitoylation or cholesterol modification) that are essential for signal transmission but can impair their free diffusion in the extracellular environment. Therefore, packaging in vesicles is required for the long-range action of lipid-modified morphogens [ 46 , 47 ].
Notch is a transmembrane protein; after binding to Delta on the surface of directly adjacent cells, its intracellular domain is cleaved and downstream signaling is activated. Alternatively, a model has been proposed in which Delta on the surface of EVs triggers the activation of Notch signaling in the recipient cell [ 48 , 49 ], suggesting the possibility of activating Notch signaling in distant cells without direct contact. Within the mouse embryo, rotating cilia create a fluid flow. By this fluid flow, EVs containing Sonic hedgehog and retinoic acid are transported to the left side of the embryo, influencing the determination of the left–right axis [ 50 ]. Synchronization of cell differentiation through EVs During development, cells coordinate their differentiation in a way that must align their fate determination and synchronize their differentiation stages with those of surrounding cells. While numerous soluble factors inducing differentiation have been reported, there are many instances where the cells producing these factors and the cells induced by them belong to distinct lineages. For example, during vasculogenesis in early development, vascular endothelial growth factor (VEGF) is a potent soluble factor that induces differentiation from mesoderm to vascular endothelial cells, yet its cell source is the endoderm [ 51 ]. Another example can be seen in chicken embryos, where bone morphogenetic protein (BMP) produced by the dorsal aorta prompts the differentiation of neural crest cells into adrenal medulla cells [ 52 ]. It has been challenging to explain the mechanism by which cells of the same lineage synchronize their differentiation with surrounding cells through soluble factors. Our recent research has unveiled a novel mechanism for how neighboring cells synchronize their phenotypes with each other, and this synchronization is mediated through EVs [ 53 ]. Our discovery centers on the synchronization of cells in differentiation, particularly focusing on the coordination of fate determination towards mesoderm and the synchronization of the differentiation progression. In order to prove this, it was necessary to create an intentional gap in the degree of differentiation progress. For this purpose, we used a method we previously reported, where we intentionally accelerated the differentiation of embryonic stem cells (ESCs) into mesodermal cells by activating Protein Kinase A (PKA) [ 54 ]. In the established ESC line (PKA-ESCs), we can express activated PKA in a drug-controlled manner (Tet-OFF). When we culture Control-ESCs alone, which have the same differentiation speed as the wild type, less than 20% of the cells become Flk1-positive mesoderm cells even by day 4.5. On the other hand, when we culture PKA-ESCs alone and activate PKA under doxycycline-free (Dox-) conditions, Flk1-positive mesoderm cells exceed 20% of total cells from day 2.5 of differentiation. When we create a mixed aggregate of PKA-ESCs and Control-ESCs and co-culture them under differentiation conditions, the differentiation of Control-ESCs accelerates to catch up with PKA-ESCs, reaching a mesoderm positivity rate of 40% at day 3.5. We consider that this phenomenon can be defined as ‘phenotypic synchronization of cells (PSyC)’ (Fig. 5 ) [ 53 ]. When we added an EV inhibitor (an inhibitor of nSMase2 essential for exosome synthesis), Manumycin A or GW4869, to the mixed aggregate of PKA-ESCs and Control-ESCs, only the differentiation of Control-ESCs was inhibited.
When we collected EVs from PKA-ESCs (PKA-ESC-EVs) and added them to Control-ESCs cultured alone, mesoderm differentiation was strongly promoted. When we added PKA-ESC-EVs to mouse embryos and performed ex vivo culture, beating cardiomyocytes, a mesodermal derivative, were induced. To analyze the functional molecules contained in PKA-ESC-EVs, we performed microRNA sequencing and identified miR-132 as a particularly potent candidate. We found that when artificial nanoparticles containing miR-132 were applied to cells, they induced differentiation into mesoderm. Moreover, when added to mouse embryos, they induced the differentiation of cardiomyocytes. These results demonstrate that it is possible to use the molecules inside EVs for cellular phenotypic synchronization. This synchronization was notably less efficient in a co-culture system using a transwell, which created a physical distance between PKA-ESCs and Control-ESCs. Also, when we labeled PKA-ESC-EVs with a fluorescence probe, we found that the efficiency of EVs reaching Control-ESCs was markedly lower in the transwell system compared to mixed aggregate and 2D co-culture. This could be because EVs are taken up by nearby cells almost immediately after release. From these observations, it was inferred that the delivery of EVs, especially the exchange of EVs between adjacent cells, is important for phenotypic synchronization. Currently, we are exploring a new mode of cellular communication, focusing on the direct vesicle exchanges between adjoining cells, primarily using live imaging. Another interesting finding is that when we added PKA-ESC-EVs, Control-ESCs differentiated into mesoderm; notably, the PDGFRα positivity rate increased with the concentration of EVs, while the Flk1 positivity rate tended to decrease. This suggested that EVs have the potential to fine-tune the orientation towards the axial mesoderm within the mesoderm. EVs have been found to contain tens of thousands of entities, including RNAs, ncRNAs, proteins, and more. Considering the additional presence of lipids, DNA fragments, surface ligands, and glycans, we believe that EVs can share high-order information that cannot be achieved by single molecules. Phenomena that imply the involvement of phenotypic synchronization in differentiation have been reported in various environments and cell types (Table 1 ). When EVs collected from differentiated NSPCs (neural stem progenitor cells) were added to proliferating NSPCs, differentiation was induced [ 55 ]. Mesenchymal stem cells (MSCs) received daily treatments for a week with EVs from the neural cell line PC12; after this treatment, the MSCs exhibited a neuron-like morphology, and the expression of neuronal marker genes and proteins increased [ 56 ]. In ex vivo experiments, the addition of EVs derived from corneal epithelial cells increased the expression levels of corneal epithelial markers, while the addition of EVs derived from conjunctival epithelial cells increased the expression levels of conjunctival epithelial markers [ 57 ]. When EVs derived from hair papilla cells were added to adipose-derived stem cells, the cells became more likely to acquire hair papilla-like characteristics [ 58 ]. The addition of macrophage-derived EVs to naive monocytes induced differentiation into macrophages [ 59 ]. When EVs derived from ESCs were supplied to Müller cells, these cells changed to a de-differentiated precursor cell phenotype [ 60 ].
Cardiac-derived EVs have been shown to enhance the expression of specific cardiac-associated genes, namely GATA-binding protein 4 (GATA4), T-box transcription factor (Tbx5), NK-2 transcription factor related, locus 5 (Nkx2.5), and cardiac troponin T (cTnT), within human mesenchymal stem cells (hMSC) [ 61 ]. Utilizing EVs extracted from embryonic stem cells (ESCs) undergoing cardiac differentiation has facilitated the direct reprogramming of fibroblasts into induced cardiomyocyte-like cells, with success rates above 60% [ 62 ]. Synchronization and maintenance of cellular phenotypes via EVs We believe that EVs contribute not only to differentiation but also to the maintenance of cellular homeostasis, mainly intrinsic cellular properties, within tissues. Each cell constantly exchanges information with its surroundings. One example is the complementation of missing molecules. When endothelial cells with a knocked-out gene were cultured with adipocytes, it was observed that mRNA of the knocked-out gene was supplied from adipocytes to endothelial cells, compensating for the deficiency [ 63 ]. There have also been reports that EVs from undifferentiated cells improve the quality of other undifferentiated cells [ 64 ]; for example, in the co-culture of porcine parthenogenetic embryos and cloned (nuclear transfer) embryos, mRNA of pluripotency genes was delivered via EVs, improving the in vitro development of the cloned embryos [ 65 ]. We hypothesize that there may be diseases that arise from the breakdown of mechanisms to maintain such homeostasis. Future perspectives Numerous co-culture experiments of different cell types have been conducted so far [ 66 – 68 ], and while changes in cellular phenotypes have been observed, the molecular mechanisms are complicated and many remain unclear. Not only between different cell types but also between the same cell types, there should be a great deal of intercellular communication via EVs, including mechanisms like synchronization, even if they are not immediately apparent. We expect the molecular mechanisms contributed by EVs to become increasingly clear in the coming years. Furthermore, recent years have seen rapid progress in research on organoids, three-dimensional structures composed of multiple types of cells, and assembloids, which connect organoids from different brain regions [ 69 ]. We anticipate that such research will enable the modeling of complex intercellular interactions, further deepening our understanding of intercellular communication via EVs in vivo. The roles and importance of vesicle-mediated intercellular information transfer are expected to gain further validation in the near future.
Abbreviations BMP: Bone morphogenetic protein; Cer: Ceramide; cTnT: Cardiac troponin T; Dox: Doxycycline; EGFR: Epidermal growth factor receptor; ESC: Embryonic stem cell; ESCRT: Endosomal sorting complexes required for transport; EV: Extracellular vesicle; GATA4: GATA-binding protein 4; Hh: Hedgehog; ICM: Inner cell mass; ILV: Intraluminal vesicle; ISEV: International Society for Extracellular Vesicles; miRNA: MicroRNA; MISEV: Minimal information for studies of extracellular vesicles; MSC: Mesenchymal stem cell; MVB: Multivesicular bodies; ncRNA: Non-coding RNA; Nkx2.5: NK-2 transcription factor related, locus 5; nSMase2: Neutral sphingomyelinase 2; NK: Natural killer cell; NSPC: Neural stem progenitor cell; PKA: Protein kinase A; PSyC: Phenotypic synchronization of cells; S1P: Sphingosine-1-phosphate; SM: Sphingomyelin; Sph: Sphingosine; Tbx: T-box transcription factor; VEGF: Vascular endothelial growth factor; VSV-G: Vesicular stomatitis virus glycoprotein Acknowledgements Not applicable. Authors’ contributions TM wrote the manuscript and prepared the figures. JKY revised the manuscript. Both authors have read and approved the final manuscript. Funding Not applicable. Availability of data and materials Not applicable. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare that TM and JKY have received funding from Takara Bio Inc. This does not alter the authors’ adherence to all the policies of Inflammation and Regeneration on sharing data and materials.
CC BY
no
2024-01-16 23:45:35
Inflamm Regen. 2024 Jan 15; 44:4
oa_package/14/37/PMC10789073.tar.gz
PMC10789074
0
Introduction Engineered nanomaterials are a broad class of materials developed to have at least one dimension between 1 and 100 nm, and offer unique, size-dependent properties not exhibited by their bulk counterparts [ 1 ]. The global nanomaterials market size was valued at USD 10.88 billion in 2022 and is expected to grow at a compound annual growth rate (CAGR) of 14.8% from 2023 to 2030, among which titanium-based nanomaterials and carbon nanotubes are the most widely used [ 2 ]. Moreover, with the explosive global production and sales of new electric vehicles, cobalt use will continue a bullish trend with an expected CAGR of at least 30% by 2025 [ 3 ]. With the increasing application of engineered nanomaterials, their dissemination into the environment may adversely affect human health, including through impairment of the central nervous system. Cobalt nanoparticles (CoNPs), titanium dioxide nanoparticles (TiO 2 NPs), and multi-walled carbon nanotubes (MWCNTs) are widely designed and manufactured in biomedicine, electronics, energy storage, textiles, and cosmetics, as well as high-performance intermediates such as coatings and composites for aerospace, automobiles, and construction [ 4 ]. CoNPs have been applied in pigments, catalysis, sensors, electrochemistry, magnetism, and energy storage owing to their unique physical properties [ 5 ]; TiO 2 NPs have been applied in nanodermatology and nanocosmetology [ 6 ]; and MWCNTs have been widely used in the medical field as carriers of drug delivery [ 7 ]. However, the toxicity of nanomaterials is largely dependent on their biophysical properties, including their size, surface charge, and aggregation state [ 8 ]. Therefore, it is necessary to compare the toxicity of different nanomaterials to understand how their physicochemical properties influence toxicity. Recent studies have shown that oxidative stress caused by nanomaterials results in excessive ROS production [ 9 ]. Nanomaterials can produce ROS by one-electron oxidative reactions with transition metal or nanomaterial surface groups [ 10 , 11 ], or can directly impair mitochondria structure and function [ 12 , 13 ]. The ROS induced by nanomaterials activates numerous signaling pathways, which may damage cell membranes, intracellular organelles, and nucleic acids, eventually leading to cell apoptosis or necrosis [ 9 ]. Interestingly, the body does not remain passive in the face of oxidative stress. For example, astrocytes produce functional extracellular mitochondria that support neuronal viability after stroke [ 14 ]. Furthermore, our previous study demonstrated that astrocyte-derived mitochondria can be transferred to neurons via tunneling nanotubes (TNTs) to fight CoNPs-induced neurotoxicity [ 15 ]. TNTs are characterized by their enrichment in F-actin (with few microtubules) and lack of attachment to the extracellular substrate [ 16 ]. TNTs can transfer many cargoes, such as mitochondria [ 17 ], lysosomes [ 18 ], and even pathological proteins (tau [ 19 ], alpha-synuclein [ 20 ]). Among the substances transferred by TNTs, mitochondria are the most important organelle, as they can rescue energy production malfunction induced by toxicants [ 21 ]. However, whether this intercellular protection strategy via TNTs is common and universal among different engineered nanomaterials and the underlying mechanisms regulating TNTs formation remain unknown. A growing body of evidence has demonstrated that ROS is a major mechanism regulating TNTs formation [ 22 ].
As described above, ROS is the main product after nanomaterials exposure. However, the link between ROS production induced by engineered nanomaterials and TNTs formation has not yet been studied. In this study, we aim to explore and compare whether different types of nanomaterials can induce TNTs formation (to the same degree), and investigate the potential role of ROS in TNTs formation and downstream molecular signaling pathways in response to various engineered nanomaterials. We hypothesized that engineered nanomaterials exposure induces cellular ROS and mitochondrial ROS production, which activates the downstream PI3K/AKT pathway, leading to the formation of TNTs. TNTs formation is an intercellular protective strategy that transfers functional mitochondria to counteract nanomaterials-induced neurotoxicity. Thus, we investigated the toxic effects and TNTs formation of three types of engineered nanomaterials (CoNPs, TiO 2 NPs and MWCNTs) using mouse primary astrocytes and human glioblastoma U251 cells. The properties of the three nanomaterials were fully characterized before the experiments. First, flow cytometry and high-content analysis were used to detect when and to what extent the nanomaterials entered the cells. The ability of the three nanomaterials to increase ROS/mtROS levels and induce cytotoxicity was examined. In addition, high-content dynamic observations and immunofluorescence were conducted to study the influence of nanomaterials on TNTs formation. Moreover, NAC (N-acetylcysteine, a ROS scavenger) and MitoQ (mitoquinone, an antioxidant targeting mtROS) were used to explore the role of ROS in nanomaterials-induced TNTs formation. Finally, we explored the involvement of the PI3K/AKT/mTOR pathway in the nanomaterial-induced TNTs formation and mitochondrial transfer using various chemical inhibitors (Graphical abstract).
Methods Characterization of CoNPs, TiO 2 NPs and MWCNTs Cobalt nanoparticles (CoNPs, Cobalt–carbon-coated magnetic, nanopowder, ≥ 99%, Product number 697745, Batch Number MKCL5254), Titanium dioxide nanomaterials (TiO 2 NPs, anatase, nanopowder, ≥ 99.7%, Product Number 637254, Batch Number MKCK4358) and multi-walled carbon nanotubes (MWCNTs nanopowder, ≥ 98%, Product number 698849, Batch number MKBH5811V) were purchased from Sigma-Aldrich (USA). The nanomaterials were reconstituted with ddH 2 O and culture medium prior to characterization. The particle size was examined using a Tecnai G2 F30 field emission transmission electron microscope (TEM) (FEI, USA) and quantified using Nano Measure 1.2. Dynamic light scattering (DLS), surface zeta potential, and polydispersity measurements were carried out on a Malvern Zetasizer Nano ZS instrument (Zetasizer Nano-ZS90, Malvern, UK). Preparation of nanomaterials To prepare stock solutions, CoNPs, TiO 2 NPs and MWCNTs were each diluted with ddH 2 O to a final concentration of 1 mg/mL in 1.5 mL microtubes. Before applying them to the cells, the solutions were sonicated in a bath-type sonicator (KQ-500E, Kunshan Ultrasonic Instruments Co., LTD., China) for 10 min and shaken every three minutes. Then, to reach a specific working concentration (such as 30 μg/mL), 60 μL of stock solution was added to 1940 μL of 1640 medium without fetal bovine serum (FBS) (FCS500, Excell, Shanghai, China), giving a final concentration of 30 μg/mL (working solution). The working solution was then used to culture cells. The same volume of water was added to the control group in all experiments as a solvent control. Cell culture and nanomaterial exposure U251 human glioma cells were purchased from the State Key Laboratory of Genetic Resources and Evolution (Yunnan, China). U251 cells are a commonly used in vitro model to study neurotoxicity [ 23 , 24 ], and are also widely utilized in studying TNTs formation and mitochondrial transfer [ 25 – 27 ]. Furthermore, we also examined TNTs formation in the human neuroblastoma cell line SH-SY5Y. Compared to SH-SY5Y cells, U251 cells exhibited a greater capability for TNTs formation under physiological conditions (Additional file 1 : Fig. S1). Therefore, U251 cells were used to elucidate the mechanism underlying TNTs formation in depth. U251 cells were cultured in 1640 medium (BL303A, Biosharp, Anhui, China) supplemented with 10% FBS and 100 units/mL of penicillin–streptomycin. Cells were cultured at 37 °C as monolayers in a humidified atmosphere containing 5% CO 2 . When cell density reached 70–80% confluency, the medium was changed to 1640 without FBS. Cells were then treated with various concentrations of CoNPs, TiO 2 NPs and MWCNTs for 24 h for subsequent measurement. To select the appropriate nanomaterials concentration, we measured the viability of U251 cells by exposing them to a series of nanomaterials concentrations. We selected the concentration with a similar degree of cell damage (30 μg/mL) across tested nanomaterials as the exposure concentration for the following study (Additional file 1 : Fig. S2). The treatment of ROS scavengers and inhibitors To scavenge ROS or mtROS, U251 cells were pretreated with 10 mM NAC (HY-B0215, MedChemExpress, New Jersey, USA) or 0.2 μM MitoQ (HY-100116A, MedChemExpress, New Jersey, USA) for 30 min prior to nanomaterials exposure.
To inhibit the release of extracellular vesicles, U251 cells were pretreated with 10 μM GW4869 (HY-19363, MedChemExpress, New Jersey, USA) for 30 min prior to nanomaterials exposure. To inhibit TNTs formation, cells were pretreated with 1 μM Latrunculin B (LAT-B) (HY-101848, MedChemExpress, New Jersey, USA) for 30 min prior to nanomaterials exposure. To inhibit PI3K, U251 cells were pretreated with 10 μM LY294002 (HY-10108, MedChemExpress, New Jersey, USA) for 30 min prior to nanomaterials treatment. To inhibit AKT, U251 cells were pretreated with 10 μM Perifosine (HY-50909, MedChemExpress, New Jersey, USA) for 30 min prior to nanomaterials exposure. To inhibit mTOR, U251 cells were pretreated with 25 nM Rapamycin (HY-10219, MedChemExpress, New Jersey, USA) for 30 min prior to nanomaterials exposure. Primary astrocyte culture and exposure Mice were housed in stainless steel cages in a ventilated animal facility at 22 ± 2 °C and 50 ± 10% relative humidity under a 12 h light/dark cycle and fed sterilized food and distilled water. All mice were humanely treated throughout the experimental period. Newborn C57BL/6 mouse pups (within 24 h of birth) were euthanized by carbon dioxide inhalation. The cortex was dissected, and the meninges and blood vessels were removed in Hank's balanced salt solution (H1045, Solarbio, Beijing, China). Next, the minced cortex was transferred to F12 medium (BL305A, Anhui, China) containing 0.25% trypsin (25200056, Thermo Fisher Scientific, Massachusetts, USA) and digested at 37 °C for 30 min. After centrifugation and resuspension, mixed glial cells were plated in a T-25 flask (156367, Thermo Fisher Scientific, Massachusetts, USA) coated with poly-lysine and cultured in DMEM medium (11965092, Thermo Fisher Scientific, Massachusetts, USA) containing 10% FBS. The cells were cultured at 37 °C in an atmosphere of 5% CO 2 and 95% air. The culture medium was replaced 24 h after plating and every two days thereafter. After 7–10 days, the cultures were shaken at 250 rpm for 14 h at 37 °C to remove unwanted cells, including microglia, neurons, and fibroblasts. Astrocytes were digested with 0.25% trypsin at 37 °C for 5 min and seeded in 12-well plates for the following measurements. Immunofluorescence was used to validate the purity of the primary astrocytes (PA). Briefly, 4% w/v paraformaldehyde was added to the 12-well plates and incubated at 4 °C for 15 min. The cells were permeabilized for 15 min with 0.15% Triton X-100 (ST795, Beyotime, Shanghai, China) in phosphate-buffered saline (PBS) (C0221A, Beyotime, Shanghai, China) and blocked with 10% normal goat serum (C0265, Beyotime, Shanghai, China) for 1 h at room temperature (RT). For GFAP staining, PA were incubated with anti-GFAP antibody (1:500) (Ab7260, Abcam, Cambridge, England) at 4 °C overnight. An Alexa Fluor 488-conjugated secondary antibody was applied at RT for 1 h, and 1 μg/mL DAPI (C1002, Beyotime, Shanghai, China) was used for nuclear staining. The purity of PA (%) = GFAP-positive cells/DAPI-positive cells × 100%. In total, 150 cells were counted in each well. The purity of PA was over 95% (Additional file 1 : Fig. S3). When PA density reached 70–80% confluence, the medium was changed to DMEM without FBS. The PA were then exposed to CoNPs, TiO 2 NPs and MWCNTs for 24 h before the next measurement. The cell types used in each experiment are shown in Fig. 1 . Cell viability assessment U251 cells were seeded in a 96-well plate at a density of 5 × 10 3 cells per well in 100 μL of medium and exposed to nanomaterials for 24 h.
Then 10 μL of CCK8 reagent (C0037, Beyotime, Shanghai, China) was added to each well and incubated at 37 °C for 1 h. A microplate reader (Multiskan FC, Thermo Fisher Scientific, Waltham, MA, USA) was used to measure the absorbance (A) at 450 nm. Six parallel wells were set up for each group, and the mean values were obtained. Cell survival rate was calculated using the formula: cell survival rate (%) = (absorbance of the experimental group/absorbance of the control group) × 100%. High content screening system (HCS) PA and U251 cells were seeded in 24-well plates at a density of 2 × 10 3 cells per well. After nanomaterials exposure, the plate was observed using a high content screening system (PerkinElmer, Massachusetts, USA) for 24 h, and images were captured every 15 min. Quantification of TNTs and mitochondrial transfer Using a TCS SP5 confocal microscope (Leica, Wetzlar, Germany), fields of sub-confluent cells were randomly selected with a 20× objective. At least ten images were obtained for each experimental group. The number of TNTs per one hundred cells was calculated in U251 cells using ImageJ. At least fifty TNTs were imaged in each group. The percentage of TNTs containing mitochondria was quantified in each field. ATP measurement U251 cells were seeded in 12-well plates at a density of 1 × 10 5 for 24 h and transfected with the pCMV-Mito-AT1.03 plasmid (D2606, Beyotime, Shanghai, China) using Lipo8000 (C0533, Beyotime, Shanghai, China) according to the manufacturer's instructions. Afterwards, the transfected cells were exposed to nanomaterials. Images were captured using a fluorescence microscope, and ATP intensity was quantified using ImageJ 2.1. Nanomaterials uptake The uptake of nanomaterials was assessed using flow cytometry following the method reported by Suzuki et al. [ 28 ]. U251 cells treated with nanomaterials were washed three times with PBS to remove free particles. The cells were resuspended in DMEM, and the uptake of particles was analyzed by flow cytometry (FACSCanto II, Becton Dickinson, Franklin Lakes, USA). The sample profile was obtained by examining forward-scattered light (FSC) and side-scattered light (SSC). As each cell intercepts the path of the laser beam, the light that passes around the cell is measured as the FSC, indicating cell size. The light scattered at a 90° angle to the axis of the laser beam is measured as the SSC and is related to intracellular density. Thus, changes in cellular SSC after treatment with nanomaterials can be attributed to their uptake. Transmission electron microscope (TEM) U251 cells were seeded in 10-cm dishes at a density of 1 × 10 6 and exposed to nanomaterials for 24 h. After digestion with trypsin, cells were pelleted by centrifugation at 500 × g for 5 min. Subsequently, cells were fixed in 2.5% glutaraldehyde (P1126, Solarbio, Beijing, China) (diluted in 0.1 M PBS; pH 7.4) at 4 °C for 24 h and then post-fixed in 1% osmium tetroxide (201030, Merck, New Jersey, USA) (dissolved in PBS; pH 7.4) at 25 °C for 60 min. After dehydration through a graded ethanol series (30%, 50%, 80%, 90%, 100%), samples were embedded in resin (45347, Merck, New Jersey, USA) polymerized sequentially at 37 °C for 12 h, 45 °C for 12 h, and 60 °C for 12 h. After ultrathin sectioning (50 nm thickness), the sections were stained with uranyl acetate at RT for 60 min and with lead citrate (15326, Merck, New Jersey, USA) at RT for 8 min.
Digital images were captured using TEM (FEI Tecnai G2 F30; Thermo Fisher Scientific, Inc.). Detection of mitochondrial reactive oxygen species (mtROS) and reactive oxygen species (ROS) The mtROS and ROS levels in treated cells were measured using MitoSOX (M36009, Invitrogen, Carlsbad, USA) and DCFH-DA dye staining (S0033S, Beyotime, Shanghai, China), respectively. Briefly, U251 cells were exposed to nanomaterials for 24 h and then incubated with 0.5 μM MitoSOX or 1 μM DCFH-DA for 30 min at 37 °C. Finally, the mtROS and ROS levels were measured using a fluorescence microscope (DMi8, Leica, Germany) at wavelengths of Ex/Em = 530 nm/562–588 nm and Ex/Em = 488 nm/515–545 nm, respectively. To exclude possible interference of the nanomaterials' autofluorescence with DCFH-DA and MitoSOX, we examined the emission and excitation wavelengths of the dyes and nanomaterials (methods and results in Additional file 1 : Fig. S4A–D). In summary, nanomaterials' autofluorescence did not interfere with the results of the DCFH-DA and MitoSOX probes. Measurement of mitochondrial membrane potential (MMP) Cellular MMP was detected with a JC-1 probe (C2005, Beyotime, Shanghai, China). Briefly, U251 cells were exposed to nanomaterials for 24 h and then incubated with 1 μM JC-1 probe for 30 min at 37 °C. Finally, MMP was measured by fluorescence microscopy and quantified with ImageJ as described previously [ 29 ]. Western blot Exposed U251 cells were washed three times with cold PBS, collected, and lysed with 120 μL ice-cold RIPA lysis buffer (P0013D, Beyotime, Shanghai, China). Afterwards, cell-free supernatants were obtained by centrifugation of the lysates at 12,000 × g for 25 min at 4 °C. Sodium dodecyl sulfate (SDS) loading buffer was added to each supernatant, and samples were boiled for 10 min to generate SDS-PAGE samples. Samples (15 μg protein) were electrophoresed on a 10% SDS polyacrylamide gel. Proteins were transferred onto a polyvinylidene fluoride membrane. After blocking the membrane with 5% nonfat milk in Tris-buffered saline containing 0.1% Tween-20 (TBST) (ST671, Beyotime, Shanghai, China) for 1 h at 25 °C, the blots were incubated with the primary antibodies of interest overnight at 4 °C. After washing with TBST five times, the blots were incubated with a peroxidase-conjugated secondary antibody. Antibody binding was detected by chemiluminescent staining using an ECL detection kit (RPN2235, Amersham, USA). The grayscale of the protein bands was analyzed using ImageJ software. Primary antibodies were used at the following dilutions: p-mTOR (1:2000, AF5869, Beyotime, China), mTOR (1:2000, AF1648, Beyotime, China), P110 (1:2000, AF1966, Beyotime, China), P85 (1:2000, AF7742, Beyotime, China), p-PI3K (1:2000, AF5905, Beyotime, China), AKT (1:2000, AA326, Beyotime, China), p-AKT (1:2000, AF1546, Beyotime, China), β-actin (1:3000, 81115-1-RR, Proteintech, China), and anti-rabbit peroxidase-conjugated secondary antibody (1:10,000, A16110, ThermoFisher, USA). Statistical analysis Data were analyzed using SPSS software (version 19.0, IBM Corporation, Armonk, NY, USA). One-way analysis of variance (ANOVA) was used for multiple comparisons. Data with heterogeneous variance were analyzed using the Kruskal–Wallis nonparametric test across exposure groups. A P value < 0.05 was considered statistically significant. All experiments were performed in at least three independent replicates unless otherwise specified.
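To make the viability formula and the statistical workflow above concrete, the following sketch (Python, using numpy and scipy) computes CCK8-based survival rates and then chooses between one-way ANOVA and the Kruskal–Wallis test according to variance homogeneity. The absorbance values are invented placeholders, not the study's data, and Levene's test is an assumption introduced here for the homogeneity check, since the original text does not name the test used.

```python
import numpy as np
from scipy import stats

# Hypothetical A450 readings from six parallel wells per group (placeholders)
groups = {
    "control": np.array([1.10, 1.05, 1.12, 1.08, 1.11, 1.07]),
    "CoNPs":   np.array([0.62, 0.58, 0.65, 0.60, 0.63, 0.59]),
    "TiO2NPs": np.array([0.66, 0.70, 0.64, 0.68, 0.67, 0.65]),
    "MWCNTs":  np.array([0.71, 0.74, 0.69, 0.72, 0.70, 0.73]),
}

# Cell survival rate (%) = A(experimental) / A(control) * 100, as in the Methods
ctrl_mean = groups["control"].mean()
for name, a in groups.items():
    print(f"{name}: survival = {a.mean() / ctrl_mean * 100:.1f}%")

# One-way ANOVA, switching to Kruskal-Wallis when variances are heterogeneous;
# Levene's test is one plausible homogeneity check (an assumption here)
samples = list(groups.values())
if stats.levene(*samples).pvalue > 0.05:
    result = stats.f_oneway(*samples)
else:
    result = stats.kruskal(*samples)
print(f"p = {result.pvalue:.4g} (significant if < 0.05)")
```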
Results Characterization of nanomaterials First, the properties of the three nanomaterials were characterized. The physical properties of CoNPs were described in our previously published work [ 15 ]. The purity of TiO 2 NPs and MWCNTs was over 98%, and the endotoxin level was below the detection limit (0.01 EU/mL) at a concentration of 1 mg/mL, much higher than the concentrations administered (Additional file 1 : Fig. S5A, B). As shown in the TEM and SEM results (Fig. 2 ), TiO 2 NPs were generally elongated cylinders with a diameter of 36.43 nm (16.44–52.33 nm), while MWCNTs were long, tubular structures with a diameter of 22.12 nm (10.14–36.97 nm). The Z-average, polydispersity and zeta potential of the nanomaterials in both water and medium are shown in Table 1 . In brief, polydispersity changed little between water and medium for all nanomaterials. Nanomaterials promote TNTs formation and mitochondrial transfer First, we used mouse primary cortical astrocytes (PA) to examine TNTs formation and mitochondrial transfer after nanomaterial exposure. We recently reported that in response to CoNPs exposure, astrocytes transfer functional mitochondria to damaged neurons via TNTs [ 15 ]. Thus, in this study, DiD, a cell membrane dye, was used to label TNTs, and MitoTracker Red was used to label mitochondria. A high content screening system (HCS) was used to dynamically observe TNTs formation and mitochondrial transfer continuously for 24 h, with images captured every 15 min. As observed by HCS, PA come into close contact and undergo membrane fusion, then migrate away from each other, drawing out membrane tethers that lead to the formation of TNTs. This process is recognized as “cell dislodgment” [ 30 ]. Simultaneously, vesicle transfer occurred actively in TNTs (Fig. 3 A). The speed of vesicles was 0.81 μm/min in the control group (Additional file 2 : Video S1), 0.51 μm/min in the CoNPs group (Additional file 3 : Video S2), 0.34 μm/min in the TiO 2 NPs group (Additional file 4 : Video S3), and 0.56 μm/min in the MWCNTs group (Additional file 5 : Video S4). Mitochondrial transfer via TNTs was also observed (Fig. 3 B). The rate of mitochondrial transfer in the control group was 0.92 μm/min (Additional file 6 : Video S5). In contrast, mitochondrial transfer was slower after nanomaterial exposure: 0.50 μm/min in the CoNPs group (Additional file 7 : Video S6), 0.45 μm/min in the TiO 2 NPs group (Additional file 8 : Video S7) and 0.57 μm/min in the MWCNTs group (Additional file 9 : Video S8). Afterwards, the human glioblastoma cell line U251 was selected to investigate the mechanism of TNTs formation because it is a widely used glial cell model to study neurotoxicity [ 23 , 24 ] and TNTs formation in the central nervous system (CNS) [ 25 , 26 ]. U251 cells were exposed to the nanomaterials for 24 h, followed by HCS examination to elucidate the mechanism of TNTs formation. Thin membranous bridges connecting two cells were observed in the bright-field images, suggesting the formation of TNTs-like structures in U251 cells (Fig. 3 C). One of the most important characteristics of TNTs is the enrichment of F-actin (with or without microtubules) and non-attachment to the extracellular substrate [ 16 ]. As demonstrated, the TNTs induced by nanomaterials between U251 cells consisted of F-actin and microtubules (Fig. 3 D). Furthermore, as demonstrated by 3D reconstruction, the observed TNTs were not attached to the extracellular substrate (Fig. 3 E).
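The vesicle and mitochondrial transfer speeds quoted above follow from displacement-over-time measurements on the HCS time-lapse stacks (one frame every 15 min). A minimal sketch of that calculation is shown below; the track coordinates are hypothetical and only illustrate the arithmetic, not the study's tracking software.

```python
import numpy as np

FRAME_INTERVAL_MIN = 15.0  # HCS frames were captured every 15 min

def mean_speed_um_per_min(track_xy: np.ndarray) -> float:
    """Mean speed of a tracked cargo moving along a TNT.
    track_xy: (n_frames, 2) array of centroid coordinates in micrometres."""
    step_lengths = np.linalg.norm(np.diff(track_xy, axis=0), axis=1)  # um per frame
    return step_lengths.sum() / (FRAME_INTERVAL_MIN * (len(track_xy) - 1))

# Hypothetical centroid track of one DiD-labeled vesicle over four frames
track = np.array([[0.0, 0.0], [7.5, 2.0], [14.8, 4.1], [22.0, 6.0]])
print(f"mean speed: {mean_speed_um_per_min(track):.2f} um/min")  # ~0.51 um/min
```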
In addition, these data indicated that U251 cells can serve as a model to study TNTs formation in the CNS. Finally, we investigated whether there was a difference in the number of TNTs stimulated by the three types of nanomaterials. Quantitative analyses revealed that the percentage of TNTs significantly increased upon nanomaterial exposure in U251 cells (Fig. 3 F). The number of TNTs stimulated by CoNPs appeared to be the highest, followed by TiO 2 NPs and MWCNTs. In addition, mitochondrial transfer via TNTs also increased, suggesting a protective role of TNTs formation in U251 cells (Fig. 3 G). All nanomaterials induced mitochondrial transfer, consistent with the increased TNTs. Finally, to investigate the function of TNTs and mitochondrial transfer upon nanomaterials exposure, LAT-B, a specific TNTs inhibitor, and GW4869, an extracellular vesicle inhibitor, were utilized in the co-culture system. Nanomaterials exposure reduced the ATP level in U251 cells. GW4869 did not influence ATP levels, whereas LAT-B exacerbated the ATP reduction induced by nanomaterials (Fig. 3 H, Additional file 1 : Fig. S6A). Simultaneously, the apoptosis of U251 cells was also examined; LAT-B further aggravated nanomaterial-induced apoptosis, whereas GW4869 had no effect on apoptosis upon nanomaterials exposure (Fig. 3 I, Additional file 1 : Fig. S6B). Together, these results confirm that nanomaterials induce TNTs formation and mitochondrial transfer via TNTs, but not extracellular vesicles (EVs), in both PA and U251 cells. However, the number of TNTs induced differed among nanomaterials, and the potential mechanism(s) of nanomaterials-induced TNTs formation required further investigation. Nanomaterials enter U251 cells and induce neurotoxicity A growing body of evidence demonstrates that TNTs formation is associated with environmental stressors, such as ischemia, stroke, and hypoxia. Impaired cells actively extend protrusions towards “healthy” cells to form TNTs. Therefore, we proposed that the differences in TNTs numbers induced by nanomaterials were due to the different degrees of damage they caused. First, the uptake of nanomaterials by U251 cells was examined by flow cytometry. Consistent with the dispersion and polydispersity of the three types of nanomaterials (Fig. 3 C and Table 1 ), TiO 2 NPs were taken up the most, with the SSC pattern becoming markedly discrete (90% of SSC). CoNPs were next, with only 0.751 dispersion in SSC, followed by MWCNTs, which showed almost no change in SSC (Fig. 4 A, B). In addition, TEM was utilized to further verify the uptake of nanomaterials by U251 cells. In line with the flow cytometry results, all nanomaterials could enter U251 cells. CoNPs accumulated around the nucleus, with some entering it. In contrast, TiO 2 NPs were mostly enclosed by membrane structures around the nucleus, while MWCNTs appeared within cavity-like structures (Fig. 4 C). Consistent with the flow cytometry results, HCS also demonstrated that U251 cells began taking up nanomaterials at 1.5 h and reached peak uptake at 6 h (Additional file 10 : Video S9). Next, ROS levels in U251 cells were measured using DCFH-DA. All nanomaterials promoted ROS generation compared with the control group. Although TiO 2 NPs were the most absorbed by U251 cells, the ROS production they induced was second to that induced by CoNPs. The ROS levels induced by MWCNTs were the lowest (Fig. 4 D, E). At the same time, mtROS was detected via the MitoSOX probe in U251 cells.
In contrast to the above results for ROS, the mtROS induced by TiO 2 NPs was the highest, followed by CoNPs and MWCNTs (Fig. 4 F, G). Finally, the functional status of mitochondria in U251 cells was examined by measuring the mitochondrial membrane potential (MMP) with the JC-1 probe. MWCNTs decreased the MMP of U251 cells most significantly, followed by TiO 2 NPs. CoNPs caused the least MMP reduction, which was nevertheless greater than that in the control group, indicating damage to mitochondria (Fig. 4 H, I). In summary, different types of nanomaterials cause varying degrees and types of damage. CoNPs caused a significant increase in ROS levels, TiO 2 NPs mainly increased mtROS levels, and the toxicity induced by MWCNTs was the lowest, consistent with the lowest amount of cellular uptake. ROS/mtROS increases TNTs formation and mitochondrial transfer after nanomaterials exposure We then investigated whether the generation of ROS/mtROS is key to nanomaterial-induced TNTs formation. Because ROS/mtROS generation is a major mechanism of nanotoxicity, NAC, a ROS scavenger [ 31 ], and MitoQ, a mitochondrial antioxidant [ 32 ], were used to pretreat U251 cells before nanomaterial exposure. NAC and MitoQ rescued the nanomaterial-induced decrease in U251 cell viability (Fig. 5 A). Simultaneously, both NAC and MitoQ reduced the levels of ROS (Fig. 5 B, D) and mtROS (Fig. 5 C, F) induced by nanomaterials in U251 cells. To exclude potential false-positive ROS results, the ROS positive control Rosup was used in U251 cells. Rosup significantly increased ROS levels compared with nanomaterials exposure, an increase that was abolished by NAC and MitoQ pretreatment, indicating that NAC and MitoQ could indeed reduce ROS levels (Additional file 1 : Fig. S7A, B). In addition, the reduction in MMP in U251 cells was reversed by both NAC and MitoQ pretreatment (Fig. 5 F, G). In brief, we demonstrated that reducing ROS/mtROS reversed nanomaterials-induced cellular and mitochondrial toxicity. We then examined the relationships between ROS/mtROS and TNTs formation and mitochondrial transfer. Interestingly, NAC was more capable of eliminating ROS/mtROS than MitoQ in U251 cells. Nevertheless, the abilities of NAC and MitoQ to reduce TNTs numbers were similar (Fig. 6 A, B). Moreover, mitochondrial transfer was also reduced after NAC and MitoQ pretreatment of U251 cells (Fig. 6 C). Combined with the results shown in Fig. 4 , these findings indicate that TNTs formation is strongly related to ROS/mtROS levels but not closely related to mitochondrial damage. CoNPs induced the highest levels of ROS, and TiO 2 NPs induced the highest levels of mtROS in U251 cells. Although MWCNTs induced the largest decrease in MMP, the number of TNTs induced by MWCNTs was the lowest (while still significantly higher than that of the control group). In summary, ROS/mtROS induction is the major mechanism by which nanomaterials promote TNTs development, and it can be abolished by the ROS scavengers NAC and MitoQ. ROS/mtROS regulates TNTs formation via the PI3K/AKT/mTOR pathway following nanomaterials exposure ROS/mtROS regulated TNTs formation, as demonstrated above (Fig. 6 ); however, the specific mechanism remained obscure. The PI3K/AKT pathway is critical in metabolism, proliferation, cell survival, and angiogenesis in response to extracellular signals, including nanomaterials-induced cytotoxicity [ 33 ]. At the same time, the PI3K/AKT pathway participates in TNTs formation upon H 2 O 2 exposure [ 34 ].
PI3K promotes the re-localization of AKT to the plasma membrane, where AKT is phosphorylated for full activation [ 35 ], which may act on TNTs formation. However, whether this pathway mediates nanomaterial-induced TNTs formation remained unclear. We first found that the three types of nanomaterials increased the expression of P110α and P85β (two PI3K subunits) in U251 cells. At the same time, the three nanomaterials increased AKT and phosphorylated AKT levels in U251 cells. These results indicated that the PI3K/AKT pathway was activated under nanomaterial exposure. Because mTOR is a common downstream effector of the PI3K/AKT pathway, total mTOR and phosphorylated mTOR (p-mTOR) expression was examined in U251 cells after nanomaterials exposure. While total mTOR protein did not change, p-mTOR increased after exposure, indicating that mTOR was activated by all three types of nanomaterials in U251 cells (Fig. 7 A–H). However, the causal relationship between PI3K/AKT/mTOR activation and TNTs formation in response to nanomaterials exposure required further investigation. Hence, the PI3K inhibitor LY294002, the AKT inhibitor Perifosine and the mTOR inhibitor Rapamycin were utilized. In U251 cells, all the inhibitors significantly reduced the numbers of TNTs induced by nanomaterials, with Rapamycin being the most potent inhibitor of TNTs. Consistently, the number of transferred mitochondria in U251 cells decreased in the presence of the inhibitors (Fig. 7 I–K). These results indicated that the PI3K/AKT/mTOR pathway participates in nanomaterials-induced TNTs formation. As previously demonstrated, the three nanomaterials induced ROS/mtROS production and stimulated TNTs formation and mitochondrial transfer. We further investigated whether ROS/mtROS-induced TNTs formation depends on the PI3K/AKT/mTOR pathway, by examining alterations in the pathway after applying the ROS scavenger NAC and the mtROS scavenger MitoQ in U251 cells. Interestingly, NAC increased the expression of P110α and P85β while decreasing PI3K phosphorylation, whereas under MitoQ pretreatment P110α, P85β and p-PI3K expression decreased upon nanomaterial exposure. Most importantly, total AKT protein was reduced under MitoQ pretreatment but not under NAC. Phosphorylated AKT levels decreased after both NAC and MitoQ treatment. Furthermore, NAC and MitoQ inhibited mTOR phosphorylation (Fig. 8 A–H). In summary, ROS/mtROS promotes nanomaterial-stimulated TNTs formation via activation of the PI3K/AKT/mTOR pathway.
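The western blot readouts behind these conclusions reduce to standard densitometry arithmetic: ImageJ band intensities are normalized to β-actin, and pathway activation is expressed as a phospho/total ratio relative to control. The sketch below illustrates this with made-up grayscale values; the numbers are placeholders, not the study's data.

```python
# Hypothetical ImageJ grayscale integrals per lane (placeholder numbers)
bands = {
    #           p-mTOR   mTOR    b-actin
    "control": (1200.0, 5000.0, 8000.0),
    "CoNPs":   (3100.0, 5100.0, 7900.0),
}

def phospho_over_total(p, total, actin):
    """Actin-normalized phospho/total ratio; the actin term cancels
    algebraically but is kept to mirror per-lane loading normalization."""
    return (p / actin) / (total / actin)

ctrl = phospho_over_total(*bands["control"])
for name, lane in bands.items():
    fold = phospho_over_total(*lane) / ctrl
    print(f"{name}: p-mTOR/mTOR fold-change vs control = {fold:.2f}")
```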
Discussion An increasing number of engineered nanomaterials have been manufactured and released into the environment, making their toxicity a public health concern. CoNPs, TiO 2 NPs and MWCNTs are widely recognized engineered nanomaterials that can enter the body and reach the central nervous system, raising concerns about their neurotoxicity. The original discovery of TNTs was closely related to homeostasis and pathogenesis [ 36 , 37 ], especially in neurotoxicity induced by oxidative stress [ 38 ]. Here, we show, for the first time, that three types of engineered nanomaterials can promote TNTs formation and mitochondrial transfer via the induction of oxidative stress, a common protective strategy in response to nanomaterial exposure that restores ATP production and cell viability. Most importantly, the mechanism of TNTs formation was elucidated in detail. Our group recently reported the transfer of mitochondria via TNTs against CoNPs-induced neurotoxicity [ 15 ]. However, further investigation was required to determine whether TNTs formation and mitochondrial transfer are universal responses to other nanomaterials. Here, we present evidence that exposure to three engineered nanomaterials can induce TNTs formation in primary astrocytes and U251 cells, and that the number of TNTs formed significantly increased upon nanomaterials exposure. Notably, the mode of TNTs formation is consistent with 'cell dislodgment', in which cells come into close contact, undergo membrane fusion, and then migrate away from each other, drawing out membrane tethers that lead to the formation of TNTs [ 30 ]. A growing body of evidence has shown that TNTs can protect cells from environmental stress through their capacity to transfer materials between cells, such as mitochondria [ 17 ] and lysosomes [ 18 ]. For example, healthy N2a cells can donate their mitochondria to H 2 O 2 -exposed N2a cells or ρ 0 N2a cells (mitochondrial DNA-depleted cells) to improve markers of apoptosis, oxidative stress, autophagy, and mitochondrial or DNA damage. Consistent with this finding, we observed mitochondrial transfer in primary astrocytes and U251 cells after nanomaterials exposure. More importantly, the number of mitochondria transferred via TNTs increased after nanomaterials exposure, and TNTs-mediated mitochondrial transfer significantly protected neural cells from the ATP reduction and apoptosis induced by nanomaterials. Interestingly, vesicles pre-labeled with DiD were exchanged between primary astrocytes. However, the substances contained in the vesicles are unclear and warrant further investigation. It has been reported that vesicles can carry many substances, such as proteins, mRNA, and mitochondria. One study indicated that protein-containing vesicles can be transferred via TNTs to support biological processes [ 22 ]. Compared with EVs, vesicle transfer via TNTs is faster and more accurate [ 39 ]. Although the three types of engineered nanomaterials have different biophysical properties (size, Z-average, polydispersity and zeta potential), they share the same trend of TNTs formation and mitochondrial transfer in response to exposure. These results indicate that a common mechanism regulates TNTs formation and mitochondrial transfer, regardless of nanomaterial properties. ROS is a major regulator of TNTs formation [ 22 ]. After cellular uptake, the three nanomaterials promoted the generation of excess ROS and mtROS.
Interestingly, we found that ROS/mtROS levels were related to the amount of nanomaterials that entered the cells. This is partially because cobalt oxide particles are readily internalized via the endo-lysosomal pathway and release cobalt ions over long periods, which confers specific toxicity [ 40 ]. To assess the relationship between TNTs and ROS/mtROS, NAC and MitoQ were used to scavenge ROS/mtROS after nanomaterials exposure. NAC and MitoQ reduced TNTs numbers and mitochondrial transfer, indicating that the ROS and mtROS produced in response to the nanomaterials were the main drivers of TNTs formation and mitochondrial transfer. Overall, the difference in TNTs numbers upon nanomaterials exposure is mainly due to the different levels of ROS, whereas mtROS is a secondary factor in TNTs formation. The PI3K/AKT/mTOR pathway plays a key role in numerous cellular functions including proliferation, adhesion, migration, invasion, metabolism, and survival [ 41 ]. Importantly, we identified a new regulatory target of the PI3K/AKT/mTOR pathway in intercellular communication. In this study, the three nanomaterials were found to activate the PI3K/AKT/mTOR pathway regardless of their properties, and the activated pathway promoted TNTs formation and mitochondrial transfer. LY294002, a broad-spectrum inhibitor of PI3Kα, PI3Kδ and PI3Kβ [ 42 ], inhibited TNTs formation. Perifosine, a targeted AKT inhibitor [ 43 ], also reduced TNTs numbers after nanomaterial treatment. However, TNTs numbers in the nanomaterial-exposed groups remained higher than in the control group, indicating that PI3K and AKT are not the only mediators of TNTs formation. Interestingly, Rapamycin, an mTOR inhibitor, potently reduced TNTs formation, decreasing TNTs numbers to basal levels. These results indicate that mTOR plays a central role in TNTs formation. mTOR activates S6K1, which participates in mRNA translation and in turn promotes the activity of Eukaryotic Translation Elongation Factor 2 (EEF2) through phosphorylation events. In addition, mTORC1 can deactivate the 4EBP1 protein, which relieves the inhibition of EIF4E, a translation initiation factor that recruits ribosomes to the 5'-cap structure [ 44 ]. EEF2 and EIF4E can act on transcripts of TNT-related genes, such as CDC42, to promote their expression. However, further investigation is required to identify the specific genes that play major roles in promoting TNTs following nanomaterial exposure. The PI3K/AKT pathway is commonly downstream of ROS/mtROS and regulates ROS homeostasis for cell growth and proliferation. ROS can directly activate PI3K and inactivate phosphatase and tensin homolog (PTEN), a negative regulator of PIP3 synthesis that suppresses AKT [ 45 ]. We found that NAC and MitoQ reduced P110α and phosphorylated PI3K, and further reduced AKT levels. In addition, P110α can keep ROS/mtROS within a desirable range by promoting cellular antioxidant mechanisms via the NF-E2 p45-related factor 2-antioxidant response element-dependent pathway [ 46 ]. The Adverse Outcome Pathway (AOP) concept provides a mechanism-based framework for interpreting what is known from existing toxicological studies of chemical substances, covering the sequential progression of events from a molecular initiating event (MIE) to adverse effects [ 47 ]. The main blocks of an AOP consist of the MIE, key events (KEs) as mediators, and ultimately the adverse outcome (AO).
Here, we summarized an AOP according to our findings to promote a better understanding of the role of nanomaterial-induced TNTs in neurotoxicity (Fig. 9 ). The generation of ROS and mtROS induced by engineered nanomaterials is the molecular initiating event, which subsequently decreases the mitochondrial membrane potential (KE1), leading to the adverse outcomes of mitochondrial dysfunction and cell apoptosis. In addition, the MIE can activate the PI3K/AKT/mTOR pathway (KE2), which then promotes TNTs formation and mitochondrial transfer (KE3) to counteract the adverse outcome.
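Because an AOP is essentially a small directed graph from the MIE through KEs to the AO, the proposed pathway can be restated compactly in code. The dictionary below is purely a schematic aid that mirrors the sequence of events in Fig. 9; it is not an executable model of the underlying biology.

```python
# Schematic encoding of the proposed AOP (after Fig. 9)
aop_edges = {
    "MIE: ROS/mtROS generation": ["KE1: decreased MMP",
                                  "KE2: PI3K/AKT/mTOR activation"],
    "KE1: decreased MMP": ["AO: mitochondrial dysfunction and apoptosis"],
    "KE2: PI3K/AKT/mTOR activation": ["KE3: TNTs formation and mitochondrial transfer"],
    "KE3: TNTs formation and mitochondrial transfer": ["(counteracts the AO)"],
}

def walk(node, depth=0):
    """Print the pathway as an indented tree, one branch per downstream event."""
    print("  " * depth + node)
    for nxt in aop_edges.get(node, []):
        walk(nxt, depth + 1)

walk("MIE: ROS/mtROS generation")
```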
Conclusion This is the first study to reveal that different types of engineered nanomaterials induce the formation of TNTs in human glial cells to protect against neurotoxicity via ROS/mtROS-centered activation of the downstream PI3K/AKT/mTOR pathway. Despite their different biophysical properties, the three types of nanomaterials, namely CoNPs, TiO 2 NPs and MWCNTs, all activate TNTs-dependent mitochondrial transfer in primary astrocytes and U251 cells, which can rescue the mitochondrial damage and cell apoptosis caused by oxidative stress. Most importantly, an adverse outcome pathway was summarized to shed light on the intercellular protection mechanism against nanomaterials-induced neurotoxicity.
Background As the demand for and application of engineered nanomaterials have increased, their potential toxicity to the central nervous system has drawn increasing attention. Tunneling nanotubes (TNTs) are a novel form of cell–cell communication that plays a crucial role in pathology and physiology. However, the relationship between TNTs and nanomaterial neurotoxicity remains unclear. Here, three types of commonly used engineered nanomaterials, namely cobalt nanoparticles (CoNPs), titanium dioxide nanoparticles (TiO 2 NPs), and multi-walled carbon nanotubes (MWCNTs), were selected to address this limitation. Results After complete characterization of the nanomaterials, the induction of TNTs formation by all of the nanomaterials was observed using a high-content screening system and confocal microscopy in both primary astrocytes and U251 cells. It was further revealed that TNTs formation protected against nanomaterial-induced neurotoxicity, namely cell apoptosis and disrupted ATP production. We then determined the mechanism underlying the protective role of TNTs. Since oxidative stress is a common mechanism in nanotoxicity, we first observed a significant increase in total and mitochondrial reactive oxygen species (ROS and mtROS, respectively), causing mitochondrial damage. Moreover, pretreatment of U251 cells with either the ROS scavenger N-acetylcysteine or the mtROS scavenger mitoquinone attenuated nanomaterial-induced neurotoxicity and TNTs generation, suggesting a central role of ROS in nanomaterials-induced TNTs formation. Furthermore, a vigorous downstream pathway of ROS, the PI3K/AKT/mTOR pathway, was found to be actively involved in nanomaterials-promoted TNTs development, and this involvement was abolished by LY294002, Perifosine and Rapamycin, inhibitors of PI3K, AKT, and mTOR, respectively. Finally, western blot analysis demonstrated that ROS and mtROS scavengers suppressed the PI3K/AKT/mTOR pathway, which abrogated TNTs formation. Conclusion Despite their different biophysical properties, various types of nanomaterials promote TNTs formation and mitochondrial transfer, preventing the cell apoptosis and disrupted ATP production induced by nanomaterials. ROS/mtROS generation and activation of the downstream PI3K/AKT/mTOR pathway are common mechanisms regulating TNTs formation and mitochondrial transfer. Our study reveals that engineered nanomaterials share the same molecular mechanism of TNTs formation and intercellular mitochondrial transfer, and the proposed adverse outcome pathway contributes to a better understanding of the intercellular protection mechanism against nanomaterials-induced neurotoxicity. Supplementary Information The online version contains supplementary material available at 10.1186/s12989-024-00562-0.
Abbreviations AKT: Protein kinase B; ANOVA: One-way analysis of variance; AO: Adverse outcome; AOP: Adverse outcome pathway; CAGR: Compound annual growth rate; CNS: Central nervous system; CoNPs: Cobalt nanoparticles; EEF2: Eukaryotic translation elongation factor 2; EVs: Extracellular vesicles; FBS: Fetal bovine serum; FSC: Forward-scattered light; HCS: High content screening system; KEs: Key events; LAL: Limulus amebocyte lysate; LAT-B: Latrunculin B; MIE: Molecular initiation events; MitoQ: Mitoquinone; MMP: Mitochondrial membrane potential; mTOR: Mammalian target of rapamycin; mtROS: Mitochondrial reactive oxygen species; MWCNTs: Multi-walled carbon nanotubes; NAC: N-acetyl-L-cysteine; PA: Primary astrocyte; PBS: Phosphate-buffered saline; PI3K: Phosphatidylinositol 3-kinase; ROS: Reactive oxygen species; RT: Room temperature; SDS: Sodium dodecyl sulfate; SEM: Scanning electron microscope; SSC: Side-scattered light; TBST: Tris-buffered saline containing Tween-20; TEM: Transmission electron microscopy; TiO 2 NPs: Titanium dioxide nanomaterials; TNTs: Tunneling nanotubes Acknowledgements We appreciate Ling Lin, Junjin Lin, Shuping Zheng, Zhihong Huang, Shuyuan Wang and Zhifei Fu from Public Technology Service Center, Fujian Medical University (Fuzhou, China) for their technical support. Author contributions FZ, XL and HL designed research; XL, WW, and XC performed research; XL analyzed data and organized figures and tables; XL drafted the manuscript. FZ, HL, QZ, ZG, WS, GY and CC revised the manuscript; SW inspected the statistics. FZ is responsible for the funding acquisition. All the authors have read and approved the final manuscript. Funding This study was supported by the Joint Funds for the Innovation of Science and Technology, Fujian province (2019Y9020), the National Natural Science Foundation of China under Grant (81903352, 82311530107), the Provincial Natural Science Foundation of Fujian Province (2023J01627, 2019J05081), and the Open Fund of Fujian Provincial Key Laboratory of Molecular Neurology (2022-SJKF-001). Availability of data and materials All data analyzed within this study are included either in the manuscript or in the additional files. Declarations Ethics approval and consent to participate All animal procedures were approved by the Institutional Animal Care and Use Committee of the Fujian Medical University. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-16 23:45:35
Part Fibre Toxicol. 2024 Jan 15; 21:1
oa_package/74/a9/PMC10789074.tar.gz
PMC10789075
0
Background Cardiac implantable electronic devices (CIEDs) is a term that encompasses a number of devices that provide treatment for bradyarrhythmias, ventricular tachyarrhythmias, and advanced systolic heart failure [ 1 , 2 ]. CIEDs have proven to be an invaluable tool in the practice of cardiology, and implantation rates continue to rise, with more than 600,000 CIEDs implanted each year [ 3 ]. CIEDs include implantable cardioverter defibrillators (ICDs) and cardiac resynchronization therapy (CRT) devices [ 4 ]. Most CIED implantations are performed under local anesthesia, with the device placed under the skin in the left shoulder region and leads connecting to the vasculature of the heart [ 5 , 6 ]. Although some centers have used local anesthesia with sedation for CIED implantation, there is still debate regarding the safety of sedation because of possible undesirable side effects, such as hypoxaemia, hypotension, nausea and vomiting. Furthermore, increasingly elderly patients (mean age ≥ 70 years) with many medical comorbidities and people with advanced heart disease receive this procedure, which might entail a high sedation-related risk [ 7 , 8 ]. Hence, in most CIED procedures, patients remain awake without sedation, which may result in fear, insecurity and suffering owing to their vulnerability and sense of losing control during the operation [ 9 – 11 ]. Previous research has shown that the conscious state of patients receiving CIEDs under local anesthesia may lead to many adverse effects. A study by Selwyn et al. [ 12 ] indicated that CIED patients under local anesthesia experience severe pain that may be of long duration. Anne et al. [ 13 ] showed that a considerable number of patients receiving an ICD had symptoms of depression and anxiety before ICD implantation, and that these symptom levels would increase during the operation. Another study also stressed that an important minority of CIED patients reported severe pain during the procedure, suggesting that peri-operative pain management in CIED procedures warrants attention [ 14 ]. Moreover, in Chinese culture, the heart is regarded as the home of the emotions, cognition and even the soul. Therefore, receiving a diagnosis of heart disease signals a life-threatening illness, and when they have to undergo cardiac surgery, Chinese patients may become particularly scared and anxious [ 15 , 16 ]. All of the above suggest that undergoing a CIED operation represents an enormous challenge and pressure for patients. Some studies have focused on ways to improve the care of patients under local anesthesia during operations [ 17 ]. Studies have shown that non-pharmacological treatments, such as preoperative education and massage therapy, are effective for patients under local anesthesia in improving their psychological well-being during surgery, and they are relatively risk-free as well [ 7 , 18 ]. Haugen et al. [ 19 ] indicated that intraoperative communication between health care professionals (HCPs) and patients decreased patients' anxiety levels and met their needs during orthopedic surgery [ 20 ]. Bergman et al. [ 21 ] noted that during spinal anesthesia, the presence of various technical equipment and devices raised patients' concerns, and seeing HCPs being with them made patients feel safe and calm.
Another study, by Moon et al. [ 22 ], demonstrated that it is important for patients under local anesthesia to feel nearness during surgery through contact, such as holding their hand, which reduces patients' anxiety. Moreover, Merakou et al. [ 23 ] demonstrated that meditation music could reduce patients' stress and keep them calm during cataract surgery. In recent years, with the development of technology, virtual reality has also been used to reduce pain and negative emotions in patients under local anesthesia during operations [ 24 ]. However, the intraoperative care of patients under local anesthesia is unstandardised and largely determined by HCPs' preferences, with patients' preferences and needs being ignored [ 25 ]. Besides, most research on the intraoperative care of CIED patients has centred on technology and its application rather than on patient experience, and the study of intraoperative experiences of CIED patients is in its infancy [ 14 , 26 ]. Nowadays, it is advocated that interventions based on stakeholders' perceptions can improve health outcomes. Therefore, it is imperative to explore patients' and HCPs' perceptions of their experiences of CIED implantation in the local anesthesia context, which can contribute to developing a patient-centered care intervention program. Hence, based on patients' and HCPs' perspectives, this study aims to explore and analyse their intraoperative care experiences, including their feelings, attitudes, perceptions and approaches to improving the surgical experience.
Method Study design This study used a descriptive phenomenological qualitative design based on semi-structured, in-depth interviews. Qualitative methodological approaches are appropriate when the research seeks to describe the essence of a phenomenon by exploring it from the perspective of those who have experienced it [ 27 , 28 ]. The goal of phenomenology in nursing science is to describe the meaning of this experience, both in terms of what was experienced and how it was experienced [ 28 , 29 ]. In this study, the research question was: "What are the care experiences of patients and HCPs during CIED surgery?" The consolidated criteria for reporting qualitative research (COREQ) checklist was used [ 30 ]. Setting, participants and sampling method The study was conducted in Yunnan province because of its historical, geographical and cultural characteristics, which may offer unique perspectives that contribute to our understanding of the study issue. Firstly, Yunnan province has historically been a strategic location on the ancient southern Silk Road and shares borders with Myanmar, Laos, and Vietnam [ 31 ]. Since 2013, the Belt and Road Initiative has been implemented in China, with the ASEAN Free Trade Area mainly being implemented through Yunnan, emphasizing not only goods but also distinctive Chinese culture [ 32 ]. Secondly, compared to coastal areas such as Shanghai, Zhejiang, and Guangdong, non-coastal Yunnan is more traditional and may better present Chinese cultural characteristics [ 33 ]. Thirdly, many ethnic minority groups live in Yunnan province, and their presence makes Yunnan a good base for cultural studies [ 32 ]. All of the above suggest that choosing Yunnan province as the study setting may be a major attraction for academic research and provide cultural diversity relevant to our study topic. Based on purposeful sampling, this study recruited CIED patients and HCPs, including physicians and nurses, from a tertiary general hospital in Yunnan Province. Patients who met the following inclusion criteria were sought: (a) age ≥ 18 years; (b) undergoing their first successful CIED surgery; (c) no mental illness and well recovered after the operation. The inclusion criterion for HCPs was participation in CIED surgery. Having obtained the participants' informed consent, the investigator established rapport, and a mutually convenient interview time was scheduled with each participant. Data collection Semi-structured, face-to-face interviews were conducted with participants between May 2022 and July 2023. A semi-structured interview guide, constructed by the authors, was revised based on respondents' feedback as the interviews progressed. The interview guide is provided in the supplementary file. Participants chose the interview time, and interviews took place in a meeting room of the department, which offered a quiet environment. A pilot study was conducted with two patients and two HCPs before the formal interviews. Interviews were conducted by the first author, who has undergone systematic qualitative research training. Participants were encouraged to talk widely and freely about their experiences and perspectives during CIED surgery. The interviews, lasting 15 to 40 min, were audio-recorded and transcribed verbatim. The interviews continued until the data were saturated and no new concepts emerged. Each recording was transcribed within 24 h of the interview. In addition, a copy of the interview transcript was sent to each participant for verification.
Data analysis Qualitative thematic analysis with an inductive approach was used to identify dominant themes relating to the participants' perspectives and experiences during CIED operations. Following Braun and Clarke, the thematic analysis was carried out in six stages [ 34 ]: (1) each transcript was read by two researchers, who listened to the audio recording carefully multiple times in order to get a sense of the whole; (2) the researchers identified initial codes inductively using NVivo 11.0 software; (3) from the initial codes, themes that represented the phenomenon under study were constructed; (4) two other researchers reviewed and validated the constructed themes for thematic validity and reliability; (5) themes were named and defined; (6) finally, the final synthesis of the results was constructed and confirmed through review by all authors. Ethical considerations The study was conducted in accordance with the Declaration of Helsinki and was approved by the ethics committee of the hospital. Written informed consent was obtained from each participant. The content of the interviews was kept confidential and anonymous and used solely for this research.
Results Participants' characteristics A total of 18 CIED patients were interviewed, including 11 males and seven females, aged between 19 and 88 years. Twenty HCPs took part in this study, including 13 physicians and seven nurses. Participants were numbered sequentially, and quotes are attributed as patient (P) or healthcare professional (HCP). The participants' characteristics are summarized in Tables 1 and 2 . Themes The in-depth interviews revealed four themes: safety and success is the priority; humanistic caring is a must yet lacking; the paradox of surgery information giving; and ways to improve the surgical experience during the operation. Theme 1: safety and success is the priority For most patient participants, their desire was that the surgery be completed successfully, and they tried to play a "good patient" role. They stated that the guidance of HCPs should be fully followed and that they must obey the doctors' orders until the operation was completed. If they felt discomfort during the surgery, they chose to endure the suffering so as not to trouble the HCPs. Some patient participants pointed out that they wanted to talk to HCPs about the surgery. However, they were also worried that their words and behaviors would interfere with the HCPs' work and prolong the surgery, which might hamper the normal course of the operation. All HCP participants were concerned about the safety and success of the surgery, such as a shorter surgery duration, the efficiency of the surgery, and postoperative complications. To reach this goal, patients were expected to cooperate with HCPs and follow their orders. Theme 2: humanistic caring is a must yet lacking Patient participants acknowledged that they endured pain, anxiety, tension, etc. during the surgery. They said that they were alone in the operating room, and that HCPs were concerned with the pacemaker, the vessels, and the parameter thresholds rather than with the patients. What they wanted was company and presence, which they saw as a way of showing care for patients. Some patient participants stated they hoped HCPs could communicate with them, no matter the topic, which could help them relax and provide a supportive and caring atmosphere. Some patient participants reported that they felt discomfort during the procedure, such as cold, pain, or breathlessness. They recalled that HCPs took some measures to alleviate their discomfort but quickly returned their focus to the operation, and the measures seemed not to work. They had to endure the suffering and hoped the surgery would end as soon as possible. Most HCP participants mentioned that, beyond the surgery itself, actions such as communicating with patients to ease negative emotions, paying timely attention to their comfort, and building a trusting and friendly relationship with patients were highlighted. They reflected that merely completing the surgery while ignoring patients' feelings would do harm to patients. Some HCP participants stated that although humanistic caring is a necessity in the operation given its key role in today's health care system, caring is still lacking in surgery for various reasons, such as personnel shortages and lack of competency. Theme 3: the paradox of surgery information giving Some patient participants pointed out that they were curious about the procedure during the operation and hoped that HCPs would tell them about the surgery. Sometimes, they also took the initiative to ask for information.
However, some patient participants stated that information about the surgery, such as the operative steps and the size of the pacemaker, would make them fearful and anxious. Some HCP participants considered that information such as telling patients the surgery was about to end, or a brief introduction to the operation, would benefit patients. Other HCP participants said that, to prevent patients from focusing on the surgical procedures, which might make them nervous, it would be better to talk to patients about other topics to create a relaxing atmosphere. Theme 4: ways to improve the surgical experience during the operation Some patient participants said they held the belief that HCPs could handle everything and that there was no need to worry during the operation; waiting and resting was enough. Even if an emergency occurred, they regarded it as their own bad fortune, not the fault of the HCPs. Other patient participants usually thought of pleasant things, such as the improved quality of life after a successful operation. Some HCP participants stated that there were methods to improve patients' experience, such as giving surgical information to patients in advance to help them prepare for the operation, and inspiring patients to persist and complete the operation during the surgery.
Discussion This qualitative study captured the perceptions of patients and HCPs regarding the care experience of CIED surgery. The results showed that safety and success is the priority and should take precedence over everything else, which is in accordance with the perceptual adjustment level theory of Matiti et al. [ 35 ]. According to this theory, patients realize that being hospitalized with some suffering and loss of dignity is a worthwhile price to pay for the sake of safety, which is regarded as a 'necessary submission' [ 35 ]; namely, patients need to submit to the hospital system, being told what to do and being dependent on health staff, and thereby lose some of their identity [ 36 ]. Furthermore, in Chinese culture, pain as a 'trial' or 'sacrifice' is profoundly meaningful. Therefore, when a person suffers pain, he or she would rather endure it until the pain becomes unbearable [ 37 ]. As a result, patients who receive CIED surgery tend to rationalize their situation, such as pain during the operation, and would rather accept the suffering than report it to HCPs, because they know it is temporary and the success of the surgery is the priority [ 38 ]. Safety and success are also considered the key issues by HCPs. A survey of 17 clinicians involved in CIED implantation showed that the safety and success of the procedure take precedence over patient comfort [ 14 ]. As healthcare institutions aim to offer high-quality care, and patient safety has become a major concern for healthcare facilities, many HCPs are aware of the effect of patient safety on patient outcomes [ 39 , 40 ]. A large retrospective review reported that 66% of all adverse events (AEs) were related to surgery [ 41 ]. Consequently, every year at least seven million patients suffer from surgical complications, including at least one million who die during or immediately after surgery [ 42 ]. Hence, surgical teams are strongly focused on patient safety and have taken measures to reduce the rate of AEs, thus improving patient outcomes [ 43 ]. Under profoundly stressful circumstances, patients often need more attention and support from HCPs, which suggests the importance of providing humanistic care [ 20 , 44 ]. The necessity of humanistic caring was also stressed by both patients and HCPs in this study, which is in accordance with Chinese culture. Humanity, also known as benevolence, is an attitude considered to be the greatest of all virtues and lies at the roots of Chinese culture; one of the highest compliments that can be paid to a Chinese person is to say that he or she has the aura of a benevolent person [ 45 ]. Humanistic care refers to listening to the needs and desires of patients, understanding patients' emotions and respecting their life values, which can help patients reach a higher level of physical, psychological, social and spiritual well-being [ 46 , 47 ]. In the delivery of health care, especially with the development of the patient-centered approach, there is a consensus on the importance of humanistic care in clinical practice [ 48 ]. Whittle et al. [ 49 ] suggested that pain and discomfort during awake brain tumour surgery can usefully be decreased by providing a comfortable operating table, a dedicated person for patient communication, and keeping patients warm. Willem et al. [ 50 ] reported that patients undergoing awake craniotomy believed that humanistic care, such as positive interactions and support from HCPs, is important to reduce their fear and uncertainty.
Our results showed that HCPs realized that humanistic care was essential in operations, but that it was poorly implemented. The reason might be that nursing practice is driven by a complex system of humanistic dimensions (educational, social/cultural and spiritual) but is constrained and influenced by bureaucratic dimensions (technological, economic, legal and political), as emphasized in the theory of Bureaucratic Caring [ 51 , 52 ]. In the operating room, HCPs are under pressure to perform their work with maximal efficiency in a minimal amount of time; therefore, they often pay more attention to the technological dimension of care [ 53 ]. The humanistic dimension of caring might be neglected because economic factors, such as heavy workloads and staffing shortages, negatively influence direct care time [ 42 , 54 , 55 ]. Therefore, there is a need to focus on the interplay between the bureaucratic and humanistic dimensions to provide high-quality care in the operation setting. A number of studies have reported that providing surgical information during the operation reduces patients' anxiety and satisfies their caring needs [ 56 , 57 ]. However, our findings indicate that surgical information was a burden for some patients undergoing CIED surgery, for example by increasing their intraoperative anxiety, which might be explained by the 'Blunting Hypothesis' proposed by Miller [ 58 ]. This hypothesis categorizes individuals into two information styles (monitors, with monitoring information-seeking styles, or blunters, with blunting information-seeking styles) in how they seek, encode, process and manage threatening situations, such as CIED surgery [ 16 , 58 , 59 ]. During CIED surgery, monitors typically seek threat-relevant information to reduce their uncertainty and promote feelings of reassurance [ 60 ]. In contrast, blunters prefer less information, and their anxiety may increase when too much information is delivered [ 61 ]. Hence, it is not surprising that conflicting results emerged about patients' responses to CIED surgery information. It has been demonstrated that patients have better outcomes psychologically, behaviorally and physiologically when the amount of information received is consistent with their information-seeking styles [ 62 ]. In addition, patient involvement in clinical decision making has been increasingly advocated, and giving information to patients as a foundation for their involvement is valued by HCPs [ 63 ]. Kim et al. [ 56 ] pointed out that providing surgical information gave patients the opportunity to manage their fears, increased patient participation, and resulted in increased well-being. However, other studies have found that surgical information given during the operation might increase patients' anxiety, and HCPs may choose not to tell patients information about the procedure [ 64 , 65 ], which is also demonstrated in our study. Besides, Anna et al. [ 66 ] reported that talking to patients about topics that have nothing to do with the surgery is a way to distract patients' attention and reduce their anxiety. Therefore, it is important for HCPs to develop an appropriate manner of information support in surgery based on the patient's information-seeking style. In our analyses, there are four main approaches to improving patients' intraoperative care experience. Patients may trust and rely on HCPs during the operation, believing that HCPs will take care of everything.
This finding is similar to those of Emel et al [ 10 ], who noted that patients release all control and responsibility to the HCPs when they are placed on the operating table for a surgical intervention. Eyi et al [ 67 ] also stated that it is vital to trust HCPs during surgery. According to attachment theory, when individuals feel vulnerable in the face of major threats, they seek attachment figures to help them feel safe [ 68 ]. In operating settings, HCPs are often in the position of an attachment figure because patients view their providers as "experts" with the skills to extend the quality and quantity of their life [ 69 ]. In China, individuals' ways of living and thinking about health are influenced by several main Chinese philosophies and religions [ 16 ]. Taoism emphasizes harmony with nature, and conformity with nature is a process of knowing nature and trying to modify oneself to best fit it; namely, as long as one tries one's best and goes with the flow, the outcome is accepted peacefully [ 70 , 71 ]. Therefore, patients often chose to make their best effort to cooperate with and believe in HCPs during the operation. Our study results showed that patients adopted an optimistic psychological state to enhance their intraoperative experience and looked forward to positive events after the surgery. This positive psychological asset is consistent with the idea proposed by Confucianism and Taoism that optimism reduces suffering and improves well-being [ 71 , 72 ]. Preoperative information support was cited by HCPs as an approach to improve patients' care experience, which has been demonstrated in other research [ 73 ]. A study showed that good preoperative counseling allays patients' anxiety and facilitates successful surgery under local anesthesia [ 74 ]. Similarly, Emel et al [ 10 ] also emphasized the importance of preoperative information in reducing patients' anxiety and fear levels under spinal anesthesia. As an important component of psychological preparation, preoperative information support is effective in reducing anxiety and pain for patients in a conscious state [ 75 , 76 ]. Stefan et al [ 77 ] suggested taking more time to provide information and support before the operation in order to improve patients' positive experience and surgical outcomes. Our study found that HCPs' use of prior patients' successful experiences to inspire patients acts as a form of therapeutic suggestion, which can reduce side effects and increase patients' comfort during CIED surgery; this is also reflected in other studies of various medical procedures, surgical procedures and chemotherapy [ 78 ]. Christine et al [ 79 ] found that therapeutic suggestions can diminish patients' pain, anxiety and procedure time during radiological procedures. In addition, as a simple communication technique, therapeutic suggestion can easily be incorporated into everyday work in a clinical environment and can be readily learned by HCPs [ 78 ]. Strengths and limitations The perceptions of patients undergoing CIED surgery regarding intraoperative care in the Chinese context are rarely explored in the literature. This study also presents HCPs' perceptions to better understand the experience of intraoperative care, which brings new and interesting information for further research to develop intraoperative care programs that can benefit patients.
One of the limitations of this study was that the participants were from only one region of China, which limits the generalisability of the results, and caution should be taken when interpreting them. In addition, the HCPs included in this study were only physicians and nurses; professions such as technicians were not represented and could be considered in future studies. Furthermore, this study was conducted in a Chinese environment, and different cultural environments may lead to diverse experiences and perspectives. Further studies could be conducted in other hospitals and among different ethnic groups to enrich the results. More research is needed to explore CIED patients' experiences during surgery from multiple stakeholders' perspectives, which can contribute to developing a comprehensive and systematic intraoperative care programme.
Conclusion Based on patients' and HCPs' perspectives, patients who underwent CIED surgery face psychological and physical stresses, which interfere with their comfort and well-being. This study demonstrates the complexity and challenges of providing intraoperative care for HCPs. To improve the care experience during surgery, HCPs should pay attention to patients' safety; in addition, information support should consider patients' information-seeking styles and personal needs. As an important part of intraoperative care, the factors affecting humanistic care in clinical practice should be valued by healthcare facilities, and measures should be taken to improve patients' experience, thus achieving patient-centered care [ 80 ]. In addition, the approaches presented in this study are useful for improving the intraoperative experience for patients and HCPs. Trusting HCPs, going with the flow, and maintaining a positive psychological state suggest the importance of building rapport between HCPs and patients; moreover, culture, as a vital factor influencing individuals' health beliefs and behaviors, should be highlighted. Furthermore, preoperative information support and therapeutic suggestion are effective approaches that can be easily implemented by HCPs.
Background Cardiac implantable electronic devices (CIEDs) have proven to be invaluable tools in the practice of cardiology. However, CIED surgery under local anesthesia may result in fear, insecurity and suffering for patients. Some studies have explored ways to improve the intraoperative experience of patients under local anesthesia, but research concerning the experiences of CIED patients during surgery is in its infancy. Methods Based on semi-structured, in-depth interviews, a qualitative study was conducted in a tertiary general hospital in China from May 2022 to July 2023. Through purposeful sampling, 17 patients who received CIED surgery and 20 medical staff were interviewed. Thematic analysis with an inductive approach was used to identify dominant themes. Results Four themes emerged from the data: (1) Safety and success is the priority; (2) Humanistic caring is a must yet is lacking; (3) The paradox of giving surgical information; (4) Ways to improve the surgery experience in the operation. Conclusions Intraoperative care is significant for CIED surgery. To improve the care experience during surgery, healthcare professionals should pay attention to patients' safety and the factors affecting humanistic caring in clinical practice. In addition, information support should consider information-seeking styles and personal needs. Besides, the four approaches presented in this study are effective in improving the intraoperative care experience. Supplementary Information The online version contains supplementary material available at 10.1186/s12913-024-10546-7. Keywords
Electronic supplementary material Below is the link to the electronic supplementary material.
Acknowledgements The authors thank the participants who voluntarily took part in this study. Author contributions Conceptualization, data collection and analysis: M Z, Hl Z, X Z, Xr J, X S; Methodology: Yg B, W W, Ym Z. Writing: M Z (first author) and Hl Z (co-first author), who made equal contributions to this manuscript. Supervision: F M (corresponding author). All authors read and approved the final manuscript. Funding No funding was received for conducting this study. Data availability The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions. Declarations Ethical approval The study was conducted in accordance with the Helsinki Declaration and was approved by the First Affiliated Hospital of Kunming Medical University Ethics Committee (2022-L-37). Consent for publication All participants provided written consent for the research team to use their de-identified data in this manuscript, including information provided through demographic surveys. Competing interests The authors declare no competing interests. Informed consent All participants provided informed consent prior to enrolment in the study, including consent for publication of anonymised quotes.
CC BY
no
2024-01-16 23:45:35
BMC Health Serv Res. 2024 Jan 15; 24:73
oa_package/9b/5d/PMC10789075.tar.gz
PMC10789076
0
Introduction Health technology assessment (HTA) encompasses a comprehensive and multidisciplinary process aimed at evaluating the various dimensions of a health technology’s value, which may include aspects such as clinical, economic, social and ethical considerations [ 1 ]. The primary objective of HTA is to furnish decision-makers with evidence-based information to facilitate informed healthcare policies, clinical practices and funding choices. Given its pivotal role in evidence generation, HTA plays a crucial role in assessing the benefits, risks and cost-effectiveness associated with novel health technologies [ 2 ]. This process is predicated upon an exhaustive appraisal of available evidence, encompassing clinical trials, systematic reviews, observational studies and economic evaluations [ 3 ]. Furthermore, HTA takes into account the societal and ethical implications of the technology, including its impact on patient outcomes, quality of life and resource allocation [ 4 ]. Particularly in the context of healthcare, where resources are often finite and the implementation of new technologies can prove financially burdensome, HTA assumes heightened significance [ 5 ]. By undertaking a rigorous and systematic evaluation of available evidence, HTA serves to ensure the efficient and effective allocation of healthcare resources. Additionally, HTA promotes transparency and accountability in decision-making processes, fortified by the provision of a clear and evidence-based rationale for funding decisions [ 6 ]. In addition, universal health coverage (UHC) strives towards ensuring equitable access to comprehensive healthcare services, encompassing prevention, promotion, treatment, rehabilitation and palliation without imposing financial burdens on individuals and communities [ 7 ]. In this regard, HTA serves as a vital tool in attaining UHC by evaluating the clinical and economic value of health technologies, thereby aiding informed decision-making by policy-makers concerning the allocation of limited healthcare resources [ 8 ]. Utilizing HTA can also contribute to achieving UHC by guaranteeing patients’ access to the most efficacious and cost-effective health technologies, irrespective of their financial capabilities [ 9 ]. Moreover, by prioritizing resource allocation and promoting innovation, HTA can facilitate the sustainability of healthcare systems and the provision of high-quality care to all individuals [ 10 ]. Health system in Iran Iran boasts an extensive public healthcare system that facilitates essential health services to its population [ 11 ]. The government operates public hospitals and clinics, often subsidizing services to enhance affordability for the general populace [ 12 ]. The role of the private sector is also noteworthy in Iran’s healthcare system, with private hospitals, clinics and medical practitioners offering a diverse range of healthcare services [ 13 ]. Private healthcare providers mainly cater to individuals seeking specialized or personalized care and may offer services at varying price points. Iran possesses a domestically developed medical equipment manufacturing industry, generating a plethora of devices and instruments [ 14 ]. The procurement process for medical equipment may involve acquiring them from local manufacturers or importing them from international suppliers. Importation and distribution of medical equipment typically necessitate adherence to government regulations and procurement approvals [ 15 ]. 
Iran takes pride in its well-established pharmaceutical industry, responsible for producing an extensive range of medications. The process of acquiring drugs within Iran encompasses a blend of domestic production and importation [ 16 ]. While domestically manufactured medications are commonplace, more specialized or patented drugs might be imported. The Ministry of Health and Medical Education (MOHME) shoulders the responsibility of regulating and supervising the pharmaceutical sector, overseeing drug approval and importation [ 17 ]. Current status of HTA in Iran HTA-related activities were initiated in 2007 with the establishment of a secretariat within the Ministry of Higher Education (MOHE). This initial phase involved the introduction of HTA through workshops and its integration into the agenda. In early 2010, the HTA administration underwent restructuring under the supervision of the Deputy of Treatment and the Office of Technology Assessment, Standardization, and Health Tariffs. This restructuring aimed to improve operations and establish a new framework. As the institutions engaged in HTA expanded their activities and individuals received training abroad before returning to Iran, policy-makers in the healthcare sector recognized the importance of cultivating a skilled HTA workforce. Consequently, a Master's program focused on HTA was established. Currently, four medical science universities have taken on the responsibility of educating students in this field. Considering the significant impact of policies related to drugs and medical equipment in Iran, dedicated HTA units were created within both the Food and Drug Organization and the University of Medical Sciences. In recent years, the healthcare system of Iran has endeavoured to establish mechanisms for incorporating HTA into evidence-based decision-making. Various initiatives have been undertaken towards this end [ 18 , 19 ]. The objective of this study is to assess the current state of Iran's healthcare system in terms of the requisite demand, need and supply of HTA services. The findings of this study are expected to offer valuable insights into HTA policies in Iran, thereby aiding policy-makers and decision-makers in the health sector with the successful implementation of UHC.
Methods Ethics declarations The study was approved by the ethical committee at Lorestan University of Medical Sciences (IR.LUMS.REC.1399.112). All methods were performed in accordance with the relevant guidelines and regulations. The study has also been performed in accordance with the Declaration of Helsinki. Instrument of data collection For this study, the HTA introduction status analysis questionnaire developed by the International Decision Support Initiative (iDSI) at the national level was utilized. This questionnaire has been previously employed in multiple countries, including India, Uganda, Nigeria and various sub-Saharan African nations [ 22 – 25 ]. The questionnaire consists of three sections (HTA need, demand and supply) encompassing a total of 12 questions: the need section encompasses five questions, the demand section consists of two questions and the supply section includes five questions (see Additional file 1 of this paper for the questionnaire). Selection of participants and inclusion criteria The identification of key informants involved an extensive process, which entailed conducting a meticulous literature review and consulting with the MOHME, as well as experts who had experience in policy-making, health service provision and HTA. A comprehensive search was conducted across various databases, encompassing both national and international sources. The national databases included the Scientific Information Database (SID) ( https://www.sid.ir/ ), MagIran ( https://www.magiran.com/ ), and Elmnet ( https://elmnet.ir/ ), while the international databases comprised PubMed, Scopus, Embase and Web of Science from January 2007 to August 2022. Moreover, official documents, reports and pertinent news related to HTA in Iran were thoroughly examined. Drawing from this accumulated information, a roster of stakeholders was compiled, and subsequently, key experts were selected. The criteria for selecting these key informants were based on their specialized expertise, experience and active involvement in decision-making, resource allocation and prioritization processes within the healthcare sector, both at the regional and national levels. This encompassed individuals who were affiliated with government institutions, research centres, private sectors, companies and organizations, as well as individuals with work experience of more than 10 years. Data collection For the purpose of data collection, an online questionnaire was utilized between September 2022 and January 2023. The questionnaire underwent a meticulous process of adaptation to the Persian language, involving the use of a forward-backward translation technique. Initially, two translators translated the questionnaire from English to Persian, which was then followed by a reverse translation to English by another translator. Any discrepancies between the original and back-translated versions were resolved through consensus among the translators and authors. To ensure clarity and a consistent understanding among respondents, a pretest was conducted with a sample of seven individuals who were not key informants within the MOHME. This pretest aimed to evaluate the clarity and comprehensibility of the questions from the perspective of the respondents. All selected participants received a detailed email explaining the objectives of the study and requesting their willingness to participate. The online questionnaire was developed using the Persian platform accessible at https://porsline.ir/ .
It consisted of a combination of open-ended and multiple-choice questions, allowing participants to express their opinions through online completion. Furthermore, participants were invited to suggest individuals who could potentially contribute to the study's objectives, as a collaborative approach was considered integral. Efforts were made to ensure diversity in participant representation, which involved actively engaging individuals from various departments associated with HTA and Iran's healthcare system. Data analysis Descriptive analysis was employed to analyse the quantitative data, while inductive thematic analysis was applied to the qualitative data. Data analyses were conducted using R version 4.2.3.
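For illustration, the snippet below sketches the kind of descriptive summary this section describes, written in R (the software the authors report using). The data frame, file name and column names are hypothetical stand-ins, not the authors' actual variables.

```r
# Minimal sketch of the descriptive analysis described above.
# 'hta_survey.csv' and all column names are hypothetical illustrations.
responses <- read.csv("hta_survey.csv")  # one row per completed questionnaire

# Response rate: completed questionnaires out of those distributed
distributed <- 103
cat(sprintf("Response rate: %.0f%%\n", 100 * nrow(responses) / distributed))

# Mean and SD for each 0-10 rating of the HTA 'need' aspects (cf. Table 2)
need_items <- c("allocative_efficiency", "healthcare_quality",
                "transparent_decisions", "budget_control", "equity")
need_summary <- t(sapply(responses[need_items], function(x)
  c(mean = mean(x, na.rm = TRUE), sd = sd(x, na.rm = TRUE))))
print(round(need_summary, 2))

# Frequency table for a categorical demographic variable
table(responses$gender)
```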
Results A total of 103 questionnaires were distributed, garnering responses from 63 participants, which translated to a commendable response rate of 61%. In terms of gender distribution, 68% of the participants identified as male. Further details about the participants' characteristics and their respective fields of activity are presented in Table 1. The mean age of the participants was 39 ± 12 years, and the average work experience was 13 ± 7.3 years. The need for HTA Within Iran's health system, participants placed particular emphasis on distinct facets of HTA, as indicated by the mean scores out of 10 (see Table 2). Notably, allocative efficiency attained the highest mean score of 8.53. This underscores the pivotal role of optimizing resource allocation to attain optimal health outcomes. Additionally, this approach serves to mitigate health inequalities. Further, enhancing the quality of healthcare received a commendable rating of 8.17, reflecting its pivotal role in bolstering treatment efficacy, improving patient experiences and elevating overall health status. Transparent decision-making, with a rating of 7.92, was highlighted for its ability to build trust and accountability, promote fairness and facilitate evaluation and feedback. Budget control, rated at 7.58, was recognized as essential for financial sustainability, efficient resource allocation, accountability, and managing healthcare costs. Equity, with a rating of 7.25, was emphasized as a matter of social justice, aiming for equal access to healthcare services and reducing health disparities. Participant rankings, ranging from 1 to 6, were employed to discern the relative significance of various policy areas where HTA was considered in Iran. The mean scores for these policy areas were then prioritized in the subsequent order, as presented in Table 3. Regarding the coverage or reimbursement of individual health technologies, participants emphasized the importance of ensuring access and affordability for essential health technologies such as drugs, medical devices and treatments. They recognized the need for effective coverage and reimbursement mechanisms to facilitate access to these technologies. In terms of provider payment reform or pay-for-performance schemes, participants highlighted the significance of incentivizing healthcare providers on the basis of performance and quality of care. They acknowledged that implementing such schemes can encourage healthcare providers to deliver high-quality care and improve health outcomes. Participants also emphasized the value of HTA in informing the design of the basic health benefits package. They recognized the importance of using HTA to identify and prioritize cost-effective interventions and treatments for inclusion in the basic benefits package. Regarding the production of clinical guidelines and care pathways, the respondents recognized their importance in guiding healthcare providers to deliver high-quality and consistent care across different healthcare settings. For health service delivery design, participants mentioned improving coordination of care, patient-centredness, and the integration of services as key considerations. Regarding health technology registration, participants believed that it plays a vital role in ensuring patient safety, promoting quality assurance and building public trust. They recognized the importance of legal and ethical compliance, monitoring and surveillance, and health system planning facilitated by health technology registration.
The participants also highlighted the importance of various policy areas in healthcare, including coverage and reimbursement, provider payment reform, HTA-informed benefits package design, production of clinical guidelines, health service delivery design, and health technology registration. The participants ranked certain technologies in order of preference, indicating the areas where HTA can be utilized beyond its current scope (Table 4). The participants prioritized medicines as the top concern, highlighting the crucial role medications play in the healthcare ecosystem. The participants also acknowledged the importance of medication in enhancing overall health outcomes and reducing the burden of diseases on a broader scale. Vaccines assume a critical role in preventing the dissemination of infectious diseases and safeguarding individuals and communities against illnesses that can be averted through vaccination. The ranking underlines the significance of immunization and its contribution to public health promotion. Participants also expressed the significance of medical devices and diagnostics within the realm of healthcare. The category of other interventions encompasses a broad spectrum beyond pharmaceuticals, including surgical procedures and other medical interventions; participants recognized the value of these interventions in addressing specific health conditions and providing necessary medical treatments. In addition, participants acknowledged the importance of screening programs aimed at the early detection of diseases and subsequent referral to appropriate healthcare services. Service delivery initiatives or incentives aspire to enhance healthcare delivery and improve patient experiences through innovative models, enhanced accessibility, and incentivized care practices. Participants ranked this category lower, suggesting that while important, it may not be their primary focus compared with other technologies. Priority health or healthcare issues Considering the economic problems, sanctions, increase in the elderly population and demand for new technologies in Iran, participants in the study identified two priority health and treatment issues for Iran's health system. Affordable access to healthcare services: Given the economic challenges and sanctions affecting Iran, it is crucial to prioritize affordable access to healthcare services. This involves ensuring that individuals, including those with limited financial resources, have financial access to healthcare. By addressing affordability, we can reduce financial barriers and ensure that healthcare is accessible to the entire population. Integration of health technologies and innovation: With the increase in the elderly population and the demand for new technologies, it is necessary to prioritize the integration of health technologies and innovation in the healthcare system. Technology can play a vital role in improving healthcare delivery, enhancing efficiency and expanding access to healthcare services, particularly in remote or underserved areas. By embracing new technologies, we can optimize resource allocation, improve the overall quality of care and better meet the healthcare needs of the population, including the elderly. 2. Demand for HTA On the basis of the participants' responses, the three potential users of HTA outputs in Iran include the following organizations (Table 5).
Participants’ level of interest in the types of HTA outputs is summarized in Table 6 . In relation to safety, participants have ranked it as their primary concern due to the potential risks and consequences associated with healthcare interventions. With regard to effectiveness, participants have emphasized the significance of healthcare technologies being efficacious, as this directly influences patient outcomes. In terms of cost-effectiveness, the findings indicate that participants understand the importance of efficient resource allocation and maximizing the value of healthcare investments. In regard to the economy, participants’ ratings reveal that they considered economic factors beyond affordability. These factors encompassed considerations of sustainability and the availability of healthcare resources. Participants recognized the importance of evaluating the financial implications of implementing health technologies. Further, the relatively low ranking of social/ethical considerations can be attributed to several factors. Although participants acknowledged its significance, they prioritized other aspects, such as safety and effectiveness, due to the demands placed on the healthcare system by the people in Iran. These aspects were perceived as more pressing concerns in terms of priority. Lastly, concerning the three organizations introduced by the participants, the ranking points for the mentioned items are outlined in Table 7 . 3. Supply of HTA The participants were asked to state the strengths and weaknesses of their organizations in relation to evidence-based healthcare in Iran. The strengths and weaknesses are mentioned below. Strengths Research and academic institutions : Iran has various reputable research and academic institutions dedicated to healthcare. Collaboration with international organizations: Iran’s healthcare organizations often collaborate with international organizations and research institutions. Government support: The Iranian government recognizes the importance of evidence-based healthcare and has taken steps to support it. They have established policies and initiatives to promote evidence-based practices and research in healthcare organizations. National research networks: Iran has established national research networks, which facilitate the implementation of evidence-based healthcare. Health information systems: Iran has made significant progress in developing health information systems. These systems help collect, analyse and disseminate health-related data, including evidence-based research findings. Emphasis on continuing medical education: Continuing medical education is a priority in Iran’s healthcare system. Healthcare professionals are encouraged to stay updated with the latest evidence-based practices through continuous learning and professional development programs. Use of clinical practice guidelines: Iran has developed and implemented clinical practice guidelines in various healthcare specialities. These guidelines are based on rigorous evidence and provide healthcare practitioners with standardized recommendations for diagnosis, treatment and management of different conditions. Public health initiatives: Iran has implemented various public health initiatives on the basis of evidence. These initiatives target key health issues, such as preventive measures, health promotion and disease control. 
Weaknesses Limited access to up-to-date research findings: Healthcare organizations face challenges in accessing international journals, databases and research resources, which can hinder their ability to stay updated with the most current evidence. Research funding constraints: Insufficient financial resources may restrict the scope and scale of research projects, making it difficult to generate robust evidence to support healthcare practices. Research infrastructure challenges: Limited access to state-of-the-art laboratories, advanced equipment and research facilities can hinder the execution of rigorous studies and evidence generation. Variability in research quality: Variability in research quality may arise due to factors such as limited resources, inadequate training or a lack of standardized research methodologies. This can affect the reliability and generalizability of the evidence produced. Limited implementation of evidence: Despite the emphasis on evidence-based healthcare, organizations often face barriers such as resistance to change, lack of awareness among healthcare providers or inadequate support systems for translating evidence into practice. Fragmented healthcare system: Iran's healthcare system is characterized by a fragmented structure, with multiple stakeholders involved in the delivery of care. This fragmentation can lead to challenges in the coordination and dissemination of evidence-based practices across different healthcare organizations and settings. Data quality and standardization: Variability in data collection, documentation practices and record-keeping can affect the reliability and comparability of research findings and evidence-based guidelines. The participants highlighted the availability of local data as a key challenge in conducting HTA in Iran (Table 8). The participants raised multiple concerns pertaining to data collection and documentation practices within Iranian healthcare organizations, which result in restricted access to local data. Insufficient or incomplete recording of healthcare activities, health outcomes and service delivery information, as well as drug utilization and pricing data, can hinder the availability of comprehensive and reliable local data for HTA. In the context of Iran's healthcare system, limitations on data access and sharing among different stakeholders restrict the local data available for HTA. Participants identified challenges such as data privacy concerns, the lack of standardized data-sharing protocols and inadequate data infrastructure as hindrances to the efficient exchange of data. Moreover, participants reported that inadequate research capacity and funding are additional factors that may obstruct the generation of local data for HTA. Limited resources for research studies, clinical trials and data collection designs may lead to a dearth of comprehensive and up-to-date local data, thus compromising the informative value of HTA processes. The participants also noted that variability in data quality and the absence of standardized data collection methods may further undermine the availability of reliable local data. Organizations involved in supplying or generating evidence in Iran In Iran, various organizations play a role in supplying or generating evidence to support health policy decisions. These organizations include: Medical universities: Medical universities in Iran often engage in research activities and contribute to the generation of evidence.
They conduct studies, clinical trials and research projects to gather data and generate evidence relevant to health policy decisions. Research centres: Research centres in Iran play a significant role in generating evidence. These centres focus on specific areas of research, such as public health, epidemiology or medical sciences. They conduct studies, collect data and analyse information to generate evidence that can inform health policy decisions. Government institutions: Government institutions in Iran are responsible for health policy development and implementation. They often rely on evidence to make informed decisions. These institutions may conduct their research or rely on evidence generated by medical universities and research centres to support health policy decisions. Overall, the identified organizations in Iran, including medical universities, research centres and government institutions, have a role in both supplying and generating evidence. They contribute to the evidence base by conducting research, collecting data, analysing information and providing evidence to support health policy decisions. HTA infrastructure in Iran’s health system In Iran, efforts have been made to establish and strengthen the infrastructure for HTA, as presented in Table 9 . Training needs for generators to improve HTA capacity Participants provided a comprehensive list of training needs for capacity building in HTA for evidence generators. Training needs are rated on a scale of 1 to 10 (Table 10 ). Participants emphasized the significance of introducing and applying HTA, as it establishes the groundwork and facilitates a fundamental comprehension of HTA principles and practices. In the process of selecting topics for HTA studies, participants stressed the importance of choosing appropriate subjects to ensure that resources are allocated to areas with the greatest impact and need. The inclusion of health economic evaluation and economic modelling was considered highly valuable, given the growing importance of cost-effectiveness analysis and economic modelling in HTA. These methodologies assist decision-makers in evaluating the value of health technologies relative to their costs. Systematic reviews and meta-analyses play a crucial role in synthesizing and analysing available evidence, providing a robust foundation for HTA assessments. The consideration of healthcare cost is essential for understanding and estimating the financial implications associated with implementing health technologies, aiding in decision-making regarding resource allocation. Lastly, the measurement of health outcomes is vital for evaluating the effects of health technologies on patient outcomes and overall population health. Training needs for users to improve HTA capacity The participants rated the training needed for users in Iran to improve HTA capacity from 1 to 5. The ranking of each training, subject and their scope are presented in Table 11 .
Discussion This study represents the inaugural attempt to delineate the current trajectory of HTA in Iran. The Iranian health system adopts a hierarchical approach to decision-making and policy formulation. However, the prompt consideration of regional health matters remains insufficient, necessitating the incorporation of the nation's cultural and geographical diversity during the implementation of diverse technologies. The findings of this study yield valuable insights into the pivotal role of HTA in facilitating well-informed decision-making, thereby contributing to the attainment of UHC in Iran. These results demonstrate the significance of addressing training requirements, policy domains, stakeholder interests and the availability of local data, thus elucidating the crucial determinants influencing the capacity for HTA and evidence-based healthcare within the country. Participants in this investigation emphasized the fundamental significance of HTA in healthcare technology coverage and reimbursement policy. Over the past decade, Iran's healthcare system has witnessed an escalated demand for HTA while grappling with several financial constraints caused by various factors such as sanctions, the coronavirus disease 2019 (COVID-19) outbreak and an ageing population. The contemporary landscape of healthcare is marked by a plethora of innovative technologies, yet a substantial number of these advancements find themselves in a state of neglect, unsupported by insurance plans. Both public and private insurance entities exhibit a reluctance to embrace these novel technologies, creating a formidable barrier to their widespread accessibility and adoption. One of the primary culprits behind this reluctance is the exorbitant cost associated with certain health technologies, a significant impediment, particularly for individuals with limited financial means [ 26 ]. This paper emphasizes the pivotal role of HTA within the broader framework. In the evaluation of coverage and reimbursement, HTA emerges as a linchpin, offering a systematic approach to gauging the value and affordability of these health technologies [ 27 ]. By scrutinizing the potential benefits and costs linked to providing coverage and reimbursement for these technologies, healthcare policy-makers and payers are empowered to make judicious decisions. These decisions extend beyond mere financial considerations; they encompass the broader impact on patient outcomes, healthcare delivery and the overall efficacy of the healthcare system [ 28 ]. The integration of HTA into insurance plans emerges as a strategic imperative, resonating with a body of research that underscores its significance and potential [ 22 , 25 , 29 ]. Against the backdrop of financial constraints within the Iranian health system, policy-makers find themselves grappling with heightened concerns, exacerbated by the substantial expenditures attributable to the COVID-19 pandemic and disease management [ 19 ]. In response to these challenges, study participants underscored the imperative for judicious allocation of financial resources and the advancement of evidence-based healthcare services [ 19 ]. Addressing these concerns, Dang et al. draw attention to the pivotal role of HTA as a strategic tool for bolstering the efficiency of the health system. This recommendation is contextualized within the broader landscape of escalating healthcare costs, the unique demographic profile of Iran and the swift pace of technological advancements [ 7 ].
The examination of various types of technology within the study revealed a discernible preference among participants for advancements in medicine, health interventions and vaccines. Strikingly, this inclination mirrors the outcomes of a survey report by the WHO. The WHO report underscores that in many low-income countries, there exists a consensus regarding the indispensability of medicine and health interventions as pivotal technologies for HTA. This collective sentiment implies that a substantial portion of healthcare costs in these regions is intricately linked to these specific technologies. Moreover, the prevalence of diverse diseases, in both low-income and high-income countries, appears to exert a profound influence on the preferences for specific technologies within their respective healthcare systems [ 30 , 31 ]. This study identifies Iran's MOHME, MCLSW and government and private insurance organizations as the primary users of HTA, which is consistent with findings from precursory studies [ 1 , 22 , 25 ] and the WHO report. As custodians of health within their respective countries, these organizations bear the responsibility of providing services through the utilization of various technologies and ensuring their financial coverage. Among the limitations identified by the participants, the lack of appropriate local and national data was particularly emphasized. They considered the existing data to be insufficient and limited [ 32 ]. Notably, there is a pressing need for investment in the establishment and enhancement of data systems for health research to support HTA analysis in Iran. However, the application of evidence based on HTA requires both financial resources and political support, both of which are crucial for its development [ 33 ]. To garner political support, it is crucial to focus on capacity building and raising awareness among policy-makers regarding the potential of HTA [ 34 ]. Furthermore, there is a clear need for skilled human resources to expand HTA research in Iran, and although some efforts have been made in recent years to develop relevant training, such as the establishment of HTA courses in medical universities, there remains room for improvement. Regrettably, the MOHME has not earnestly incorporated the utilization of trained human resources in the field of HTA into its agenda, which has resulted in numerous policy decisions being made without their involvement and the necessary evidence preparation [ 35 ]. In Iran's healthcare system, decision-making primarily resides within the MOHME. However, leveraging the expertise of trained human resources across different regions can significantly contribute to the preparation of localized evidence and subsequently serve as a valuable bridge for national-level decision-making. Establishing effective channels of communication is vital in bolstering Iran's capacity for HTA research. Limitations This study has some limitations that warrant acknowledgment. Firstly, a majority of the study's participants were affiliated with government organizations or research institutes, potentially limiting the representation of non-governmental organizations and the private sector. Additionally, the study may have been subject to selection bias, stemming from organizations' willingness to participate or the identification of relevant entities. Another limitation pertains to missing data or complications in the analysis, potentially impacting the generalizability of the findings.
Conclusions Although the need for the establishment of HTA is generally acknowledged by authorities and health sector policy-makers in the context of achieving UHC, it is imperative that they prioritize HTA over policies influenced by specific stakeholders seeking individual gains. HTA should be employed as a means to assess new health technologies. To secure political support for and commitment to HTA, it is essential to enhance the understanding of HTA fundamentals among policy-makers and healthcare managers.
Background Health technology assessment (HTA), the comprehensive evaluation of health technologies, plays a crucial role in the allocation of resources and the promotion of equitable healthcare access. This study focuses on Iran's efforts to integrate HTA and aims to gain insights into stakeholder perspectives regarding capacity needs, demand and implementation. Methods In this study, we employed the HTA introduction status analysis questionnaire developed by the International Decision Support Initiative (iDSI), which has been utilized in various countries. The questionnaire consisted of 12 questions divided into three sections: HTA need, demand and supply. To identify key informants, we conducted a literature review and consulted with the Ministry of Health and Medical Education (MOHME), as well as experts in policy-making, health service provision and HTA. We selected stakeholders who held decision-making positions in the healthcare domain. A modified Persian version of the questionnaire was administered online from September 2022 to January 2023 and was pretested for clarity. The analysis of the collected data involved quantitative methods for descriptive analysis and qualitative methods for thematic analysis. Results In this study, a total of 103 questionnaires were distributed, resulting in a favourable response rate of 61% from 63 participants, of whom 68% identified as male. The participants, when assessing the needs of HTA, rated allocative efficiency as the highest priority, with a mean rating of 8.53, thereby highlighting its crucial role in optimizing resource allocation. Furthermore, healthcare quality, with a mean rating of 8.17, and transparent decision-making, with a mean rating of 7.92, were highly valued for their impact on treatment outcomes and accountability. The importance of budget control (mean rating 7.58) and equity (mean rating 7.25) were also acknowledged, as they contribute to maintaining sustainability and promoting social justice. In terms of HTA demand, safety concerns were identified as the top priority, closely followed by effectiveness and cost-effectiveness, with an expanded perspective on the economy. However, limited access to local data was reported, which arose from various factors including data collection practices, system fragmentation and privacy concerns. The priorities of HTA users encompassed coverage, payment reform, benefits design, guidelines, service delivery and technology registration. Evidence generation involved the participation of medical universities, research centres and government bodies, albeit with ongoing challenges in research quality, data access and funding. The study highlights government support and medical education as notable strengths in this context. Conclusions This study provides a comprehensive evaluation of Iran's HTA landscape, considering its capacity, demand and implementation aspects. It underlines the vital role of HTA in optimizing resources, improving healthcare quality and promoting equity. The study also sheds light on the strengths of evidence generation in the country, while simultaneously identifying challenges related to data access and system fragmentation. In terms of policy priorities, evidence-based decision-making emerges as crucial for enhancing healthcare access and integrating technology. The study stresses the need for evidence-based practices, a robust HTA infrastructure and collaboration among stakeholders to achieve better healthcare outcomes in Iran.
Supplementary Information The online version contains supplementary material available at 10.1186/s12961-023-01097-0. Keywords
Supplementary Information
Abbreviations HTA: Health technology assessment; iDSI: International Decision Support Initiative; MOHME: Ministry of Health and Medical Education; UHC: Universal health coverage; SID: Scientific Information Database; WHO: World Health Organization Acknowledgements Not applicable. Author contributions MaB, AA, SA, BDT and SA contributed to the development of the idea for this article. MeB, AB, AR, SJE and MaB partook in the acquisition and analysis of data. All co-authors joined them in critically interpreting and discussing the data. MaB, AA, SJE and SS wrote sub-sections of this article and provided input into further sub-sections of the article, along with AA, MaB, AB, SA, BDT and AR. All authors have critically revised content, have approved the submitted version of this article and are accountable for the accuracy or integrity of any part of the work. Funding The authors declare that no funding was received to assist with the preparation of this research. Availability of data and materials All data generated or analysed during this study are included in this published article. Declarations Ethics approval and consent to participate The study was approved by the ethical committee at Lorestan University of Medical Sciences (IR.LUMS.REC.1399.112). Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-16 23:45:35
Health Res Policy Syst. 2024 Jan 15; 22:11
oa_package/77/71/PMC10789076.tar.gz
PMC10789077
38221608
Correction: Mol Cancer 18, 33 (2019) https://doi.org/10.1186/s12943-019-0947-9 Following publication of the original article [ 1 ], after thoroughly checking the original data, the authors found three unintentional image duplications in the paper and requested that the figures be updated as stated below. The misused image in Fig. 2 K should be replaced with the correct image; the misused images in Fig. 5 d and 5i should be replaced with the correct images; and the misused image in Fig. 8 e should be replaced with the correct images. To ensure the reliability of the experimental conclusions, the experiments were repeated three times by different authors from the team, who have no conflict of interest. The correction does not change the results or scientific conclusions of this article. We sincerely apologize to the editor, reviewers and readers for the errors and any confusion they may have caused, and we wish to correct these errors as soon as possible.
CC BY
no
2024-01-16 23:45:35
Mol Cancer. 2024 Jan 15; 23:14
oa_package/18/da/PMC10789077.tar.gz
PMC10789078
38221624
Background Patient deaths are impactful events for professional caregivers, influencing both their professional and personal lives [ 1 , 2 ]. For physicians and nurses, a patient death can carry both professional meaning, such as suggesting a failure in one's work, highlighting the limitations of medical science, and providing a case to learn from, and personal meaning, such as signifying the loss of a valuable life and an acquaintance and inspiring empathy for the bereaved family [ 3 ]. Shortly after a patient death (approximately one week), bereavement reactions such as grief, frustration, and death anxiety manifest among professional caregivers [ 4 , 5 ]. In addition to direct short-term reactions, unresolved professional bereavement experiences are related to a series of negative outcomes, such as trauma [ 6 ], depression [ 7 ], and the desire to quit one's job [ 8 ]. According to a study of 828 healthcare workers in Israel during the peak of the COVID-19 outbreak, the risk of posttraumatic stress symptoms was higher in COVID-19 wards, where significantly more patient deaths took place, than in non-COVID-19 wards [ 9 ]. However, patient death experiences may also contribute to positive outcomes. Among 254 Korean nurses, nursing assistants, social workers, and care workers, the psychological suffering experienced after patient deaths was found to be positively linked to posttraumatic growth [ 10 ]. Professional quality of life (ProQOL) is "the quality one feels in relation to their work as a helper" [ 11 ]. It consists of three aspects, namely, burnout (BO; feelings of unhappiness, disconnectedness, and insensitivity to the work environment), secondary traumatic stress (STS; feelings of being trapped, on edge, exhausted, overwhelmed, and infected by others' trauma), and compassion satisfaction (CS; feeling satisfied by one's job and by the helping itself) [ 11 ]. Increasing attention has been given to ProQOL among healthcare professionals who frequently experience patient deaths. Among oncology nurses, the prevalence rates of CS, BO, and STS were estimated to be 22.89%, 62.79% and 66.84%, respectively [ 12 ]. In emergency department nurses, low to average levels of compassion fatigue and BO and average to high levels of CS were detected [ 13 ]. Close links have been found between professional caregivers' workload, attitudes, perceived competence, and coping methods related to patient deaths and the BO aspect of ProQOL. For instance, delivering death notifications was associated with increased BO rates among American emergency medical service professionals [ 14 ]. Among 177 oncologists from Israel and Canada, BO was positively related to the perception that expressing emotion over patient death is weak and unprofessional [ 15 ]. Higher perceived death competence was shown to predict lower BO in Chinese novice oncology nurses [ 16 ]. Moreover, grief avoidance among nursing assistants and homecare workers who recently experienced patient death in the U.S. was found to lead to higher risks of BO [ 14 ], and psychological education interventions with topics including coping with patient death were found to significantly reduce BO among British doctors [ 17 ]. Despite the accumulating evidence linking patient-death-related variables with burnout, studies that directly examined the associations between professional caregivers' patient death experiences and ProQOL have failed to detect direct connections.
For instance, among 65 pediatric and neonatal intensive care nurses in the U.S., the relationships between experiences of patient death or near death and BO, STS, and CS were all nonsignificant [ 18 ]. According to a survey of Israeli palliative care workers, levels of CS, STS, and BO did not differ between the high level of exposure to death group (high-LED; measured by the number of actively dying patients whom the participant was exposed to or cared for and the number of hours of direct care for terminally ill patients) and the low-LED group [ 19 ]. Previous studies' failure to detect a direct link may be due to the investigation of objective exposure to, rather than subjective experiences of, patient deaths as the independent variable. According to constructive views, adverse experiences such as bereavement and trauma are processes of meaning reconstruction, and how people react and cope depends on their perceptions and attributions rather than on what happened in reality [ 20 , 21 ]. Therefore, it is possible that more comprehensive measurements of subjective patient death experiences may help reveal their associations with ProQOL. Several gaps can be identified in previous explorations of the relationship between patient death experiences and ProQOL. First, attention has been given exclusively to objective exposure to patient deaths as the independent variable, while subjective experiences have been neglected. Second, the research focus has been more on the BO aspect of ProQOL than on STS and CS. Third, most previous studies were conducted in only one department, which limits the generalizability of the findings. To fill these gaps, the present study used secondary data to answer the following research question: How are subjective and objective patient death experiences related to the BO, STS, and CS components of ProQOL among physicians and nurses from various departments in hospitals?
Material and methods Design The present study adopted a cross-sectional design. Secondary data analyses were conducted, and the researchers followed the STROBE checklist for reporting cross-sectional studies [ 22 ]. In the original project, 563 Chinese urban hospital physicians and nurses who had experienced patient deaths were recruited between August and December 2018, and data were collected through an online survey. Both convenience sampling and snowball sampling methods were used. Potential participants known personally to the authors of the original study were contacted; in addition, to improve the regional diversity of the sample, advertisements were purposefully sent via a national medical conference whose members included professional caregivers from all over the country. The participants were asked to provide basic information, recall their most recent patient death experience, review their accumulated global changes after all the patient deaths in their career, and rate their ProQOL. Additionally, they were encouraged to share the link with friends and colleagues after completing the questionnaire. More details of participant recruitment and data collection are reported elsewhere [ 23 , 24 ]. Study participants The inclusion criteria for the present study were (1) being a physician or nurse whose most recent patient death experience was more than one month before the survey and (2) having a reasonable response time (no shorter than 173 items * 2 s/item = 346 s [ 25 ]) for the online survey. In the original project, 72.4% of the participants reported that their most intact memory of the latest patient death faded in less than 1 month [ 23 ]. Therefore, the one-month criterion was used to prevent measurements of all past patient death experiences and ProQOL from being severely distorted by disturbing memories of the most recent patient death. Meanwhile, the response time criterion was used to exclude participants prone to careless responding, since extremely fast responses indicate a failure to have read, understood, and responded carefully and accurately to survey items [ 26 ]. The minimum number of cases needed was calculated with G*Power [ 27 ]. For the present study, at least 89 cases were needed for the multiple linear regressions (two tails, f2 = 0.15, α = 0.05, power = 0.95, maximum number of predictors = 13). Instruments Past patient death experiences, ProQOL, and several control variables were extracted from the original dataset. For past patient death experiences, the objective measurement was a question asking the number of patient deaths the participant had experienced throughout his or her career (≤ 3, 4–10, 11–20, 21–50, > 50). Meanwhile, the subjective measurement assessed the global changes in the participant's personal and professional life, as well as the ways he or she had faced professional bereavement, attributed to all patient deaths in his or her career. This construct was measured by the Accumulated Global Changes (AGC) subscale of the Professional Bereavement Scale, a 15-item measurement that was developed and validated in Mandarin.
The scale includes factors related to new insights (e.g., “I cherish my life more”), more acceptance of limitations (e.g., “I am more aware of the limitation of medical science”), more death-related anxiety (e.g., “I am more anxious about my own mortality”), being less influenced by patient deaths (e.g., “The aftereffects of patient deaths become weaker for me”), and better coping with patient deaths (e.g., “I am better at coping with patient deaths”), which were measured by 4, 3, 4, 2, and 2 items, respectively [ 23 ]. Participants were asked to rate “the extent to which you have been changed by all patient deaths in your career in each of the following aspects” on a scale from 0 (no such change or the change was not induced by experiencing patient deaths) to 4 (yes, a great deal). Higher AGC scores reflected more intense subjective patient death experiences in general. ProQOL was measured with the Professional Quality of Life Scale by Stamm [ 11 ]. Three subscales were used to assess BO, STS, and CS, each with 10 items [ 11 ]. Each item was rated on a 5-point scale. After reverse coding of some items, a higher total score on each subscale indicated a higher level of the construct being measured. With permission from the ProQOL Office, the researchers of the original study slightly modified the expressions in the official simplified Chinese version of the scale. Data on basic demographic information, work-related information, and bereavement-related information were extracted as control variables. Basic demographic information included participants’ age, sex, and religious beliefs. Work-related information consisted of participants’ occupation (nurse/physician), department, hospital level (primary, secondary, tertiary), and work experience (< 1 year, 1–3 years, 4–10 years, 11–20 years, > 20 years). Participants were also asked to rate their commitment to their job and sense of mission in their job on a scale from 0 (extremely weak) to 4 (extremely strong). In terms of bereavement-related information, participants reported their familial bereavement experiences (whether loved ones were lost in their personal life) in the past 2 years and the time since their most recent patient death experience (1–6 months, 6–12 months, > 12 months). Analysis process Analyses were conducted with R packages [ 28 ]. Physicians and nurses were grouped together in the analyses, as no significant differences were detected in previous studies between the two groups in either qualitative explorations of their lived experiences of professional bereavement or quantitative measurements of their short-term reactions after each patient death [ 29 ]. Following descriptive analyses of all variables, Little’s MCAR test was run to determine whether data were missing completely at random. Multiple imputation by chained equations [ 30 , 31 ] was employed to handle missing data, and 5 imputations were run [ 32 ]. Afterward, the correlations between BO, STS, and CS were calculated. Then, bivariate analyses were run to explore the relationships between the three ProQOL scores and all other variables (the 2 variables of patient death experiences and the 11 control variables). Correlational analysis (Pearson’s r), t tests, and ANOVAs were run for continuous, dichotomous, and categorical variables, respectively. Control variables that had significant links with at least one ProQOL score were selected for further regression analyses.
Finally, three multiple linear regressions were run after the assumptions of the linear model (normality, linearity, and homoscedasticity) were assessed. BO, STS, and CS were the dependent variables. All models used the number of past patient deaths and the AGC score as independent variables, together with the same list of control variables. All dichotomous and categorical predictor variables were turned into dummy variables.
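For concreteness, the sketch below outlines this pipeline in R, the language the authors report using. The package choices (naniar for Little's test, mice for imputation) and all dataset and variable names (dat, BO, AGC, and so on) are illustrative assumptions, not the authors' actual script.

```r
library(naniar)  # mcar_test(): Little's test that data are missing completely at random
library(mice)    # multiple imputation by chained equations

# 'dat' is a hypothetical data frame holding the study variables.
mcar_test(dat)                     # a nonsignificant p supports the MCAR assumption

imp <- mice(dat, m = 5, seed = 1)  # 5 imputations, as reported

# ProQOL scoring note: selected 5-point items are reverse-coded before
# summing, e.g. dat$item_r <- 6 - dat$item for items scored 1-5.

# One of the three regressions (BO as the outcome); lm() dummy-codes
# factor predictors automatically.
fit <- with(imp, lm(BO ~ n_patient_deaths + AGC + age + occupation +
                      department + work_experience + job_commitment +
                      sense_of_mission))
summary(pool(fit))                 # estimates pooled across the 5 imputations
```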
Results Background A total of 306 participants were involved in the analyses, which exceeded the minimum number required for the regressions. The average age was 32.33 years ( N = 306, Range: 22–56, SD = 7.23), and the majority of participants were female ( n = 276, 88.7%) and nurses ( n = 257, 87.0%). More information is shown in Table 1 . Of the 306 participants, 43 and 21 had missing data on objective and subjective past patient death experiences, respectively. Little’s MCAR test showed that the data were missing completely at random (χ 2 = 45.0, df = 44, p =.431). No case was deleted due to missing data. The number of past patient deaths was no more than 3, 4–10, 11–20, 21–50, and more than 50 for 62, 88, 41, 51, and 21 participants, respectively. The mean total AGC score was 35.29 ( n = 285, α = 0.941, SD = 14.51). On average, participants scored 17.49 ( n = 306, α = 0.613, SD = 6.19) on the BO subscale, 16.98 on the STS subscale ( n = 306, α = 0.935, SD = 10.43), and 23.22 ( n = 306, α = 0.937, SD = 9.77) on the CS subscale. Positive correlations were found between BO and STS ( r =.49, p < 0.001) and between STS and CS ( r =.46, p < 0.001), while the link between BO and CS was negative ( r = −.37, p < 0.001). The results of the bivariate analyses between all control variables and the three ProQOL scores are shown in Table 2 . Participants’ sex, familial bereavement experiences in the past 2 years, religious beliefs, hospital level, and time since the most recent patient death did not have significant connections with any of the ProQOL scores. Age, occupation, department, work experience, job commitment, and sense of mission were added along with the number of past patient deaths and the AGC total score into the regressions against ProQOL scores. For the linear models, the assumptions of linearity ( p ≥ 0.106) and homoscedasticity ( p ≥ 0.476) were met for all three models, while the assumption of normality was only satisfied by the STS model (BO: skewness: p =.003, kurtosis: p <.001; STS: skewness: p =.231, kurtosis: p =.970; CS: skewness: p =.025, kurtosis: p =.311). That is, the data for BO and CS were not normally distributed. According to Schmidt and Finan [ 33 ], violations of the normality assumption may not noticeably distort the results as long as the number of observations per variable is larger than 10. Therefore, no transformation was applied to the predicted variable of BO or CS. Objective patient death experiences and ProQOL In the bivariate analyses, the number of past patient deaths was not significantly linked with BO ( F = 0.42, p =.798), STS ( F = 2.17, p =.069), or CS ( F = 1.11, p =.348). Table 3 shows the outcomes of the three multivariate regressions. In the BO ( R 2 = 0.20, adjusted R 2 = 0.14), STS ( R 2 = 0.39, adjusted R 2 = 0.35), and CS ( R 2 = 0.44, adjusted R 2 = 0.40) models, the number of past patient deaths was not significantly linked with the predicted variable. Subjective patient death experiences and BO, STS, and CS In the bivariate analyses, the correlations between AGC and BO ( r =.16, p =.005), AGC and STS ( r =.58, p < 0.001), and AGC and CS ( r =.49, p < 0.001) were all significant. In the multivariate regressions against BO, STS, and CS, higher AGC scores were associated with higher outcome scores. Moreover, a stronger sense of mission was significantly associated with lower BO, higher STS, and higher CS scores.
Participants from various departments differed in their BO and CS, while the effects of age, occupation, work experience, and job commitment on the ProQOL measures were no longer significant once the other variables were entered into the models.
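Because the assumption checks above are not tied to named tests in the text, the following R sketch shows one plausible way to run them, reusing the hypothetical dat and model from the previous sketch: RESET for linearity, Breusch–Pagan for constant error variance, and the D'Agostino and Anscombe–Glynn tests for residual skewness and kurtosis.

```r
library(lmtest)   # resettest(), bptest()
library(moments)  # agostino.test(), anscombe.test()

# Hypothetical single-dataset version of the BO regression.
fit_bo <- lm(BO ~ n_patient_deaths + AGC + age + occupation + department +
               work_experience + job_commitment + sense_of_mission, data = dat)

resettest(fit_bo)                  # linearity (Ramsey RESET)
bptest(fit_bo)                     # homoscedasticity (Breusch-Pagan)
agostino.test(residuals(fit_bo))   # residual skewness
anscombe.test(residuals(fit_bo))   # residual kurtosis
```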
Discussion Objective patient death experiences and ProQOL Using secondary data from 306 Chinese urban hospital physicians and nurses, the present study explored the link between patient death experiences and ProQOL. For the first time, the effects of objective and subjective patient death experiences on ProQOL were separated, and the research question was answered. While objective experiences, measured by the number of past patient deaths, were not significantly linked with the BO, STS, or CS scores in either the bivariate analyses or the regressions, more intense subjective experiences, as reflected by more global changes caused by patient deaths, were associated with higher scores in all three ProQOL aspects. The failure to detect direct links between healthcare professionals’ objective patient death experiences and ProQOL is consistent with previous findings [ 18 , 19 ], and the nonsignificant impacts echo a constructive view of people’s experiences: people’s reactions to and coping with adverse events are based not on what actually happened but on their perceptions and interpretations of what happened [ 20 , 21 ]. For instance, among Hong Kong adults grieving for their loved ones, higher subjective traumatic levels associated with the event were linked with more intense depression and complicated grief, while objectively traumatic death (caused by accident or suicide rather than illness or senility) had no relation to either outcome [ 34 ]. Subjective patient death experiences and BO, STS, and CS Among professional caregivers, the links between subjective stressors and ProQOL have long been established. For both Indian and British nursing staff, perceived stress was found to be positively linked with BO and STS/compassion fatigue and negatively linked with CS [ 35 , 36 ]. In line with these findings, the present study revealed a clear and direct connection between patient death-specific subjective experiences and ProQOL. Although BO, STS, and CS reflect both positive and negative aspects of being a professional caregiver, AGC scores are positively associated with all of them. That is, the more that professional caregivers think that they have been changed by all patient deaths in their career, the more they experience BO and STS and the more they are satisfied with their job and with the act of helping others. Similar to the finding that bereavement of loved ones can cause both psychological trauma and posttraumatic growth [ 37 ], it seems that profound and in-depth patient death experiences also lead to pain as well as gain. The link between the AGC score and BO can be explained by the feelings brought about by each patient death. Facing death can be exhausting, especially when deaths are perceived as very impactful [ 38 ]. Each patient death can lead to grief, guilt, frustration [ 39 ], and, in the Chinese context, worries about potential professional-patient conflicts [ 2 ]. Repetitive exposure to such intense feelings may, on the one hand, directly cause profound changes among professional caregivers (as was measured by the AGC) and, on the other hand, indirectly force professional caregivers to become disconnected and insensitive as a strategy for self-protection [ 40 ]. Moreover, as reflected in the AGC measurement, many of the changes that are caused by patient deaths are in fundamental dimensions that relate to the meaning of life and death and the value and limitations of medical science. The shattering of basic assumptions is itself a sign of major traumatic experiences [ 41 ].
Professional caregivers who report more changes are more likely to be traumatized by patient deaths, and PTSD symptoms correlate positively with vicarious trauma among helping professionals [ 42 ]. Meanwhile, when professionals learn more about the uncertainty of life and the limitations of medical science, they cherish the things they can do for their patients out of a “sense of humbleness about their work” [ 43 ]. This can result in a higher sense of value attached to the job of helping others. In addition, when professional caregivers gradually learn to be affected less by patient deaths and become better at coping, as was also measured in the AGC, a more internal locus of control can be developed, which is positively linked with CS [ 44 ]. Notably, sense of mission was significantly associated with all ProQOL scores: a higher sense of mission was related to less BO, more STS, and more CS. As explained by a Chinese physician in an interview, a sense of mission is a double-edged sword in facing patient deaths: on the one hand, it makes healthcare professionals experience more pain when witnessing patients and their families suffer; on the other hand, it helps them rebound more quickly from such impacts and focus on their duty [ 2 ]. These two aspects can explain the links of sense of mission with STS and CS, respectively. Moreover, a professional caregiver with a higher sense of mission may be less likely to become disconnected from patients and insensitive to their pain, despite the challenges that professional bereavement experiences bring. Therefore, their BO risks would be lower. Contributions and limitations This study is the first to separate subjective and objective aspects in the exploration of how patient death experiences are linked to ProQOL. With this approach, a direct and significant link between patient death experiences and ProQOL, which was previously obscured by the exclusive measurement of the number of death events and the neglect of the accumulated changes induced by those events, was revealed for the first time. The findings not only fill a theoretical gap but also further demonstrate how interpretations rather than facts are vital in professional bereavement experiences. Moreover, a comprehensive measurement of ProQOL was adopted, participants from various departments were involved, and a series of control variables were accounted for. All of these factors enhance the generalizability of the findings. Nonetheless, several limitations exist in the present study. First, a cross-sectional design was adopted, so causal inferences could not be made. Second, convenience sampling was used, and physicians were underrepresented in the present sample, which may have biased the findings. Third, recall bias might have been introduced with the usage of self-reported data. Practical implications The findings of the present study provide practical insights. First, more awareness of patient death experiences should be raised in healthcare services for the sake of not only their direct outcomes but also their long-term impacts, such as impaired ProQOL. To prevent patient death experiences from severely interfering with physicians’ and nurses’ ProQOL, support needs to be provided to those who report being severely influenced by patient deaths rather than those who simply encounter larger numbers of patient death events.
Among the targeted population, meaning-centered interventions [ 45 ] and education on coping strategies [ 17 ] that are adapted to the unique context of patient deaths should be used to deal with threats posed by a sense of disconnection and shattered basic assumptions and to facilitate professionals’ appreciation of the value of the helping profession. Future studies Future studies can use longitudinal designs to reveal clearer causal links between professional caregivers’ patient death experiences and quality of life. More efforts are needed to identify potential mediators and moderators of this relationship to gain more theoretical understanding as well as practical insights. Moreover, a comprehensive framework that involves perceived impacts, perceived competence, values, attitudes, etc., regarding patient deaths can be constructed to form a comprehensive view of all subjective elements of professional bereavement and test how each of the elements contributes to variances in ProQOL when joined with other factors.
Conclusions Among Chinese physicians and nurses in the present study, objective and subjective patient death experiences had different links to ProQOL. BO, STS, and CS were all positively related to perceived accumulated global changes caused by patient deaths in regressions with control variables. However, none of the three aspects showed a significant link to the number of past patient deaths.
Background Patient deaths are impactful events for professional caregivers in both their professional and personal lives. The present study aims to explore how both subjective and objective patient death experiences are related to various aspects of professional quality of life (ProQOL) among physicians and nurses. Methods Secondary analyses of cross-sectional data were conducted, and 306 Chinese physicians and nurses whose most recent patient death experience was more than one month prior were included. Objective and subjective patient death experiences were measured based on the number of past patient deaths and the Accumulated Global Changes (AGC) subscale of the Professional Bereavement Scale, respectively. ProQOL was measured with the Professional Quality of Life Scale. Regressions were run following bivariate analyses. Results The number of past patient deaths was not significantly linked with any of the three ProQOL scores in either the bivariate analyses or regressions. Meanwhile, higher AGC scores were associated with higher burnout, secondary traumatic stress, and compassion satisfaction scores after participants’ age, occupation (physician/nurse), department, work experience, job commitment, and sense of mission were controlled. Conclusion Subjective rather than objective past patient death experiences link significantly with all three aspects of physicians’ and nurses’ ProQOL. The more professional caregivers think that they have been changed by all past patient deaths in their career, the more they experience burnout and secondary traumatic stress, but the more satisfied they are with their job and the helping itself.
Acknowledgements Not applicable. Author contributions CQ.C. generated the research idea, analyzed and interpreted the data, and drafted the manuscript. JL.C. was a major contributor in revising the manuscript. Both authors read and approved the final manuscript. Funding This study was supported by the National Natural Science Foundation of China (32200898) and the Southeast University Zhishan Young Scholars Support Program. Data availability The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate All the methods included in this study are in accordance with the Declaration of Helsinki. As no new data were collected in the present study, no ethical approval was required. The first author of the present study was the primary investigator and owner of the data in the original project, which was approved by the Human Research Ethics Committee of the University of Hong Kong (reference number: EA1807022). All participants read the entire consent letter on the first page of the online questionnaire and gave their informed consent by clicking “I will participate in the research” before formally entering the survey. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:35
BMC Nurs. 2024 Jan 15; 23:41
oa_package/35/45/PMC10789078.tar.gz
PMC10789079
38221623
Introduction Differentiated thyroid cancer (DTC) is a common type of malignancy of the thyroid gland. More than 80% of all thyroid cancer cases consist of papillary carcinoma [ 1 ]. According to World Health Organisation (WHO) data, there are 8 subtypes of papillary thyroid cancer (PTC): infiltrative follicular, tall cell, columnar cell, hobnail, solid, diffuse sclerosing, Warthin-like, and oncocytic [ 2 ]. We present a case of a unique subtype of PTC, the Warthin-like variant, in a 40-year-old woman. In the vast majority of cases, the tumor is diagnosed in women aged 30 to 50 [ 3 ]. Kim et al. identified the Warthin-like variant of papillary thyroid carcinoma (WLV-PTC) in 0.2% of 8179 patients [ 4 ]. Therefore, there is a lack of data regarding long-term prognosis and tumor behavior in the literature. Approximately 95 patients with this pathology subtype have been described in published articles. The presence of papillary structures lined with malignant oncocytic cells and a lymphoid stroma are the main characteristics of WLV-PTC [ 5 ]. The first description of WLV-PTC was made by Apel in 1995, who noted oxyphilic cells against a background of lymphocytic stromal infiltration; this archetype resembles papillary cystadenoma lymphomatosum of the salivary gland [ 6 ]. Local lymph node metastases are relatively rare (22%) compared with classic PTC [ 7 ]. This case report confirms the common histological characteristics of WLV-PTC, its epidemiology, and its influence on overall and disease-specific survival. The five-year disease-free survival was also assessed.
Discussion WLV-PTC is one of the 8 subtypes of classical PTC according to the WHO [ 2 ]. This tumor develops mainly in women with an average age of 50 years [ 7 ]. The first description of cases of WLV-PTC in the English literature was made by Apel et al. in 1995, who noted that this pathology resembled papillary cystadenoma lymphomatosum of the salivary gland [ 6 ]. With this report, we add to the literature one new case of WLV-PTC with lymph node metastases. The clinical presentation of this pathological variant is the same as in classic PTC [ 8 ]. Our patient likewise did not have any symptoms of the disease, even in the presence of Hashimoto thyroiditis. The nodule was identified incidentally. The high TPO Ab titer (> 500) did not cause hypothyroidism or related complaints. Annual check-ups should be conducted to avoid missing an occult malignancy. On the other hand, overdiagnosis may lead to unnecessary surgeries with subsequent complications. US findings may indeed differentiate a suspicious lesion from a benign one. In a study of 32 cases, Ning et al. pointed out some features of WLV-PTC nodules, determining that in 97% of the cases nodules were solid or nearly solid, and in 78.8% they were hypoechoic [ 9 ]. Our report confirms these findings. US is considered the main method for deciding whether to apply FNA biopsy. The Thyroid Imaging, Reporting and Data System Lexicon Directory (TI-RADS) can be used in the routine assessment of malignant nodules. Patients with TI-RADS category 4 and 5 lesions should unequivocally undergo biopsy because of the high positive predictive values [ 10 ]. However, the preoperative cytological definition of WLV-PTC is highly problematic. FNA should be performed in all suspicious thyroid lesions, as this method can identify the exact genesis of the pathology. WLV-PTC may be missed on FNA because of the overlapping features of lymphocytic infiltration, oncocytic cells, and papillary nuclear signs [ 11 ]. The cytological pattern of WLV-PTC can imitate distinct pathologies, such as Hurthle cell carcinoma, the oncocytic variant of PTC, lymphocytic thyroiditis, and follicular lesions [ 12 ]. Missaoui et al. examined 150 patients with WLV-PTC, 87.5% of whom fell into Bethesda categories 5 and 6 [ 3 ]. The FNA result of the presented patient was Bethesda category 6 (malignant), which is in line with these statistics. As mentioned above, the tumor was less than 10 mm, but a highly suspicious lesion should nonetheless be biopsied. Correct preoperative identification of the tumor subtype may be beneficial for choosing the appropriate treatment. On macroscopic examination, the nodule has predominantly been described as a grey lesion with undefined, irregular, and sharp margins [ 7 ]; the pathology in our case showed the same pattern within the gland. The method of surgery should be chosen according to the size of the tumor, FNA results, US examination of the neck, the medical history of the patient, and clinical findings. In our case, we chose total thyroidectomy with CLND, as we had previously done in similar cases. The choice was based on the cytology, the presence of thyroiditis, and the opinion of the patient. We thereby removed metastatic nodes, but the impact of this on long-term survival remains unknown because of the rarity of the pathology. Paliogiannis et al. (2012) reviewed the data of several case series.
Among 54 patients with WLV-PTC, local lymph node metastases were found in 22% of cases [ 7 ]. However, Rotstein (2009) revealed that the incidence of occult nodal metastases of PTC varied between 30 and 90% [ 13 ]. The necessity of lymph node dissection in similar cases remains controversial to date. Given these rates, we decided to perform CLND in this particular case, which may help avoid local recurrence or persistent disease in the future. Mutation testing might facilitate decision-making about the extent of surgery; however, the absence of an affordable mutation-based gene panel limited our opportunity to detect the common BRAF V600E mutation. Articles about advanced WLV-PTC and cases with long-term disease-free survival are underrepresented in the literature. A deeper analysis of the tumor’s behavior will help to choose the most effective treatment option.
Conclusion Our case report confirms the main manifestations of WLV-PTC. The most important are as follows: indolent existence of lesions, absence of clinical symptoms, chronic lymphocytic thyroiditis as a background, and the presence of specific histological patterns (papillae lined with oncocytic cells, nuclear grooves, and pseudoinclusions). The five-year disease-free survival appears favorable for this pathology. Neither performing nor avoiding CLND can be strongly recommended due to the scarcity of data about the incidence of local lymph node metastases. More information on this subtype of PTC should be collected to understand its metastatic behavior, long-term survival rates, and the subsequent appropriate treatment.
Background We present a rare case of a thyroid lesion marked as the Warthin-like variant of papillary thyroid carcinoma (WLV-PTC) with lymph node metastases. Proper preoperative identification is difficult because of the unspecific cytology features and common ultrasound characteristics of this malignant tumor. The long-term prognosis cannot be thoroughly described due to the scarcity of data. The purpose of the presentation is to show the common characteristics and long-term survival rates of an uncommon variant of differentiated thyroid cancer (DTC). Therefore, the data represented in this article can make a significant contribution to future investigations. Case presentation A 40-year-old Ukrainian woman had a lesion in the thyroid gland, which was incidentally discovered during a medical checkup. Ultrasound (US) features were similar to those of a common suspicious nodule, with typical signs of suspicion for malignancy (TI-RADS-4) on a background of thyroiditis. A thorough investigation of the neck showed lymph nodes with nonspecific US features in both lateral compartments. The lymph nodes were hypoechoic, oval-shaped and 10 mm wide, with regular contours, low central vascularity, preserved hilar fat, and no cystic formation. The patient did not have any complaints or changes in hormone status. No hereditary findings linked with cancer were discovered. The woman had been living for a long time in a country with a high level of insolation, which was atypical of her ordinary environment. Fine-needle aspiration (FNA) of the lesion was performed, and a Bethesda system category 6 result was obtained. Total thyroidectomy with central lymph node dissection was accomplished. The histological conclusion was WLV-PTC on a background of lymphocytic infiltration of the gland, with metastasis to the lymph nodes. Inpatient radioactive iodine (RAI) ablation (100 mCi) was subsequently performed, with thyroid hormone withdrawal preceding RAI. One year after the surgery, the level of thyroglobulin (Tg) was 0.2 ng/ml. Up to the present time, the five-year follow-up has not demonstrated any signs of recurrence, based on the Tg level (< 0.04 ng/ml), Tg antibodies (< 14 IU/ml), and neck US without any structural disease. Conclusion WLV-PTC resembles salivary gland tumors with similar histological features. This variant is not well known but is often associated with lymphocytic stromal infiltration and a low risk of lymph node metastases. This rare subtype is regarded as having long-term survival rates similar to those of classic papillary thyroid cancer (PTC).
Case presentation A 40-year-old Ukrainian female had a check-up, and ultrasonography (US) of the neck was performed. Previously, no complaints of the presence of lumps or painful zones had been recorded. The patient had lived in a highly insolated climate zone for several years. She did not report a history of radiation exposure or thyroid diseases among her relatives. Clinical findings US of the neck noted heterogeneous echogenicity of the thyroid gland with distinct areas of hypo- and hyperechogenicity. The volume of the gland was 8.3 cm 3 . A small hypoechogenic 8 mm wide nodule with an irregular, spiculated margin and with microcalcifications was found in the left lobe. The nodule was taller than wide, which is a specific feature of malignancy. The US of the neck also detected small lymph nodes (less than 10 mm) in the bilateral compartments, which were hypoechoic and oval with a smooth margin, an echogenic hilus, and hilar vascularity. In the central compartment, no suspicious lymph nodes were found. A fine-needle aspiration (FNA) biopsy was performed several days after the pathology had been found. The result belonged to Bethesda category 6 (malignant). FNA of the nodules in the lateral compartments was not performed because of the absence of suspicious signs on US. The main laboratory tests reflecting thyroid function were as follows: thyroid-stimulating hormone (TSH)—2.28 mU/L (reference 0.45–4.0), free thyroxine (T4)—0.7 ng/dL (reference 0.8–1.8), thyroid peroxidase antibodies (TPO Ab)—544 IU/ml (reference less than 9), ionized calcium—1.23 mmol/L (reference 1.16–1.31). The patient underwent total thyroidectomy (TT) with central lymph node dissection (CLND) one month after the pathology had been detected. The Delphian node, pretracheal lymph nodes, and fat tissue were dissected. Histopathology revealed typical features of Warthin-like papillary thyroid carcinoma (ICD-O 8260/3): oxyphilic follicular cells lining papillary structures and an infiltrative lymphocytic stroma, with lymphatic invasion of the tumor (Figs. 1 and 2 ). One of 5 dissected lymph nodes was diagnosed as metastatic. The diagnosis was confirmed as stage I (pT1aN1aM0). The patient was discharged the next day. Medications with calcium and vitamin D were prescribed for 6 months. One month after the surgery, inpatient radioactive iodine (RAI) ablation was conducted with a 100 mCi dosage. Thyroxine withdrawal was used to achieve hypothyroidism; one month later, TSH equaled 75.0 mU/L. No other complications following RAI were detected. Five months after the surgery, laboratory tests were as follows: TSH—0.11 mU/L (reference 0.45–4.0), Tg—0.2 ng/ml (disease-free reference < 0.2), Tg Ab—20 IU/ml (reference < 40). No proof of recurrence or persistence of the disease was detected on US of the neck. L-thyroxine was prescribed to achieve a 0.5–2.5 mU/L TSH level. The five-year follow-up identified no recurrence, as confirmed by neck US with no evidence of structural disease, a Tg test (< 0.2 ng/ml), and a Tg Ab test (< 4 IU/ml). Suspicious US changes of the neck lymph nodes were also not detected (Fig. 3 ). The CARE guidelines were applied in writing this report.
Abbreviations WLV-PTC: Warthin-like variant of papillary thyroid carcinoma; DTC: differentiated thyroid cancer; US: ultrasound; TI-RADS: Thyroid Imaging Reporting and Data Systems; FNA: fine needle aspiration; RAI: radioactive iodine; Tg: thyroglobulin; PTC: papillary thyroid cancer; WHO: World Health Organization; TSH: thyroid-stimulating hormone; T4: thyroxine; TPO Ab: anti-thyroid peroxidase antibody; TT: total thyroidectomy; CLND: central lymph node dissection; RLN: recurrent laryngeal nerve; ICD-O: International Classification of Diseases for Oncology; Tg Ab: thyroglobulin antibodies; CARE: CAse REport guidelines Acknowledgements Not applicable. Author contributions AH contributed the idea of the manuscript. AB performed the surgery on this patient and contributed the thorough concept of the presentation. PB and VC made the histological conclusion and collected and presented the figures in this case report. Funding The authors confirm that they had no additional source of funding. Availability of data and materials The authors will provide all the data on request. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal. Competing interests The authors declare no conflict of interests.
CC BY
no
2024-01-16 23:45:35
J Med Case Rep. 2024 Jan 15; 18:17
oa_package/e2/3d/PMC10789079.tar.gz
PMC10789080
38225618
Background Alcoholic liver disease (ALD) is one of the most common liver-related conditions in numerous countries [ 1 – 3 ]. It results from excessive alcohol consumption exceeding a particular daily amount and has various manifestations, such as chronic hepatitis with liver fibrosis or cirrhosis [ 2 ]. ALD clinically encompasses a range of liver-related conditions starting with steatosis, which can advance to fibrosis and consequently result in cirrhosis in one out of every four patients who are frequent alcohol consumers [ 2 ]. According to the statistics, ALD-related mortality is more prevalent among men in the US. However, it has been demonstrated that ALD-related mortality in women occurs two to three years earlier than in men [ 1 ]. To date, the underlying mechanism of ALD has not been fully elucidated. However, it has been found that prolonged excessive alcohol consumption can result in the formation of oxidative stress through amplified metabolism via the cytochrome P450 2E1 system [ 3 ]. This phenomenon can lead to the production of reactive oxygen species (ROS) and protein and DNA adducts. These products induce inflammatory signaling pathways within the liver, resulting in the expression of pro-inflammatory mediators, which can mediate apoptosis and necrosis of hepatocytes [ 3 ]. Moreover, intra-hepatocyte mitochondrial stress resulting from ROS exposure can also cause structural and functional mitochondrial complications, leading to an increased incidence of apoptosis. Additionally, epigenetic regulation has also been reported to be directly affected by alcohol consumption. Elevated levels of histone acetylation and methylation and particular site-specific histone acetylation can impede various antioxidant pathways and reactions and induce the expression of important pro-inflammatory genes [ 3 ]. Early-stage ALD usually does not have any physical manifestations; ALD is commonly diagnosed when it has reached an advanced stage with apparent manifestations. However, routine medical examinations can be highly beneficial for the early diagnosis of ALD. Histologic tests as well as evaluation of the levels of liver enzymes are used as the common diagnostic methods [ 4 ]. The most important step in ALD treatment is refraining from further alcohol consumption. Moreover, liver transplantation is currently the only long-term management option available for individuals with decompensated liver cirrhosis [ 5 ]. Several types of medications, including corticosteroids, are also used under certain conditions [ 6 ]. Additionally, researchers have recently focused on monoclonal antibody therapeutics such as infliximab; however, the results regarding their clinical benefit are still unclear [ 7 ]. Berberine (BBr) is an alkaloid utilized as an herbal medicine for treating various types of health-related conditions, such as diarrhea, in traditional Chinese medicine [ 8 ]. Recently, various studies have highlighted the beneficial biological effects of berberine, which include antitumor effects, cardiovascular-protective properties, and anti-inflammatory activities [ 9 ]. To date, much research has focused on the protective effects of BBr in various types of hepatotoxicity at different experimental levels. For instance, Germoush et al. investigated the protective properties of BBr in alleviating cyclophosphamide-induced hepatotoxicity in mouse preclinical models [ 10 ].
According to their results, oral administration of BBr for several days after the administration of a single dose of cyclophosphamide improved the serum levels of various hepatic enzymes which had deviated from normal following the administration of cyclophosphamide [ 10 ]. These researchers also indicated that BBr demonstrates noticeable hepatoprotective behavior against drug-induced hepatotoxicity [ 10 ]. Additionally, an investigation by Knittel et al. also demonstrated that oral administration of BBr at doses of 25 and 50 mg/kg for 7 days can mediate hepatoprotective effects against methotrexate-induced liver toxicity [ 11 ]. Moreover, Wang et al. investigated the protective effects of BBr on liver fibrosis in rat models established using bile duct ligation (BDL) [ 12 ]. These researchers demonstrated that BBr prevents BDL-mediated hepatic fibrosis in these preclinical models [ 12 ]. However, it was also indicated that the antifibrotic properties of BBr in patients demand further investigation [ 12 ]. In 2020, Li et al. studied the mechanism by which BBr exerts its therapeutic effects on ALD linked to the gut microbiota-immune system axis [ 13 ]. These researchers established preclinical animal models of ALD, assessed various biochemical factors, and performed histological evaluations [ 13 ]. According to their results, they first reported the favorable therapeutic effects of BBr on “acute-on-chronic” alcoholic hepatic damage [ 13 ]. Furthermore, these researchers focused on the action mechanism related to the gut microbiota-immune system axis [ 13 ]. It was elucidated that BBr activates a group of immune cells called granulocytic-myeloid-derived suppressor cell (G-MDSC)-like cells and increases the population of these cells in both the liver and blood [ 13 ]. Furthermore, these cells ameliorated the alcohol-mediated hepatic damage in the liver of the studied animal models [ 13 ]. In addition, it was reported that BBr decreased the population of cytotoxic T cells [ 13 ]. Of note, these researchers added that inhibiting the G-MDSC-like cell population remarkably weakened the protective activity of BBr against alcohol [ 13 ]. Additionally, the findings of Zhang et al. demonstrated that BBr can reduce alcohol-induced oxidative stress by decreasing hepatic lipid peroxidation, glutathione depletion, and mitochondrial oxidative damage in mouse models [ 14 ]. In vivo assessments have highlighted the role of BBr in preventing ethanol-mediated oxidative stress and macrosteatosis. Briefly, BBr can inhibit the total cytochrome P450 2E1 activity or the mitochondrial cytochrome P450 2E1 activity. It has also been demonstrated that BBr can reduce the lipid accumulation in the liver induced by excessive alcohol consumption. Such findings suggest the capability of BBr to serve as a possible agent for preventing or managing ALD [ 14 ]. However, in addition to these preclinical studies, clinical trials are crucially required for investigating the therapeutic and hepatoprotective effects of BBr in liver-related conditions. To date, many researchers have used nanoparticles for the nanomedicine-based delivery of BBr for different aims based on their properties, including cancer therapy and antibacterial applications [ 15 – 18 ].
Studies have focused on enhancing various properties of BBr by loading it onto nanomedicine delivery platforms, since the promising potential of BBr is largely hampered by its poor aqueous solubility, strong hydrophobicity, low rate of absorption in the gastrointestinal tract, and rapid metabolism in the body [ 16 , 19 – 22 ]. Nanomedicine-based delivery systems aim to improve various properties of BBr to further support its applications. For instance, Wang et al. have reported that the antitumor activity of BBr remarkably increases when it is encapsulated in solid lipid nanoparticles [ 16 ]. Moreover, other researchers demonstrated that the bioavailability of BBr can be improved when it is loaded in chitosan (CS) nanoparticles, and these nanoparticles could exhibit enhanced anticancer activity against nasopharyngeal carcinoma cells [ 23 ]. Additionally, other researchers investigated BBr-loaded CS nanoparticles in scopolamine-induced Alzheimer’s-like disease preclinical rat models [ 24 ]. The results of this study demonstrated that CS nanoparticles enhanced the bioavailability, absorption, and brain drug uptake of BBr in rat models [ 24 ]. In addition, an in vitro experiment demonstrated that loading BBr in O-hexadecyl-dextran nanoparticles improved its cytoprotective properties; these nanoparticles can decrease the level of oxidative stress in rat hepatocytes at a concentration 20 times lower than free BBr and prevent high glucose stress [ 25 ]. Such studies suggest that nanotechnology-based platforms might improve the physical, chemical, and biological behavior of BBr and support its further applications. Different types of nanoparticles have been investigated for their applicability in ameliorating hepatotoxicity. For instance, Bhattacharjee et al. investigated the protective characteristics of selenium nanoparticles against hepatotoxicity and genotoxicity induced by cyclophosphamide in preclinical mouse models [ 26 ]. These researchers reported that intraperitoneal administration of cyclophosphamide was used for the establishment of the models and selenium nanoparticles were given by oral gavage. According to the results of this report, the delivery of nanoparticles decreased the levels of various hepatotoxicity markers including malondialdehyde (MDA), ROS, glutathione, and various antioxidant enzymes [ 26 ]. It also resulted in a reduced level of chromosomal abnormalities and DNA damage in bone marrow and lymphocytes [ 26 ]. Additionally, the protective effects of selenium nanoparticles were also observed in histopathological samples in preclinical models of hepatotoxicity [ 26 ]. In another study, by Tabbasam et al., the researchers investigated the protective effects of inorganic nanoparticle complexes against carbon tetrachloride-induced hepatotoxicity in preclinical mouse models [ 27 ]. In detail, these researchers generated three different types of nanoparticles (gold, silver, and zinc oxide), all loaded with doxorubicin. According to the results, nanoparticle-assisted delivery of doxorubicin resulted in a reduced level of liver fibrosis. Moreover, they reported that silver nanoparticles loaded with doxorubicin outperformed the other types of nanoparticles by keeping the levels of hepatic enzymes closest to those of the control group [ 27 ]. Overall, it was reported that drug-loaded silver nanoparticles demonstrated remarkable protective effects against carbon tetrachloride-induced hepatotoxicity [ 27 ].
Based on these findings, we proposed that nanoparticles may be beneficial for the treatment of hepatotoxicity. However, it is worth mentioning that there are also many studies reporting the induction of hepatotoxicity by nanoparticles themselves [ 28 – 30 ]. For instance, it has been reported that silver nanoparticles can mediate the emergence of hepatotoxicity in male rats [ 29 ]. In this regard, the researchers suggested Beta vulgaris (beetroot) water extract as a potential therapeutic intervention following the administration of silver nanoparticles for minimizing nanoparticle-induced hepatotoxicity [ 29 ]. Other researchers also demonstrated that the intraperitoneal administration of gold nanoparticles induces liver damage and produces oxidative stress and fatty acid peroxidation [ 28 ]. It was also demonstrated that melanin exhibits advantageous characteristics in reducing the liver toxicities induced by gold nanoparticles [ 28 ]. Such data suggest that even though nanoparticles can be used for alleviating liver toxicities, they can also be a source of various types of liver toxicities, which should be broadly investigated. CS-based nanoparticles have been broadly investigated as ideal delivery vehicles in many studies, owing to their capability for improving BBr bioavailability [ 23 , 31 ]. CS has chemical functional groups which can be efficiently modified for particular purposes [ 32 ]. Such properties render CS a polymer with a remarkable variety of possible functions. However, the gastrointestinal delivery of various types of cargo using nanoparticles is challenging, since a pH gradient exists along the gastrointestinal tract [ 33 ]. Therefore, different CS-based formulations that are responsive to specific pH conditions can be useful in this regard. CS nanoparticles formulated by ionic cross-linking with hydroxypropyl methylcellulose phthalate (HPMCP) have been investigated as pH-responsive nanoparticles for the targeted delivery of various types of cargo, and it has been demonstrated that they can be safe and efficient in terms of targeted drug delivery [ 34 ]. Herein, we generated CS nanoparticles cross-linked with tripolyphosphate (TPP; hereafter referred to as CS/TPP) or HPMCP (hereafter referred to as CS/HPMCP), loaded these nanoparticles with BBr (hereafter referred to as CS/TPP/BBr and CS/HPMCP/BBr, respectively), and assessed different aspects of these nanoparticles in vitro and in vivo.
Materials and methods Materials Low molecular weight CS was purchased from Primex Pharmaceuticals AG (Iceland). BBr chloride hydrate (catalog No. 14,050), pancreatin from porcine pancreas (catalog No. P3292), pepsin from porcine gastric mucosa (catalog No. P7000), dipotassium phosphate, diethyl ether, TPP, HPMCP, and 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) were purchased from Sigma-Aldrich (USA). Penicillin-streptomycin, phosphate buffer saline (PBS), trypsin 0.25%, dimethyl sulfoxide (DMSO), sodium hydroxide (NaOH), heat-inactivated fetal bovine serum (FBS), and Dulbecco’s Modified Eagle Medium/Nutrient Mixture F-12 (DMEM/F-12) were purchased from DNA Biotech Co. (Tehran, Iran). Formalin, hydrochloric acid (HCl), and glacial acetic acid were obtained from Dr. Mojallali Industrial Chemical Complex Co. (Tehran, Iran). Figure 1 represents the chemical structures of chitosan, HPMCP, TPP, and BBr used in the preparation of the nanoparticles. Cells and animals Mesenchymal stem cells (MSCs) were obtained from the Pasteur Institute of Iran (Tehran, Iran) and were cultured in low glucose DMEM supplemented with 10% (v/v) FBS. All the cells were cultured at 37 °C in a humidified atmosphere containing 5% CO 2 . A total of 45 male Wistar rats (200–250 g) were used in this study. The animals were obtained from the Center for Reproduction of Laboratory Animals at Shahroud University of Medical Sciences, were maintained under 12-hour light/12-hour dark cycles, and were given water and food ad libitum. All animal experiments were carried out in accordance with the regulations of Shahroud University of Medical Sciences, and this study is reported in accordance with the ARRIVE guidelines ( https://arriveguidelines.org ). Preparation of nanoparticles CS/HPMCP and CS/TPP nanoparticles were prepared using the ionic gelation method as previously described [ 35 , 36 ]. Briefly, 28 mg of CS was dissolved in 10 mL of acetic acid (2%) and the solution was placed on a magnetic stirrer until a transparent solution was obtained (pH 4.2–4.8). Separately, HPMCP (3 mg/mL; dissolved in 0.1 N NaOH) and TPP (2 mg/mL; dissolved in deionized water) solutions were prepared and stirred for 30 min (1200 rpm). Then, in separate preparations, the HPMCP and TPP solutions were added to the CS solution in a drop-wise manner (at a titration rate of 0.1 mL/minute) and the solutions were stirred (1500 rpm) at room temperature until a volume ratio of 5:2 for both CS:HPMCP and CS:TPP was obtained. In order to prepare nanoparticles with a similar size range, the nanoparticle-containing solutions were stirred for an additional 30 min (1500 rpm). For the preparation of CS/HPMCP/BBr and CS/TPP/BBr nanoparticles, 28 mg of CS was dissolved in 10 mL of acetic acid (2%) and the solution was placed on a magnetic stirrer until a transparent solution was obtained (pH 4.2–4.8). Then, 28 mg of BBr was added to the CS solution (hereafter the CS-BBr solution) and the solution was placed in an ultrasonic bath at 30 °C for 30 min. Of note, all of the experimental steps involving BBr were carried out in the dark, since BBr is a light-sensitive material. The CS-BBr solution was stirred at 30 °C for 60 min (1200 rpm) to obtain a transparent solution. Separately, HPMCP (3 mg/mL; dissolved in 0.1 N NaOH) and TPP (2 mg/mL; dissolved in deionized water) solutions were prepared and stirred for 30 min (1200 rpm).
Moreover, in separate preparations, the HPMCP and TPP solutions were added to the CS-BBr solution in a drop-wise manner (at a titration rate of 0.1 mL/minute) and the solutions were stirred (1500 rpm) at room temperature until a volume ratio of 5:2 for both CS-BBr:HPMCP and CS-BBr:TPP was obtained. In order to prepare nanoparticles with a similar size range, the nanoparticle-containing solutions were stirred for an additional 60 min (1500 rpm). Encapsulation efficiency and drug loading capacity The loading capacity and encapsulation efficiency of CS/HPMCP and CS/TPP nanoparticles were determined by measuring the amount of unentrapped BBr in the supernatant. Briefly, the generated CS/HPMCP and CS/TPP nanoparticles were separated by centrifugation (16,000 rpm, 30 min, and 4 °C) and the amount of free BBr in the supernatant was determined using the Bradford protein assay [ 37 ]. The loading capacity and encapsulation efficiency of BBr were calculated in their standard forms [ 38 ]: encapsulation efficiency (%) = [(total amount of BBr added − free BBr in the supernatant)/total amount of BBr added] × 100, and loading capacity (%) = [(total amount of BBr added − free BBr in the supernatant)/weight of the nanoparticles] × 100. Physicochemical characterization of nanoparticles Particle size, polydispersity index (PI), and zeta potential The particle size and PI of the freshly prepared CS/HPMCP and CS/TPP nanoparticles were determined by the Dynamic Light Scattering (DLS) method via a particle characterizer device (nanoPartica® SZ-100, Horiba, Japan). Moreover, the same device was also used for the calculation of the zeta potential of the nanoparticles. Fourier transform infrared (FTIR) Spectroscopy FTIR spectroscopy was carried out to determine any molecular interactions present between the formulation components using the Spectrum GX spectrophotometer (Perkin Elmer, USA). The spectra were collected in the spectral range of 400–4,000 cm − 1 with a resolution of 4 cm − 1 . Morphological analysis using scanning electron microscope (SEM) The morphology of the synthesized nanoparticles was assessed using SEM (Sigma 300-HV, Zeiss, Germany). Briefly, the synthesized nanoparticles were sputter-coated with gold and then analyzed. BBr release from CS/HPMCP/BBr and CS/TPP/BBr nanoparticles in the simulated gastric fluid (SGF) and simulated intestinal fluid (SIF) environments The effect of pH on the release of BBr from CS/HPMCP and CS/TPP nanoparticles in SGF (pH 1.2) and SIF (pH 7.4) was assessed at nine different time points (15, 30, 60, and 90 min, and 2, 3, 4, 6, and 8 h) [ 39 ]. Briefly, 2 mL of CS/HPMCP/BBr and 2 mL of CS/TPP/BBr nanosuspensions were separately placed in dialysis tubing (12 kDa) and then they were placed in 48 mL of SGF and SIF at 37 °C with gentle shaking (50 rpm). At the appropriate intervals, 2 mL of SGF and SIF was taken and replaced by fresh medium. The quantity of the released BBr in the withdrawn SGF and SIF samples was assessed using Ultraviolet–visible (UV-VIS) spectroscopy at the maximum wavelength (λ max ) according to the standard curve of BBr [ 40 ]. Of note, to avoid light-induced BBr decomposition, all of the mentioned steps were performed in the dark. Cell viability assay The MTT assay was used to investigate the possible cytotoxic effects of different concentrations of CS/HPMCP, CS/TPP, CS/HPMCP/BBr, and CS/TPP/BBr nanoparticles on the viability of MSCs. Briefly, 1 × 10 4 MSCs were seeded in different wells of a 96-well cell culture plate and then they were incubated at 37 °C for 24 h. Next, the cells were treated with different concentrations of each of the indicated nanoparticles (1, 0.5, 0.25, 0.125, 0.0625, 0.03125, and 0.015625 mg/mL) for 24 and 72 h.
Next, 10 μL of MTT solution (5 mg/mL) was added to each well, which was followed by a 4-hour incubation. Afterward, the medium was gently aspirated, 100 μL of pure DMSO was added to each well, and the absorbance was read at 570 and 690 nm using an ELISA reader [ 41 ] (Cytation 5, BioTek, USA). In vivo assays The animals were randomly categorized into nine experimental groups (n = 5) including [ 1 ] a control group (normal rats with no liver damage; named “control”), [ 2 ] a sham group (alcohol-receiving animals with no previous treatment; named “EtOH”), [ 3 ] alcohol-receiving, previously treated with CS/HPMCP/BBr nanoparticles (named “CS/HPMCP/BBr”), [ 4 ] alcohol-receiving, previously treated with CS/HPMCP nanoparticles (named “CS/HPMCP”), [ 5 ] alcohol-receiving, previously treated with CS/TPP/BBr nanoparticles (named “CS/TPP/BBr”), [ 6 ] alcohol-receiving, previously treated with CS/TPP nanoparticles (named “CS/TPP”), [ 7 ] alcohol-receiving, previously treated with free CS (named “CS”), [ 8 ] alcohol-receiving, previously treated with BBr (named “BBr”), and [ 9 ] alcohol-receiving, previously treated with free CS and BBr (named “CS/BBr”). For 21 days, CS/HPMCP/BBr, CS/HPMCP, CS/TPP/BBr, CS/TPP, CS, BBr, and “CS/BBr” suspensions (20 mg/kg) were given to the rats in the treated groups through the gastric gavage route. Two hours after this, 45% ethanol was given to the treatment groups (20 mg/kg) through gastric gavage. 24 h after the last treatment, the animals were anesthetized with an intraperitoneal injection (75–100 mg/kg of 10% ketamine and 10 mg/kg of 2% xylazine). Blood samples were quickly collected from the heart of the animals and the sera were obtained through centrifugation (2000 × g for 10 min at 4 °C using a refrigerated centrifuge). The obtained sera were stored at −80 °C for further analysis. Moreover, the livers of the animal models were excised, gently washed with physiological saline, placed in 40% formaldehyde solution, and kept for further assessments. Biochemical assessments The serum levels of aspartate aminotransferase (AST), alanine aminotransferase (ALT), alkaline phosphatase (ALP), and gamma-glutamyl transpeptidase (GGT) were determined using commercially available kits (ParsAzmoon, Tehran, Iran) according to the manufacturer’s instructions. Moreover, liver functionality was further studied by determining the relative concentrations of MDA and glutathione peroxidase (GPx) using the lipid peroxidation MDA assay and the GPx assay, respectively. Histopathological assays Hematoxylin and eosin (H&E) and Masson’s trichrome staining were used for the histopathological evaluation of the livers of the animals. In brief, livers were fixed using 10% formalin and were embedded in paraffin. 5 μm slices were prepared from the fixed tissues and were stained with H&E and Masson’s trichrome staining, separately. The prepared slides were analyzed under a light microscope. Statistical analysis One-way and two-way analysis of variance (ANOVA) was performed for statistical analyses using GraphPad Prism 9 (GraphPad Software, USA). A p -value < 0.05 was considered statistically significant.
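Two calculations in the protocols above benefit from a worked illustration: the cumulative release percentage, which must be corrected for the 2 mL of medium withdrawn and replaced at each sampling point, and the background-corrected MTT viability. The R sketch below uses invented numbers throughout; the concentrations, absorbances, and the assumed BBr dose (2 mL of a nanosuspension prepared from 28 mg BBr in 10 mL, about 5.6 mg) are placeholders, not measured data.

```r
# Cumulative release (%) corrected for sampled-and-replaced medium: the BBr
# removed in earlier withdrawals is added back to later readings.
cumulative_release <- function(conc, V_total = 48, V_sample = 2, dose_mg = 5.6) {
  withdrawn <- cumsum(c(0, head(conc, -1))) * V_sample  # mg removed so far
  released  <- conc * V_total + withdrawn               # mg in medium + removed
  100 * released / dose_mg
}
conc <- c(0.01, 0.03, 0.05, 0.06)  # hypothetical mg/mL at successive samplings
cumulative_release(conc)

# MTT viability (%): background-corrected absorbance relative to untreated controls.
viability <- function(a570, a690, ctrl570, ctrl690) {
  100 * (a570 - a690) / (ctrl570 - ctrl690)
}
viability(a570 = 0.82, a690 = 0.06, ctrl570 = 0.95, ctrl690 = 0.05)  # ~84%
```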
Results Mean particle size, PI, and zeta potential of the prepared nanoparticles The average size of the prepared nanoparticles before and after BBr loading was determined using DLS. In detail, CS/HPMCP/BBr and CS/TPP/BBr nanoparticles had an average size of 260 ± 23 and 198 ± 17 nm, respectively. Moreover, CS/HPMCP nanoparticles exhibited an average size of 245 ± 42 nm and CS/TPP nanoparticles had an average size of 172 ± 21 nm. According to the SEM images presented in Fig. 2 a and b, CS/HPMCP/BBr nanoparticles had a particle size ranging from 222 to 251 nm, with an average size of 235.5 nm. On the other hand, CS/TPP/BBr nanoparticles had a smaller particle size ranging from 145 to 194 nm, with an average size of 172 nm. In regard to the PI, CS/HPMCP/BBr and CS/HPMCP nanoparticles had a PI of 0.28 ± 0.024 and 0.25 ± 0.2, respectively. Moreover, CS/TPP/BBr and CS/TPP nanoparticles demonstrated a PI of 0.27 ± 0.02 and 0.27 ± 0.1, respectively. The zeta potential of CS/HPMCP/BBr nanoparticles was about 32 ± 0.25 mV (27 ± 0.35 mV for CS/HPMCP nanoparticles), whereas CS/TPP/BBr nanoparticles had a zeta potential of about 28 ± 0.31 mV (25 ± 0.41 mV for CS/TPP nanoparticles). FTIR The FTIR spectra of CS, HPMCP, TPP, and BBr, as well as of CS/HPMCP, CS/HPMCP/BBr, CS/TPP, and CS/TPP/BBr nanoparticles, are presented in Fig. 3 a and b. According to the results, CS has two strong peaks at 1596 cm − 1 and 1664 cm − 1 , which have been attributed to the CONH 2 and NH 2 groups. The shift of these peaks in the spectrum of CS/HPMCP nanoparticles in comparison with free CS is a sign of interaction between the NH 3 group of CS and the COO − group of HPMCP. This interaction is recognized through the strong decline of the amide band at 1655 cm − 1 . The broad peak at 3400 cm − 1 has been attributed to the stretching vibration of the NH 2 and OH groups. Such peaks are more visible and robust in the CS group, which is an indicator of strong hydrogen bonds; this peak can also be attributed to CH 2 interactions in the CS group. The shift from 1203 cm − 1 to 1240 cm − 1 in CS/TPP nanoparticles in comparison with the free CS group is an indicator of P-O interactions. Moreover, the shifts of the peaks from 1647 cm − 1 to 1738 cm − 1 and from 1588 cm − 1 to 1643 cm − 1 in CS/TPP nanoparticles in comparison with CS are due to C-O and N-H interactions, respectively. Moreover, BBr has sharp peaks in the 1500 cm − 1 to 3000 cm − 1 region. BBr is bonded to CS through forming amide bonds with carboxyl groups or by forming hydrogen bonds with carbonyl groups. The FTIR spectra of BBr, CS/HPMCP/BBr, and CS/TPP/BBr clearly demonstrate that CS and BBr are bonded to each other without any structural changes. Loading capacity and encapsulation efficiency In this experiment, the encapsulation efficiency was calculated for both CS/HPMCP and CS/TPP nanoparticles and was found to be 75.79% and 80.05%, respectively. Moreover, the loading capacity of CS/HPMCP and CS/TPP nanoparticles was calculated as 79.78% and 84.26%, respectively. In vitro assays Drug release assay In this experiment, we assessed the effect of pH on the ability of CS/HPMCP/BBr and CS/TPP/BBr nanoparticles to release BBr in in vitro settings of SGF (pH 1.2) and SIF (pH 7.4) at nine different time points (15, 30, 60, and 90 min, and 2, 3, 4, 6, and 8 h after the starting time of the experiment) [ 39 ]. According to the results presented in Fig. 4 a, the pH of the environment was a determining factor in the drug-releasing rate of the nanoparticles.
In detail, CS/HPMCP/BBr nanoparticles demonstrated a BBr release percentage of no more than 4.6% in the SGF environment; however, this pattern was remarkably different in the SIF environment, where CS/HPMCP/BBr nanoparticles demonstrated a BBr release of 43.2% after the first 2 h, reaching 81.6% eight hours after the start of the experiment. Statistical analyses indicated that the BBr release percentage of CS/HPMCP/BBr nanoparticles was significantly higher ( p < 0.0001) in the SIF environment than in the SGF environment at all of the investigated time intervals except the first 15 min. In regard to CS/TPP/BBr nanoparticles, the release pattern in the SGF environment was different. In detail, CS/TPP/BBr nanoparticles released 50.6% of the BBr in the first 2 h, reaching 58.3% eight hours after the start of the experiment. In the SIF environment, CS/TPP/BBr nanoparticles demonstrated a release of 69.6% in the first 2 h and of 77% after 8 h. In a similar fashion to CS/HPMCP/BBr nanoparticles, CS/TPP/BBr nanoparticles also exhibited significantly higher ( p < 0.0001) release rates in the SIF environment than in the SGF environment at all of the investigated time intervals except the first 15 min. These results indicate that, even though CS/TPP/BBr nanoparticles demonstrated a high release rate under both simulated conditions, both nanoparticles can control the release of their cargo along the gastrointestinal tract. Cell viability assay In this experiment, the MTT assay was used to determine the effects of BBr delivery using different concentrations of CS/HPMCP/BBr and CS/TPP/BBr nanoparticles on the viability of MSCs. Of note, according to the results of the loading capacity and encapsulation efficiency measurements, around 80% of the prepared CS/HPMCP/BBr and CS/TPP/BBr nanoparticles were loaded with BBr; therefore, around 80% of each of the indicated concentrations of these nanoparticles was drug-loaded. In the case of CS/HPMCP and CS/HPMCP/BBr nanoparticles (Fig. 4b), CS/HPMCP/BBr nanoparticles significantly elevated the cell viability level at all of the tested concentrations in comparison with CS/HPMCP nanoparticles at 72 h ( p < 0.0001 for all groups). The same pattern was also observed after 24 h, with the exception that the two lowest investigated concentrations of CS/HPMCP/BBr nanoparticles (0.015625 and 0.03125 mg/mL) did not significantly elevate the cell viability level in comparison with CS/HPMCP nanoparticles. Additionally, CS/TPP and CS/TPP/BBr nanoparticles behaved very similarly to CS/HPMCP and CS/HPMCP/BBr nanoparticles, respectively (Fig. 4c). In detail, after 24 h, concentrations of 0.0625 mg/mL and higher of CS/TPP/BBr significantly increased the cell viability rate in comparison with CS/TPP nanoparticles ( p < 0.0001 for all groups). Moreover, after 72 h, treatment with all of the concentrations of CS/TPP/BBr nanoparticles significantly increased the cell viability rate of MSCs in comparison with CS/TPP nanoparticles ( p < 0.0001 for all groups). These data indicate that long-term exposure of human MSCs to CS/HPMCP/BBr and CS/TPP/BBr nanoparticles does not negatively affect normal cellular functioning or viability.
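As context for the encapsulation efficiency, loading capacity, and cumulative-release figures reported above, these quantities are conventionally derived from the unencapsulated drug measured in the supernatant and from aliquot concentrations corrected for the sampled medium that is replaced. The sketch below shows these standard formulas; it is illustrative only, and every numerical input in it is a hypothetical placeholder rather than a value from this study.

```python
# Conventional formulas for encapsulation efficiency (EE), loading
# capacity (LC), and cumulative release with sampling correction.
# All input values below are hypothetical placeholders.

def encapsulation_efficiency(total_drug_mg, free_drug_mg):
    """EE% = encapsulated drug / total drug added * 100."""
    return 100.0 * (total_drug_mg - free_drug_mg) / total_drug_mg

def loading_capacity(total_drug_mg, free_drug_mg, nanoparticle_mass_mg):
    """LC% = encapsulated drug / nanoparticle mass * 100."""
    return 100.0 * (total_drug_mg - free_drug_mg) / nanoparticle_mass_mg

def cumulative_release(conc_mg_ml, v_total_ml, v_sample_ml, loaded_mg):
    """Cumulative % released at each sampling point, correcting for the
    drug removed with every aliquot and replaced with fresh medium."""
    released, removed = [], 0.0
    for c in conc_mg_ml:
        in_vessel = c * v_total_ml           # drug currently in the medium
        released.append(100.0 * (in_vessel + removed) / loaded_mg)
        removed += c * v_sample_ml           # drug withdrawn with the aliquot
    return released

print(encapsulation_efficiency(10.0, 2.4))              # ~76 %
print(cumulative_release([0.05, 0.12, 0.20], 50, 2, 20.0))
```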
In vivo experiments Enzymatic analysis To investigate the protective effects of CS/HPMCP/BBr and CS/TPP/BBr nanoparticles against alcohol-induced hepatotoxicity, the levels of hepatic enzymes and oxidative stress markers (AST, ALT, ALP, GGT, GPx, and MDA) were evaluated in each of the experimental groups. According to our observations, alcohol administration resulted in significant deviations in these levels. In detail, the serum levels of AST, ALT, ALP, GGT, and MDA in the EtOH group were significantly higher than in the control group ( p < 0.0001 for all markers). Moreover, the level of GPx in the EtOH group was significantly lower than in the control group ( p < 0.0001; Fig. 5 ). These findings indicate that alcohol administration results in remarkable liver damage and, consequently, impaired liver function. Regarding the performance of the tested materials, only CS/HPMCP/BBr treatment prevented significant deviation in the levels of all of the tested markers relative to the control group after the animals were given alcohol. Moreover, CS/TPP/BBr treatment only prevented significant deviation in the levels of ALT, GGT, and MDA relative to the control group after alcohol administration, indicating a partial response in comparison with its counterpart CS/HPMCP/BBr. Histopathological analysis Microscopic evaluation of the liver tissues via H&E staining in the control group demonstrated that the liver lobules had a clear structure, with the hepatic cords arranged from the central veins to the periphery. However, in the EtOH group, the findings indicated an abnormal lobular structure of the liver, an irregular structure of the hepatic cords, and interstitial infiltration of inflammatory cells. In the CS group, destruction and disorganization of the hepatic cords and infiltration of inflammatory cells were observed, similar to the EtOH group. The same results were observed in the CS/HPMCP and CS/TPP groups. Therefore, it is safe to conclude that CS, HPMCP, and TPP alone do not provide any remarkable protective effects. In the CS/HPMCP/BBr group, the infiltration of inflammatory cells, the destruction of liver cells, and the disruption of the hepatic cords were remarkably prevented. In the CS/TPP/BBr group, relatively more tissue damage was observed than in the CS/HPMCP/BBr group. Additionally, in the CS/BBr group as well as the BBr group, disorganization of the hepatic cords, infiltration of inflammatory cells, and destruction of liver cells were observed (Fig. 6 ). Additionally, the protective effects of the nanoparticles were investigated using Masson’s trichrome staining (Fig. 7 ). In detail, in the liver tissue samples of the control group, no collagen fibers were seen around the central vein, and the cells and hepatic cords had a preserved structure. On the other hand, in the EtOH group, an abnormal liver lobular structure, an irregular structure of the hepatic cords, and abundant collagen fibers around the central vein were documented. Similarly, in the CS, CS/HPMCP, and CS/TPP groups, destruction and irregularity of the hepatic cords, infiltration of inflammatory cells, and numerous collagen strands were present, suggesting no remarkable protective effects for CS, HPMCP, and TPP as single agents and supporting the H&E staining results.
In the CS/HPMCP/BBr group, the disorder in the hepatic cords was significantly reduced, the endothelial cells were placed side by side, and collagen layers around the central vein were absent. Similar results were observed in the CS/TPP/BBr group; however, some destruction and damage were still evident and collagen fibers were present. Also, in both the CS/BBr and BBr groups, destruction of liver cells, collagen fibers, and greater disarray of the hepatic cords were observed.
Discussion Chronic alcohol consumption is a major global public health challenge and is known as one of the main risk factors for non-communicable diseases [ 42 ]. Since alcohol is mainly metabolized in the liver, this organ is considered one of the main sites of damage after alcohol consumption [ 43 ]. Liver diseases caused by alcohol consumption include steatosis, steatohepatitis, cirrhosis, and hepatocellular carcinoma [ 43 ]. Ethanol metabolism in the liver increases the level of ROS, which leads to a redox imbalance. In this regard, antioxidants could be considered a solution for restoring this balance [ 44 – 46 ]. Medicinal antioxidants could have beneficial effects in reducing the occurrence of ethanol-induced changes in cellular lipids, proteins, and nucleic acids, and can act as boosters of natural antioxidant defenses by trapping free radicals and interrupting the peroxidation process [ 45 , 47 – 49 ]. BBr is a yellow alkaloid present in numerous plants, including barberry [ 50 , 51 ]. Accumulating evidence suggests that BBr has numerous properties, including anti-inflammatory, antioxidant, anti-convulsant, anti-depressant, anti-Alzheimer, anti-cancer, anti-arrhythmic, anti-viral, anti-bacterial, and anti-diabetic activities [ 52 – 54 ]. Also, BBr can reduce the toxicity of chemical toxins in the brain, heart, kidney, liver, and lung through its antioxidant, anti-inflammatory, and anti-apoptotic properties, and through modulation of the mitogen-activated protein kinase (MAPK) and nuclear factor κB (NF-κB) signaling pathways [ 14 , 55 , 56 ]. However, the low bioavailability, absorption, and solubility of BBr are the major known obstacles to its systemic administration. In fact, only 0.5% of ingested BBr is absorbed in the small intestine, and this amount decreases to 0.35% by the time it enters the bloodstream. In this regard, it is believed that nano-based formulations are ideal candidates to increase the absorption rate of BBr, since nano-scale compounds can be absorbed in the intestine at the desired speed and concentration [ 57 ]. CS nanoparticles have attractive biological properties, such as non-toxicity, biocompatibility, biodegradability, mucosal adhesion, and the ability to penetrate through epithelial tight junctions; therefore, they are considered suitable carriers for drug delivery, including oral drug delivery [ 32 , 58 , 59 ]. It should also be stated that CS nanoparticles are easily decomposed and dissolved under the acidic conditions of the stomach. CS nanoparticles formulated with HPMCP (a pH-sensitive polymer) have good film-forming properties, rapid dissolution at intestinal pH, and stable physical and chemical properties. Hydroxypropyl methylcellulose (HPMC) acts as the backbone and is esterified with phthalic anhydride. The resulting polymer is relatively insoluble in water and in the stomach, and it can swell and dissolve quickly in the upper part of the intestine. This polymer provides superior acid stability and demonstrates enhanced adhesion and intestinal penetration capacity compared with the CS/TPP form [ 34 ]. The pH-sensitive HPMCP polymer is used as a cross-linker to stabilize CS nanoparticles under the acidic conditions of the stomach and to enable them to release their cargo in the intestinal environment at pH ≥ 5.5.
HPMCP is a pH-sensitive polymer (the critical decomposition pH can be controlled by the phthalate content) that protects any drug loaded into nanoparticles under the acidic conditions of the stomach and releases it in the intestine. One way to reduce side effects in the digestive system is to use polymers such as HPMCP, since such polymers can induce the release of loaded drugs at the absorption site [ 34 ]. The aim of this study was to compare the protective effects of BBr-loaded CS/HPMCP and BBr-loaded CS/TPP nanoparticles and to study the cargo-releasing ability of these nanoparticles in the intestine, a highly important research topic given the challenge of excessive alcohol use and hepatotoxic disease. In this study, CS/HPMCP/BBr and CS/TPP/BBr nanoparticles were successfully prepared using the ionic gelation technique [ 60 ]. To prepare stable nanoparticles, the controllable synthesis conditions and the related formulation parameters were optimized. pH-sensitive polymers in the structure of nanoparticles can be used to prevent the premature release of drugs or active substances in the upper gastrointestinal tract. This strategy prevents the reduction of drug concentration in the intestine and, subsequently, the reduction of drug absorption and effectiveness. In the current study, CS nanoparticles were cross-linked using two different cross-linkers. The use of HPMCP in the synthesis of CS nanoparticles reduces the leakage of BBr in the upper part of the digestive tract and enables the specific delivery of BBr to the intestine. HPMCP, as a non-toxic pH-sensitive polymer with biodegradability and a good biocompatibility profile, has recently been used as a drug-coating material for enteral drug delivery [ 61 – 63 ]. This polymer is insoluble at gastric pH, while it is completely soluble at intestinal pH [ 63 , 64 ]. Hence, HPMCP was used as an enteric coating to resist gastric acid [ 63 , 64 ]. In this approach, drug dissolution starts in the small intestine, where the maximum absorption of the drug occurs. Studies on the release of BBr in the simulated environments of the stomach (pH 1.2) and the intestine (pH 7.4) demonstrated that pH had a strong effect on the release of BBr from CS/HPMCP/BBr and CS/TPP/BBr nanoparticles. Moreover, the release rate of BBr from CS/HPMCP/BBr nanoparticles at pH 7.4 was much higher than at pH 1.2, and no effective release was observed in the simulated gastric environment (pH 1.2). On the other hand, the release of BBr from CS/TPP/BBr nanoparticles in the SIF environment was only slightly higher than in the SGF environment; however, the difference was still significant. These observations demonstrate the inhibition of drug release at the acidic pH of the stomach, as the drug is not effectively released until the nanoparticles reach the intestine. A slow and continuous release of the drug from CS/TPP/BBr nanoparticles in the intestine was also observed. Investigation of cytotoxicity using the MTT assay demonstrated increased survival and proliferation of MSCs during treatment with various concentrations of CS/HPMCP/BBr and CS/TPP/BBr nanoparticles. MSCs were used as the target cell line for the cellular assessments of this experiment because we did not have access to a normal, nonmalignant liver cell line.
This factor can be considered one of the shortcomings of this study and requires further in-depth investigation. Furthermore, the administration of CS/HPMCP/BBr nanoparticles had significant protective and preventive effects against ethanol-induced hepatotoxicity in rats. This effect was confirmed by the improvement of macroscopic and histological damage and by the normal serum levels of AST, ALT, ALP, GGT, GPx, and MDA. Our results demonstrated the beneficial effects of CS/HPMCP/BBr and CS/TPP/BBr nanoparticles in preventing hepatotoxicity in rats in vivo. Twenty-one days of preventive oral administration of CS/HPMCP/BBr or CS/TPP/BBr nanoparticles mediated remarkable protection against hepatotoxicity, as determined by macroscopic, enzymatic, and histological examinations. Moreover, in this study, the levels of the oxidative stress markers MDA and GPx in the animals treated with CS/HPMCP/BBr nanoparticles showed no significant change compared with the control group, which indicates the remarkable protective effects of CS/HPMCP/BBr nanoparticles. Lipid peroxidation is the basic cellular damage process caused by oxidative stress and is considered a hallmark of oxidative stress, in which ROS interact with unsaturated fatty acids, leading to the formation of lipid products such as MDA that can damage cellular membrane components and cause necrosis and inflammation [ 65 – 67 ]. The present study demonstrated that CS/HPMCP/BBr nanoparticles mediate greater protective effects than CS/TPP/BBr nanoparticles. Also, CS/HPMCP/BBr and CS/TPP/BBr nanoparticles demonstrated superior protective effects in comparison with the bulk form of BBr. Our results are consistent with those reported by Li et al., who described the hepatoprotective properties of BBr based on antioxidant enzymes [ 68 ]. Moreover, Makhlof et al. reported the slow and controlled release of drugs loaded in CS/hydroxypropyl methylcellulose phthalate nanoparticles, as well as the stability of these nanoparticles at both intestinal and gastric pH [ 34 ]. According to the results of our study and other similar reports, CS/HPMCP/BBr nanoparticles can be described as suitable, affordable, and easily produced protective agents against alcoholic hepatotoxicity caused by continuous ethanol consumption. It is worth mentioning that further in vivo assessments could additionally support the protective effects reported in this study and elucidate the possible shortcomings and limitations of this method.
Conclusion In this study, CS/HPMCP/BBr and CS/TPP/BBr nanoparticles were prepared. The ideal biocompatibility of these nanoparticles, their low toxicity, pH sensitivity, and proper drug release characteristics make them suitable carriers for oral drug delivery as protective agents against ethanol-induced hepatotoxicity. Given the more favorable drug release profile of CS/HPMCP/BBr nanoparticles compared with CS/TPP/BBr nanoparticles at the intended target sites, CS/HPMCP nanoparticles are considered the more effective carrier for therapeutic substances, such as BBr, aimed at mediating protection against ethanol-induced hepatotoxicity. This study can also serve as a template for the preparation of nanoparticles suited to the delivery of various types of cargo that would be damaged by the low pH of the stomach upon oral administration but are intended to be absorbed in the intestine.
Background Alcoholic liver disease (ALD) is a globally critical condition with no efficient treatments available. Methods Herein, we generated chitosan (CS) nanoparticles cross-linked with two different agents, hydroxypropyl methylcellulose phthalate (HPMCP; termed CS/HPMCP) and tripolyphosphate (TPP; termed CS/TPP), and loaded them with berberine (BBr; referred to as CS/HPMCP/BBr and CS/TPP/BBr, respectively). Alongside the encapsulation efficiency (EE) and loading capacity (LC), the releasing activity of the nanoparticles was measured in simulated gastric fluid (SGF) and simulated intestinal fluid (SIF) conditions. The effects of the prepared nanoparticles on the viability of mesenchymal stem cells (MSCs) were also evaluated. Ultimately, the protective effects of the nanoparticles were investigated in ALD rat models. Results SEM images demonstrated that CS/HPMCP and CS/TPP nanoparticles had an average size of 235.5 ± 42 and 172 ± 21 nm, respectively. The LC and EE for CS/HPMCP/BBr were calculated as 79.78% and 75.79%, respectively, while the LC and EE for CS/TPP/BBr were 84.26% and 80.05%, respectively. pH was a determining factor for the release of BBr from CS/HPMCP nanoparticles, as a higher cargo-releasing rate was observed in a less acidic environment. Both BBr-loaded nanoparticles increased the viability of MSCs in comparison with their BBr-free counterparts. In vivo results demonstrated that CS/HPMCP/BBr and CS/TPP/BBr nanoparticles protected enzymatic liver function against ethanol-induced damage and prevented histopathological ethanol-induced damage. Conclusions Crosslinking CS nanoparticles with HPMCP can mediate controlled drug release in the intestine, improving the bioavailability of BBr.
Acknowledgements Not applicable. Author contributions MMK: Conceptualization, Methodology, Validation, Formal analysis, Investigation, Resources, Software, Writing - original draft, Writing - reviewing & editing. MA: Validation, Investigation, Project administration, Supervision. MM: Validation, Conceptualization, Investigation, Resources, Funding acquisition, Project administration, Writing - reviewing & editing, Supervision. Funding The present study was supported by Shahroud University of Medical Sciences as an MSc thesis. We hereby acknowledge the research deputy for Grant No. 14010018. Data Availability The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate All animal experiments were carried out in accordance with ARRIVE guidelines ( https://arriveguidelines.org ). The in vivo experiments were also performed according to the National Institutes of Health Guidelines for the Care and Use of Laboratory Animals and approved by the Institutional Animal Care and Use Committee of Shahroud University of Medical Sciences (IR.SHUM.REC.1401.081). Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-16 23:45:35
BMC Complement Med Ther. 2024 Jan 15; 24:39
oa_package/f9/6e/PMC10789080.tar.gz
PMC10789081
38226092
Introduction It has been nearly three years since the outbreak of COVID-19, which has infected more than 600 million people worldwide and killed more than six million [ 1 ]. The transmission of SARS-CoV-2 has been increasing due to continuous mutations, such as the Omicron variants, although their virulence has decreased [ 2 ]. Mild cases mostly present with upper respiratory symptoms such as nasopharyngeal discomfort and cough [ 3 ]. Although most cases are asymptomatic or mildly symptomatic, there is still a proportion of patients with significant lung damage, and even multiple organ dysfunction, who require hospitalization [ 4 ]. To improve the prognosis of these patients, timely screening and treatment are particularly important. Radiographic imaging, such as CT scans and X-rays, plays an important role in the screening of these patients according to WHO guidelines [ 5 ]. However, imaging equipment may not be available due to limited medical resources during the pandemic. Therefore, it is important to screen patients using other approaches. We attempted to develop machine learning models to predict the occurrence of lung injury by retrospectively analyzing existing clinical cases.
Materials and methods Study design and patients A retrospective analysis was performed of clinical data from adult patients with COVID-19 who were admitted to Hainan West Central Hospital in Danzhou, China, between August 2022 and September 2022. The data were collected from the electronic medical record system of Hainan West Central Hospital. Data included sex, age, vital signs on admission, comorbidities, imaging data, treatment, length of stay, and the results of the first laboratory tests after admission. Mild COVID-19 and moderate COVID-19 were diagnosed according to WHO guidelines [ 5 ]. Mild COVID-19 was defined as symptomatic patients meeting the case definition for COVID-19 without evidence of viral pneumonia or hypoxia. Moderate COVID-19 was defined as patients with clinical signs of pneumonia (fever, cough, dyspnea, and fast breathing) but no signs of severe pneumonia, including oxygen saturation (SpO 2 ) ≥ 90% on room air. A total of 231 patients with confirmed COVID-19 were included in the study. All patients had coronavirus polymerase chain reaction (PCR) tests confirming the Omicron variant (BA.5.1.3). Data collection Demographic characteristics of the patients, vital signs on admission, comorbidities, imaging data, disease type, vaccination status prior to onset, treatment after admission, length of stay, and the results of the first laboratory tests after admission were obtained through the electronic medical record system. Demographic characteristics included age, sex, height, and weight; vital signs at admission included blood pressure, heart rate, respiratory rate, and temperature; and underlying diseases included diabetes, cardiovascular disease, cerebrovascular disease, chronic lung disease, chronic liver disease, chronic kidney disease, solid tumors, hematologic diseases, and immunodeficiency diseases. Laboratory tests included routine blood tests, biochemistry, electrolytes, C-reactive protein (CRP), procalcitonin (PCT), coagulation tests, and D-dimer. Statistical analysis Data processing and analysis were performed using the Smart Research Online platform ( https://dxonline.deepwise.com /). Patients were grouped according to the COVID-19 severity classification. Categorical variables were compared using the chi-square test or Fisher's exact test and expressed as n (frequency). Continuous variables with a normal distribution were compared using a t-test and expressed as the mean ± standard deviation. Continuous variables with non-normal distributions were compared using the Mann-Whitney U test and are represented by the median and interquartile range (IQR). Spearman’s correlation coefficient was used for correlation analysis; correlations between the variables that differed significantly between the mild and moderate types were analyzed. P-values <0.05 were considered statistically significant. Variables that differed between groups were included in machine learning models, and predictive modeling was performed using naïve Bayes, linear discriminant analysis, support vector machine (SVM), least absolute shrinkage and selection operator (LASSO), and logistic regression (LR) models; the predictive efficacy of these models was then compared. Predictive efficacy was evaluated by receiver operating characteristic (ROC) curves, and the optimal model was selected based on sensitivity, specificity, and the area under the curve (AUC); an AUC >0.7 was considered to indicate a good model.
The calibration curve was used to assess the agreement between the predicted probabilities of the model and the actual probabilities.
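To make the workflow above concrete, a minimal sketch of the model-comparison step is given below using scikit-learn. This is not the authors' code (their analyses were run on the Smart Research Online platform), the feature matrix and labels are random placeholders, and L1-penalized logistic regression is used here only as a stand-in for the LASSO step.

```python
# Minimal sketch of the model-comparison workflow described above,
# using scikit-learn. Illustrative only: X (features) and y (labels,
# 1 = moderate COVID-19) are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import BernoulliNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(231, 8))          # placeholder predictor matrix
y = rng.integers(0, 2, size=231)       # placeholder severity labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "BNB": BernoulliNB(),
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
    "LR": make_pipeline(StandardScaler(),
                        LogisticRegression(penalty="l1", solver="liblinear")),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]   # probability of moderate type
    print(f"{name}: AUC = {roc_auc_score(y_te, prob):.3f}")
```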
Results Clinical and laboratory characteristics of COVID-19 patients on admission A total of 231 COVID-19 patients were included in this retrospective analysis. Among them, 152 (68.83%) were mild cases, 72 (31.17%) were moderate cases, and there were no severe or critical cases. On admission, there were no statistically significant differences between the two groups in terms of sex, weight, height, body temperature, systolic blood pressure (SBP), heart rate, days of positive nucleic acid testing, vaccination status, or underlying disease status (including diabetes mellitus, cardiovascular disease, cerebrovascular disease, chronic lung disease, chronic liver disease, chronic kidney disease, and solid tumor) (P >0.05). No significant difference was found between the two groups in the laboratory tests, including white blood cell count (WBC), lymphocyte count, hemoglobin level, platelet count, CRP, aspartate aminotransferase (AST), prealbumin, bilirubin level, and blood creatinine level (P >0.05). Patients in the moderate group were significantly older than those in the mild group (45.50 (31.25-61.50) years in the mild group versus 61.50 (52.00-70.00) years in the moderate group, P <0.05). Diastolic blood pressure (DBP; 72.72±10.23 versus 76.68±9.52 mmHg), respiratory rate (19.46±1.65 versus 20.21±1.53 breaths/min), PCT (0.02 (0.01-0.04) versus 0.02 (0.01-0.05) ng/mL), interleukin 6 (IL6; 10.80 (6.25-17.70) versus 13.80 (9.50-24.50) pg/mL), lactate dehydrogenase (LDH; 214.87±45.39 versus 229.15±48.92 U/L), glutamate transaminase (18.00 (13.00-28.00) versus 21.00 (16.00-30.00) U/L), and urea nitrogen (3.70 (2.90-4.60) versus 4.15 (3.33-5.13) mmol/L) were all higher in the moderate group (mild versus moderate values, P <0.05). In contrast, the albumin level was significantly lower in the moderate group (41.26±4.44 versus 39.06±4.26 g/L, P = 0.001) (Table 1 ). Developing a model to predict the occurrence of moderate COVID-19 Spearman's analysis showed that the correlations between all of the above differing variables were low (Figure 1 ), so these variables could be included in the analysis model. The variables that differed between the two groups were included in the Bernoulli Naïve Bayes (BNB), linear discriminant analysis, SVM, and LR models. By comparing the indicators between the models, we found that the LR model had the best sensitivity (0.653) and Youden's index (0.288) (Table 2 ), so the LR model was selected for modeling. The five variables with the highest feature weights in the obtained model were age, D-dimer, LDH, respiratory rate, and albumin (Table 3 ), and their univariate AUC values for the prediction of moderate COVID-19 were 0.714, 0.591, 0.589, 0.605, and 0.634, respectively (Figure 2a ). When these five variables were incorporated into the final LR model, the AUC, sensitivity, and specificity were 0.719, 0.681, and 0.635, respectively, for predicting the occurrence of moderate COVID-19 (Table 4 , Figure 2b ). To facilitate clinical application, the LR model was visualized using a nomogram: scores were assigned to each variable of the model and summed to calculate a total score reflecting the probability of moderate COVID-19 for each patient (Figure 3a ). Calibration curve analysis showed good agreement between the predicted probabilities of the model (predicted values) and the true probabilities (observed values) (Figure 3b ). We stratified the patients by age to assess the predictive efficacy of the model across age groups.
After grouping patients into age quartiles (<35, 35-52, 53-66, and >66 years), we found that the predictive efficacy of our LR model was better in the first three quartiles (≤66 years, AUC = 0.766, Table 5 ). The calibration curve showed good agreement between the predicted and observed values in this model (Figures 4a - 4b ). Model applications We used a nomogram and an online link to assist clinicians in performing rapid screening. The LR model was visualized and applied using the nomogram: the model assigns a score to each variable, and the scores are summed to give a total score reflecting the probability of moderate COVID-19 for each patient. To provide the patient's predicted outcome and the corresponding probability for clinical use, an online link was generated ( https://dxonline.deepwise.com/prediction/index.html?baseUrl=%2Fapi%2F&id=19350&topicName=undefined&from=share&platformType=wisdom ).
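To make the scoring concrete, the probability such a model returns is simply the logistic transform of a weighted sum of the five predictors; the nomogram's point scores are a graphical version of the same weighted sum. The sketch below shows this arithmetic with hypothetical coefficients, since the fitted weights are reported only through the nomogram and the online calculator.

```python
# How a fitted five-variable LR model turns patient values into a
# probability of moderate COVID-19. The intercept and coefficients
# below are hypothetical placeholders, not the study's fitted weights.
import math

coef = {                      # hypothetical weights
    "age_years": 0.045,
    "d_dimer_mg_l": 0.80,
    "ldh_u_l": 0.006,
    "resp_rate_bpm": 0.15,
    "albumin_g_l": -0.10,     # protective: higher albumin lowers risk
}
intercept = -6.0              # hypothetical

def moderate_probability(patient: dict) -> float:
    """Logistic transform of the linear predictor z = b0 + sum(bi * xi)."""
    z = intercept + sum(coef[k] * patient[k] for k in coef)
    return 1.0 / (1.0 + math.exp(-z))

patient = {"age_years": 65, "d_dimer_mg_l": 0.6, "ldh_u_l": 230,
           "resp_rate_bpm": 21, "albumin_g_l": 39}
print(f"P(moderate) = {moderate_probability(patient):.2f}")
```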
Discussion Due to the general vulnerability of the population to SARS-CoV-2 and multiple routes of transmission [ 6 ], the pandemic has not yet ended. Although vaccines can partially stop transmission [ 7 ], the virus can achieve immune evasion through continuous mutation [ 8 ], and some variants increase the transmissibility of the virus, especially the currently prevalent Omicron variant. However, due to differences in age, physiological status, immune status, and many other factors, patients present different manifestations when infected [ 9 ]. For example, most Omicron infections present with mild COVID-19 and only a small proportion of patients present with moderate or severe COVID-19. Identifying mild and moderate cases and implementing stratified management can save medical resources to a greater extent while enabling earlier treatment of moderate-type patients. Therefore, we attempted to analyze the clinical data of 231 patients with Omicron variant infection (both mild and moderate types) to identify the variables that differ between mild and moderate types, and based on the results, we have established a prediction model for moderate COVID-19. By comparing mild and moderate cases, we found differences in age, respiratory rate, D-dimer, LDH, and albumin between the two groups, suggesting that old age and hypoproteinemia may be risk factors for progression to moderate COVID-19, and elevated respiratory rate, D-dimer, LDH, AST, urinary creatinine, PCT, and IL6 may indicate the development of moderate COVID-19. Incorporating the above factors into LR modeling revealed that age, D-dimer, LDH, respiratory rate, and albumin had the highest characteristic weights in the model. The occurrence of moderate pneumonia was well predicted after modeling using the five variables mentioned above. After modeling patients aged ≤66 years, we found that age, D-dimer, LDH, respiratory rate, and albumin still had the highest characteristic weights in the model, and the model had better efficacy in predicting moderate COVID-19 with higher AUC values. We found that age had the highest weight value in the above model, with an AUC of 0.714 in the univariate prediction model, suggesting that old age was an important risk factor for progression to the moderate type in patients with mild disease. Early in the epidemic, Wu et al. found that older patients were more likely to develop acute respiratory distress syndrome (ARDS) [ 10 ]. An analysis of COVID-19 patients in 45 countries by O'Driscoll et al. found that these patients showed significant age-specific outcomes, with a log-linear increase by age among individuals older than 30 years [ 11 ]. According to the Centers for Disease Control and Prevention (CDC), the mortality rate for people aged >75 years is more than 100 times that for those aged 18-29 years [ 12 ]. This may be related to decreased immune function in elderly patients. With increasing age, the migration, differentiation, and cytokine production of innate immune cells are impaired or delayed, while adaptive immune B- and T-cell functions deteriorate [ 13 ], and the immune system's ability to resist viral replication and transmission decreases compared to that of younger patients. These changes may result in a significant increase in peak virus load [ 14 ], making elderly patients more vulnerable to lung and other organ involvement. By comparing mild and moderate COVID-19, we found that the D-dimer level was higher in moderate COVID-19. 
Elevated D-dimer levels are an important indicator in response to coagulation disorders, and in our study, D-dimer levels were found to be higher in moderate COVID-19 patients than in patients with mild COVID-19. Previous studies have found elevations in approximately 36% of patients with COVID-19 [ 15 ], and elevated levels are correlated with higher ARDS risk, disease severity, and mortality [ 16 , 17 ]. Coagulation disorders such as elevated dimers are associated with direct damage to multiorgan endothelial cells by the COVID-19 virus and the release of inflammatory factors such as IL6 caused by infection, leading to a hypercoagulable state, which can lead to an increased risk of thrombosis in the venous and arterial systems, as well as in the microvascular system of vital organs such as the lungs and kidneys [ 18 ]. An autopsy of patients who died of COVID-19 revealed the presence of diffuse thrombosis in capillaries within the lungs [ 19 ]. In COVID-19 patients with D-dimer elevation, anticoagulation therapy has been proven to improve the prognosis of these patients [ 20 ]. Therefore, D-dimer can be used as a good predictor of the severity of COVID-19 and to evaluate the effect of treatment. In our study, LDH levels were higher among moderate COVID-19 cases. Lactate dehydrogenase is an intracellular enzyme that maintains normal energy metabolism in the body with several isoenzymes, mainly in the heart, liver, kidney, lung, and striated muscle, and in the lung, mainly LDH-3 [ 21 ]. In COVID-19, damage to the lungs leads to more LDH release into the blood, causing an increase in LDH levels. In addition, the severe inflammatory response after viral infection can also damage the liver, heart, and other organs [ 22 ], which exacerbates the elevation of LDH. Henry et al. [ 23 ] found that elevated LDH was associated with a six-fold increase in the odds of severe and a 16-fold increase in mortality among COVID-19 patients through a pooled analysis of 1206 cases. Therefore, LDH can be used as an indicator to assess the severity of COVID-19. In many clinical settings, hypoproteinemia is associated with increased severity and mortality [ 24 ], which is consistent with our findings: moderate COVID-19 cases had lower albumin levels than the mild type of COVID-19. A previous study found that hypoproteinemia increased disease severity and mortality. The probability of a poor prognosis was 70% in patients with hypoalbuminemia, compared to 24% in patients with normal albumin levels [ 25 ]. The potential mechanism of hypoalbuminemia associated with COVID-19 cases was thought to be related to direct viral damage, capillary leakage, and high protein catabolism due to a high inflammatory response [ 26 ]. In our results, the respiratory rate was significantly higher in moderate COVID-19 cases than in mild COVID-19 cases. The increase in respiratory rate reflects the aggravation of COVID-19. 
In some lung disease assessment methods, such as CURB-65 (an acronym for confusion, uremia, respiratory rate, BP, age ≥ 65 years), the pneumonia severity index (PSI), and the ROX index (defined as the ratio of oxygen saturation as measured by pulse oximetry/FIO2 to respiratory rate) [ 27 , 28 ], and some systemic infectious disease assessment methods, such as the Sequential Organ Failure Assessment (SOFA) and acute physiology and chronic health evaluation (APACHE) II [ 29 , 30 ], respiratory rate was included as an important parameter, and an increase in respiratory rate may indicate the aggravation of pneumonia or systemic disease. Therefore, the inclusion of respiratory frequency in our model improved its accuracy. During the pandemic, it is particularly important to optimize the allocation of medical resources to treat focus groups due to the shortage of resources. Only symptomatic treatment is required for mild COVID-19, while further treatment and monitoring are required for moderate COVID-19. Our model could distinguish moderate COVID-19 from mild cases. Furthermore, we developed nomograms to make it easy for physicians to identify moderate cases so that they can receive treatment and monitoring earlier during the pandemic. In this study, we created a model to predict the occurrence of moderate COVID-19 from clinical data and verified the good predictive efficacy of the model. However, our study also had some limitations. First, the small sample size had an impact on the predictive efficacy of the model; second, the prediction model was internally validated, which placed limitations on the evaluation of model efficacy and required further external validation; third, our model had not yet addressed the prediction of prognosis such as mortality. In addition, our model was built based on a population infected by the Omicron variant, and the prediction performance for other variants needs further validation.
Conclusions In this study, we developed a logistic regression model to predict the occurrence of moderate COVID-19 and evaluated its predictive efficacy by ROC curve and calibration curve. Multiple variables, such as age, respiratory rate, D-dimer, LDH, and albumin, were included in the model. By combining these five variables, the model can accurately predict the occurrence of moderate COVID-19, especially for patients aged ≤66 years.
Background: Timely differentiation of moderate COVID-19 cases from mild cases is beneficial for early treatment and saves medical resources during the pandemic. We attempted to construct a model to predict the occurrence of moderate COVID-19 through a retrospective study. Methods: In this retrospective study, clinical data were collected from patients with COVID-19 admitted to Hainan West Central Hospital in Danzhou, China, between August 1, 2022, and August 31, 2022, including sex, age, signs on admission, comorbidities, imaging data, post-admission treatment, length of stay, and the results of laboratory tests on admission. The patients were classified into mild-type and moderate-type groups according to WHO guidance. Factors that differed between the groups were included in machine learning models, namely Bernoulli Naïve Bayes (BNB), linear discriminant analysis, support vector machine (SVM), least absolute shrinkage and selection operator (LASSO), and logistic regression (LR) models. These models were compared to select the optimal model with the best predictive efficacy for moderate COVID-19. The predictive performance of the models was assessed using the area under the curve (AUC), sensitivity, specificity, and calibration plots. Results: A total of 231 patients with COVID-19 were included in this retrospective analysis. Among them, 152 (68.83%) were mild types, 72 (31.17%) were moderate types, and there were no patients with severe or critical types. A logistic regression model combining age, respiratory rate (RR), lactate dehydrogenase (LDH), D-dimer, and albumin was selected to predict the occurrence of moderate COVID-19. The receiver operating characteristic (ROC) curve showed that the AUC, sensitivity, and specificity of the model were 0.719, 0.681, and 0.635, respectively, in predicting moderate COVID-19. Calibration curve analysis revealed that the predicted probability of the model was in good agreement with the true probability. Stratified analysis showed better predictive efficacy after modeling for people aged ≤66 years (AUC = 0.7656) and a better calibration curve. Conclusion: The LR model combining age, RR, D-dimer, LDH, and albumin can predict the occurrence of moderate COVID-19 well, especially for patients aged ≤66 years.
We would like to thank the clinical care teams involved in the care of the patients, as well as the microbiology laboratory team at Hainan Western Central Hospital. The research was supported by the Shanghai Jiao Tong University "Jiao Tong University Star" Program "Medicine-Engineering Interdisciplinary Research Fund" (YG2021QN79) and the Clinical Management Optimization Project of Municipal Hospitals (SHDC22022206).
CC BY
no
2024-01-16 23:45:35
Cureus.; 15(12):e50619
oa_package/25/92/PMC10789081.tar.gz
PMC10789089
0
Fluoride ion batteries (FIB) are a promising post lithium-ion technology thanks to their high theoretical energy densities and Earth-abundant materials. However, the flooded cells commonly used to test liquid electrolyte FIBs severely affect the overall performance and impede comparability across different studies, hindering FIB progress. Here, we report a reliable Pb-PbF 2 counter electrode that enables the use of two-electrode coin cells. To test this setup, we first introduce a liquid electrolyte that combines the advantages of a highly concentrated electrolyte (tetramethylammonium fluoride in methanol) while addressing its transport and high-cost shortcomings by introducing a diluent (propionitrile). We then demonstrate the viability of this system by reporting a BiF 3 –Pb-PbF 2 cell with the highest capacity retention to date.
Achieving net-zero emissions by 2050 relies on the electrification of various sectors. This in turn requires batteries with higher energy densities which are free from expensive and critical battery minerals such as cobalt and lithium. 1 , 2 The fluoride-ion battery (FIB) is a promising post-lithium chemistry that has the potential to satisfy both the energy density and sustainability requirements. In conversion-type FIBs, the electrodes undergo multielectron reactions, and charge neutrality is maintained by shuttling a monoanionic charge carrier (F – ) through the electrolyte. 3 High theoretical energy densities—on the order of ∼600 Wh kg –1 —could be obtained owing to the high oxidative stability of F – ions, which enables the use of high-voltage redox pairs. 1 Additionally, the smaller charge density of the fluoride ion compared to that of traditional cationic charge carriers should provide favorable electrolyte transport properties. 4 Moreover, the global production of fluorides is over 60 times larger than that of lithium with a large and well-established supply chain. 5 , 6 The advancement of fluoride-ion batteries, however, has been hindered by several obstacles. One of these is the lack of a realistic and reproducible testing setup. 7 Flooded cells (also known as beaker cells) are commonly used to test the electrochemical properties of liquid electrolyte FIBs. However, they impede the realistic assessment of performance given the large excess of electrolyte, the absence of a separator and, in many cases, the use of Pt as the counter electrode, which inevitably results in electrolyte decomposition during cycling. 7 − 10 The use of more realistic form-factors such as coin and pouch cells has been hindered by a number of problems including the lack of a reliable counter electrode, equivalent to Li metal in lithium-ion batteries (LIBs). Such an electrode would allow for a more rapid, reproducible, and representative investigation of electrochemical performance, reaction mechanisms, and degradation pathways, for both state-of-the-art and novel FIB materials. Yaokawa et al. pointed out the problems associated with the use of flooded cells and used a two-electrode setup to cycle BiF 3 vs Pb. However, their cell exhibited low discharge capacity (110 mAh g –1 on first discharge) and poor cycling (∼0 mAh g –1 on fifth cycle), partly because of the poor performance of the Pb counter electrode. 7 Other studies with two-electrode cells have also resulted in poor performance. 11 − 13 To design a stable counter electrode, Nowroozi et al. proposed the use of intercalation compounds, particularly La 2 CoO 4 , because of the lower volume changes during cycling (compared to those of conversion-type electrodes), which might provide a more reliable performance. However, the presence of an unknown decomposition reaction in the first cycle, the laborious synthesis procedure of La 2 CoO 4 , and its poor cyclability make it an unsuitable option. 14 Most of these problems do not only apply to La 2 CoO 4 but also to intercalation compounds in general. 15 − 17 A good counter electrode should exhibit a stable potential during cycling so that any variations in voltage can be attributed to the working electrode. Additionally, it should provide a readily accessible reservoir of F – ions to compensate for irreversible losses caused by the formation of solid electrolyte interphases or the decomposition of the electrolyte. 
Finally, the counter electrode should exhibit (electro)chemical stability toward F – in the electrolyte. In this work, we report a novel method to produce reliable and chemically stable Pb-PbF 2 electrodes featuring a dry-process in the presence of a polytetrafluoroethylene (PTFE) binder. We then introduce a novel liquid electrolyte consisting of highly concentrated tetramethylammonium fluoride in methanol, with propionitrile as the diluent, to demonstrate their suitability as counter electrodes in a two-electrode coin cell setup. Finally, by combining the counter electrode and the electrolyte, we report the best capacity retention for BiF 3 vs Pb-PbF 2 full cells to date. The PbF 2 /Pb redox couple showcases a range of advantages that led to their extensive investigation as active materials in FIBs. These attributes also make them highly suitable for use as a counter electrode. First, the low melting point of Pb (327.5 °C), which facilitates metal crystallization, and the higher ionic conductivity of PbF 2 at room temperature (10 –7 –10 –9 S cm –1 ) compared to other fluorides enable excellent conversion yields. 14 , 18 − 20 Additionally, the redox potential of PbF 2 /Pb (−0.15 V vs SHE) falls well within the electrochemical stability window of most FIBs electrolytes. Nevertheless, the cycling behavior of Pb-PbF 2 electrodes reported thus far is incompatible with their potential merit as stable and reliable counter electrodes, possibly due to the limited focus on their microstructural design. 9 , 21 , 22 A uniform and well dispersed mixture of Pb and PbF 2 is required to achieve a stable electrode potential, reduce the overpotential during cycling, and provide a reservoir of F-ions. 14 To achieve full conversion at high current densities, the particle sizes of Pb and PbF 2 should be minimized. 23 To accomplish this, PbF 2 was ball milled until the average particle size was reduced from ∼6 μm to ∼500 nm ( Figure 1 a,b). The ball-milled PbF 2 was then annealed at 350 °C to convert the orthorhombic α-crystal structure (10 –8 –10 –9 S cm –1 ) into the more ionically conductive cubic β-structure (10 –7 –10 –8 S cm –1 ) ( Figure 1 a,c). 19 The average particle size increase during heat treatment due to particle sintering (from ∼500 nm to ∼1.5 μm) is more than compensated by the 1 order of magnitude increase in ionic conductivity ( Figure 4 c, Figure S1 ). 19 Unfortunately, the order of the ball milling and annealing steps cannot be reversed, since the pressure exerted on the particles during the ball milling step converts the β-phase back to the α-structure ( Figure 1 c). 19 The minimum particle size of commercially available Pb powder is ∼45 μm, which unfortunately cannot be reduced by ball milling due to the ductility of Pb. Thermal decomposition of Pb(C 2 H 3 O 2 ) 2 was instead used to obtain Pb particles with an average size of 2 μm ( Figure 1 d,e). 24 PTFE was chosen as the binder because of its chemical compatibility with F-ions. 25 On the contrary, polyvinylidene fluoride (PVDF), the standard material used as the binder for FIBs, is known to degrade in contact with F-ions, undergoing dehydrofluorination to form HF, C=C bonds, or cross-links. 9 , 26 , 27 To fabricate the Pb-PbF 2 electrodes with PTFE as the binder, the dry casting manufacturing procedure rather than slurry casting is required, as PTFE is not soluble in common solvents at room temperature. 25 This method eliminates the need for a solvent evaporation step, leading to time and cost savings. 
28 Pb and PbF 2 powders were incorporated into the dry casting ( Figure 1 a) where PTFE fibrils created a 3-D network capable of holding the active material and the carbon nanofibers together ( Figure 2 a). 29 As for conductive carbon, carbon nanofibers were chosen due to their ability to interweave with the PTFE fibrils and the metal fluoride particles, providing electronically conductive pathways and structural support. 30 The cross section of Pb-PbF 2 electrodes obtained with focused-ion beam scanning electron microscopy (FIB-SEM) coupled with energy dispersive X-ray spectroscopy (EDS) ( Figure 2 b,c) demonstrates the uniform distribution of Pb, PbF 2 , and carbon. PbF 2 particles have an oval shape, whereas Pb particles appear very elongated as a result of the pressure applied during the dry processing and the high ductility of Pb. The morphology of Pb appears to be responsible for the electrode’s low porosity, potentially hindering complete wetting with viscous electrolytes. Both Pb and PbF 2 are well dispersed in the carbon matrix. The suitability of Pb-PbF 2 as a counter electrode in coin cells was validated by using a novel liquid electrolyte. Designing liquid electrolytes for FIBs is challenging owing to the limited solubility of fluoride salts in the aprotic organic solvents commonly used in Li-ion batteries. Protic organic solvents are particularly effective at dissolving fluoride salts thanks to their ability to form hydrogen bonds, but the very same hydrogen limits their cathodic (electro)chemical stability. In our previous work we have demonstrated the benefits of using solvent-in-salt electrolytes in FIBs to expand the electrochemical stability window of protic solvents by suppressing the number of free solvent molecules as well as minimizing HF formation. 31 This strategy, however, requires an impractically large amount of salt and results in high viscosity, which can negatively impact the transport and wetting properties. The use of a diluent (propionitrile, PN) was therefore demonstrated herein, for the first time in an FIB, to drastically reduce the quantity of salt needed and to improve the transport properties of a highly concentrated solution (10 m) of tetramethylammonium fluoride (TMAF) in methanol (MeOH). Propionitrile was selected because of its wide electrochemical stability window (>5 V), low viscosity (0.399 mPa.s at 25 °C), miscibility with alcohols, and high boiling point (97 °C). 32 , 33 Additionally, propionitrile does not dissolve TMAF ( Figure S4 ) and is chemically stable toward fluoride ions. 34 A ratio of 5 wt % MeOH/95 wt % PN resulting in a TMAF concentration of 0.5 m was chosen to investigate how a high diluent concentration affects the electrolyte properties. A combination of spectroscopic and computational characterization methods was employed to investigate the properties of the liquid electrolyte, in particular the solvation of fluoride ions and the role of the diluent. 1 H NMR shows a 2.3 ppm shift to higher frequencies in the peak corresponding to the -OH proton between highly concentrated TMAF in MeOH and pure MeOH ( Figure 3 a). This indicates that fluoride ions are preferentially solvated by methanol via OH··· F – hydrogen bonding ( Figure 3 b). The relative bonding distance between the hydroxylic proton in methanol and the fluoride ions was calculated to be 1.58 Å by Monte Carlo simulations ( Figure 3 b, S13 ), in agreement with previously calculated bond distances. 
35 Within the first solvation shell, TMA + counterions are found at a distance of 2.55 Å from F – , due to the strong Coulombic interactions and the tendency to form ion-pairs and aggregates in highly concentrated solutions ( Figure 3 b). This model indicates an average of approximately two methanol molecules (∼2.3 ± 0.1) interacting with a fluoride ion, and this number decreases to 0.5 ± 0.1 when propionitrile is added, due to the competing interaction arising between PN and MeOH and the large excess of the former ( Figure 3 b,c). As a result of the insolubility of TMAF in PN, propionitrile acts as a diluent by interacting with F-ions via weaker van der Waals forces, which do not alter the distance between F – and MeOH, and reducing the viscosity of the highly concentrated electrolyte ( Figure 3 c, Figure S4 ). 36 In the diluted electrolyte, the reduced viscosity coupled with the lower number of coordinated MeOH molecules resulted in a 1 order of magnitude increase in the fluoride diffusion coefficient (1.4 × 10 –9 m 2 s –1 ) compared to the highly concentrated electrolyte (2.05 × 10 –10 m 2 s –1 ). Meanwhile, the ionic conductivity was still reasonably high at 7 mS cm –1 , decreasing from 28 mS cm –1 in the 10 m electrolyte due to the considerably lower number of charge carriers in the diluted electrolyte ( Figure 3 b,c). In addition to improving F-ion transport and decreasing the amount of salt required, the diluted electrolyte maintains the advantage of the highly concentrated electrolyte given the unaffected electrochemical stability window upon addition of PN ( Figure 3 d). The electrochemical behavior of the Pb-PbF 2 electrodes was then tested in symmetric coin cells via galvanostatic cycling and ex-situ XRD. A PTFE-based separator was used to ensure chemical stability. Since the cells were assembled in a 50% state of charge, the maximum capacity accessible on the first discharge is half the theoretical capacity (109 mAh g –1 ). Starting from the first charge, the maximum achievable capacity corresponds to the theoretical capacity of PbF 2 (218 mAh g –1 ). On the first discharge, 106 out of the 109 mAh g –1 is accessed ( Figure 4 a), suggesting that almost all of the active material undergoes the conversion reaction. On the first charge, the capacity reaches 211 out of the 218 mAh g –1 and then decreases to 202 mAh g –1 on the second discharge ( Figure 4 a,b). The XRD at the end of the first discharge confirms that the obtained capacity arises from the conversion to the Pb metal ( Figure 4 c). The cell retained 97.8% of its original capacity after the 30th cycle with a low overpotential of 30 mV on both charge and discharge ( Figure 4 a,b), outperforming previously reported Pb-PbF 2 electrodes, including in flooded cells. 9 , 14 , 21 , 22 The capacity retention obtained with the diluted electrolyte is higher than that achieved with the highly concentrated electrolyte (78.5% at the 30th cycle) ( Figure 4 d,e), possibly due to the lower active material dissolution. Inductively coupled plasma spectroscopy (ICP) results show that PbF 2 and Pb are soluble in methanol up to 31.5 and 14.6 ppm, respectively, whereas neither of the two is soluble in PN (<0.01 ppm) ( Figure 4 f). Additionally, the 5 wt % MeOH in the diluted electrolyte is not sufficient to cause any detectable dissolution, allowing for improved cycling performance in the diluted electrolyte compared to the highly concentrated one. 
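The theoretical gravimetric capacities quoted here for PbF2, and below for the BiF3 working electrode, follow directly from Faraday's law, Q = nF/(3.6 M) in mAh g−1. The quick check below reproduces the quoted 218 and 302 mAh g−1 figures to within rounding; the electron counts and molar masses are standard values rather than data from this work.

```python
# Consistency check of the theoretical gravimetric capacities via
# Faraday's law: Q [mAh/g] = n * F / (3.6 * M), where n is the number
# of electrons in the conversion reaction, F = 96485 C/mol, and M is
# the molar mass in g/mol.
F = 96485.0  # Faraday constant, C/mol

def theoretical_capacity_mah_g(n_electrons: int, molar_mass: float) -> float:
    return n_electrons * F / (3.6 * molar_mass)

# PbF2 + 2 e- -> Pb + 2 F-   (M = 207.2 + 2 * 19.00 = 245.2 g/mol)
print(f"PbF2: {theoretical_capacity_mah_g(2, 245.2):.0f} mAh/g")   # ~219
# BiF3 + 3 e- -> Bi + 3 F-   (M = 208.98 + 3 * 19.00 = 265.98 g/mol)
print(f"BiF3: {theoretical_capacity_mah_g(3, 265.98):.0f} mAh/g")  # ~302
```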
The presence of a reliable counter electrode and the realistic cell design allowed meaningful measurements of Coulombic efficiency, rarely reported for FIBs. The Coulombic efficiency of the cell with the diluted electrolyte started at 96% in the first cycle and, from the 15th cycle, plateaued at 101% ( Figure 4 b). This could be indicative of a parasitic reaction with the electrolyte. To prove the feasibility of Pb-PbF 2 as a counter electrode in coin cells, it was tested against BiF 3 working electrodes. On the first discharge, a capacity of 284 mAh g –1 out of the theoretical 302 mAh g –1 is achieved, suggesting that most of the BiF 3 is converted to Bi metal ( Figure 5 a,c). Upon recharge, a considerably lower capacity of 178 mAh g –1 is reached, indicating that not all of the Bi metal is converted back to BiF 3 ( Figure 5 a,b, Figure S12 ). Despite not achieving full conversion, BiF 3 cycles reversibly with a capacity retention of 173 mAh g –1 (61%) after 10 cycles, making this cycling performance the best for BiF 3 in an FIB ( Table S1 ). Similar to the Pb-PbF 2 symmetric cell, the Coulombic efficiency is over 100% and appears to plateau at 101% on the ninth cycle, suggesting some minor degradation processes are taking place ( Figure 5 b). The asymmetry between the charge and discharge profiles has already been observed by Yaokawa et al. and Okazaki et al., and a detailed mechanistic study would be required to elucidate its origin. 7 , 11 The coin cell setup presented in this work could offer an ideal configuration for conducting such an investigation through in-situ XRD. In conclusion, we present a novel manufacturing method for Pb-PbF 2 electrodes via a dry-casting process with PTFE as the binder and validate their use as counter electrodes in two-electrode coin cells using a novel diluted liquid electrolyte. With this electrolyte, we report, for the first time in FIBs, the use of a diluent (propionitrile) to improve the transport properties and reduce the amount of salt required by a highly concentrated electrolyte (tetramethylammonium fluoride in methanol), while retaining the wider electrochemical stability window. Finally, we demonstrate the suitability of Pb-PbF 2 as counter electrodes in coin cells by cycling versus BiF 3 electrodes, prepared via the same dry-casting process. The capacity retention obtained translates to the best electrochemical performance for this system to date. We believe that the introduction of a reliable counter electrode for a practical and accessible cell configuration (like coin cells) is a critical step toward the advancement of FIBs, as it will empower a more streamlined development and understanding of novel active materials and the investigation of degradation mechanisms. We hope that this work, despite requiring further optimization, could be of inspiration to the FIB community.
Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsenergylett.3c02228 . Experimental methods for electrode manufacturing and characterization, electrolyte synthesis and testing, and cell assembly; supplemental figures and tables including SEM images of PbF 2 after milling and heat treatment, XRD patterns during cycling, ESW of TMAF in MeOH, 1 H NMR of TMAF in PN, cycling stability and dQ/dV plots for the counter electrode, electrochemical impedance spectroscopy characterization and fitting, cycling behavior of BiF 3 vs Pb-PbF 2 , Lennard-Jones parameters, and radial partial distribution functions ( PDF ) Supplementary Material The authors declare no competing financial interest. Acknowledgments This work was supported by the Henry Royce Institute (through UK Engineering and Physical Sciences Research Council Grant EP/R010145/1) for capital equipment. G.G. is grateful for the financial support from EPSRC. O.A. thanks the Rhodes Trust and the Saudi Cultural Bureau (SACB) for funding. C.D.M. acknowledges the UK Science and Technology Facilities Council (STFC) for the use of the SCARF computational facility and IDAaaS virtual machines for the Monte Carlo simulations. G.J.R. is grateful for the financial support from the Royal Society (SIF/R2/222003). G.G. is grateful to Bartholomew Payne for his help collecting the diffusion data and to Shobhan Dhir for his help measuring the density of the electrolyte. G.G. also thanks Krishnakanth Sada for the training on electrode dry processing and Hua Guo for his helpful conversations on NMR. C.D.M. thanks Maxim Zyskin and Thomas Headen for useful discussions on the Monte Carlo simulations.
CC BY
no
2024-01-16 23:45:35
ACS Energy Lett. 2023 Dec 8; 9(1):85-92
oa_package/79/db/PMC10789089.tar.gz
PMC10789090
38153981
Bambus[6]urils and biotin[6]urils are macrocycles with an exceptional affinity for inorganic anions. Here, we investigated statistical condensation of 2,4-dibenzylglycoluril and d -biotin, monomers of the corresponding macrocycles, to prepare the enantiomerically pure macrocycle 1 containing a single d -biotin and five glycoluril units. Host–guest properties of 1 in chloroform solution and solid state were investigated. The macrocycle 1 bearing a single functional group was employed in the formation of [1]rotaxane utilizing reversible covalent bonds.
Bambus[6]urils (bambusurils in short) are macrocycles consisting of six glycoluril units, which are connected by six methylene bridges ( Figure 1 c). 1 Bambusurils bind various inorganic anions inside their cavity via 12 C–H···anion hydrogen bonds. Therefore, bambusurils are usually compared to the hemicucurbituril 2 − 6 and biotin[6]uril ( Figure 1 a) 7 , 8 macrocycles, which also contain an ethyleneurea unit as a part of their building blocks and bind inorganic anions inside their cavity. Bambusurils are appreciated for their exceptional affinity and selectivity for many inorganic anions in organic solvents and water. For instance, water-soluble bambusuril derivatives bind small anions such as chloride and large anions such as iodide with millimolar and micromolar affinities, respectively, in water. 9 , 10 Thus, bambusurils are being investigated for use in many areas, including anion sensing, 11 , 12 anion transport through lipophilic membranes, 13 − 15 gold mining, 16 hydrogel preparation, 17 and others. The range of bambusuril applications can be broadened even further, as their supramolecular properties and solubility can be tuned by changing the substituents attached to the nitrogen atoms at their portals 1 or by substituting the oxygen atoms of the glycoluril building blocks with sulfur or nitrogen atoms. 18 − 21 Recently, we introduced monofunctionalized bambusurils ( Figure 1 c) and demonstrated their potential in the liquid–liquid extraction of anions, anion transport, and the construction of mechanically interlocked molecules. 22 − 24 The synthesis of monofunctionalized bambusurils is based on the statistical condensation of formaldehyde with a symmetrical glycoluril (such as 2,4-dibenzylglycoluril in Mono-BU , Figure 1 c) in the presence of a small amount of an unsymmetrical glycoluril (bearing a carboxyl group), which brings chirality into the system. To achieve enantiomerically pure monofunctionalized bambusurils, a single enantiomer of the unsymmetrical glycoluril must be used for the macrocyclization. However, the preparation of a single enantiomer of the glycoluril is rather difficult. 19 Therefore, we envisioned that, in the macrocyclization reaction, the unsymmetrical glycoluril could be replaced by d -biotin, which is commercially available as a single enantiomer. Similarly to the 2,4-disubstituted glycolurils used for bambusuril preparation, d -biotin contains two NH nitrogen functions, enabling its incorporation into the macrocycle. In this work, we report the preparation of monofunctionalized bambusuril 1 , in which one glycoluril is replaced by a d -biotin unit. The effect of this modification on the host–guest properties of 1 is investigated, as well as its use in the preparation of [1]rotaxane. The preparation of the macrocycle 1 was first tested following the reaction conditions optimized for previously published monofunctionalized bambusurils. 24 A mixture of d -biotin and 2,4-dibenzylglycoluril was heated with paraformaldehyde and sulfuric acid in dioxane ( Scheme 1 ), and the composition of the reaction mixture was followed by MALDI-TOF MS ( Figure S17 ). The spectra showed the presence of 1 accompanied only by dodecabenzylbambus[6]uril ( BnBU , Figure 1 b), even when a relatively high d -biotin:glycoluril ratio of 1:5 was used for the reaction. This is in contrast with our previous work, in which such a high content of the unsymmetrical glycoluril resulted in the formation of not only mono- but also di- and tri-substituted bambusurils.
22 − 24 We also observed that, in some macrocyclization attempts, a small fraction of the d -biotin sulfur atoms was oxidized to the sulfoxide during the reaction. Anion-free monofunctionalized bambusuril 1 was obtained in 35% yield after the crude mixture was boiled in an aqueous solution of NH 3 and purified by flash chromatography. Diffusion of diethyl ether vapor into a solution of 1 and tetrabutylammonium chloride in chloroform resulted in colorless monocrystals suitable for X-ray diffraction analysis. The crystal structure confirmed that the macrocycle consists of one d -biotin unit and five glycoluril units ( Figure 2 ). It also showed that the methine hydrogen atoms of the d -biotin unit point toward the cavity center and participate in the anion binding. As is typical for bambusuril complexes, 1 binds the chloride anion inside its cavity, where it is stabilized by 12 C–H···Cl – hydrogen bonding interactions with an average distance of 2.955 Å. One portal of 1 is occupied by a molecule of chloroform. The stabilization of the chloroform molecule is due to a C–Cl···Cl – halogen bond interaction (3.289 Å) with the bound chloride anion and a C–H···S hydrogen bond interaction (2.206 Å) with the sulfur atom of the d -biotin unit ( Figure 2 b). The second portal of 1 engulfs the carboxyl group of a second molecule of 1 , while the group interacts with the encapsulated chloride anion through COO–H···Cl – hydrogen bonding. Thus, the crystal structure comprises two molecules of 1 differing in their orientation that self-assemble into helix-like supramolecular polymeric chains. The supramolecular properties of 1 in solution were studied by 1 H NMR spectroscopy and isothermal titration calorimetry (ITC). First, we investigated a possible self-assembly of 1 similar to that observed in the solid state. 1 H NMR spectra of 1 in chloroform and acetonitrile at concentrations increasing up to 30 mM did not show any broadening or concentration-induced chemical shift changes. Similar results were obtained for the chloride complexes in chloroform, where the absence of self-assembly was further confirmed by diffusion-ordered spectroscopy experiments ( Figures S14–S16 ). Second, we wanted to evaluate the influence of the d -biotin unit on the host–guest properties of 1 . We previously reported association constants of BnBU complexes with various anions in chloroform. 25 BnBU differs from 1 just by a single dibenzylglycoluril unit, which is replaced by a d -biotin unit in 1 . Thus, we studied complexes of 1 with the model anions MeSO 3 – , Cl – , Br – , and I – in chloroform by ITC and compared them to the corresponding complexes of BnBU ( Table 1 ). The results showed a 1:1 binding stoichiometry for all of the investigated systems. Macrocycle 1 forms the weakest complex with MeSO 3 – , and the stability of its complexes increases for the halides in the order Cl – < Br – < I – . The differences in binding affinities can be explained by the anion solvation energy, which is significantly higher for MeSO 3 – and Cl – compared to Br – and I – . 25 All investigated host–guest events were driven by enthalpy, partially compensated by the entropic term. Similar host–guest characteristics were previously observed for the complexes of BnBU . The association constants of the 1 and BnBU complexes are relatively similar in magnitude. The largest difference was found for the complexes of MeSO 3 – , which is bound by BnBU about 5 times more strongly than by 1 . These results showed that incorporation of d -biotin into the bambusuril structure does not significantly influence its binding ability toward anions.
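To put these affinities on an energy scale (an illustrative conversion added here, not taken from the original), an association constant translates into a standard binding free energy via ΔG° = −RT ln K a . For the strongest complex reported in this work, iodide⊂ 1 with K a = 7.3 × 10 9 M –1 :

\[ \Delta G^{\circ} = -(8.314\ \mathrm{J\ mol^{-1}\ K^{-1}})(298\ \mathrm{K})\,\ln(7.3 \times 10^{9}) \approx -56\ \mathrm{kJ\ mol^{-1}} \]

i.e., roughly 4.7 kJ mol –1 per C–H···anion contact if the stabilization were apportioned evenly over the 12 hydrogen bonds.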
Recently, we used monofunctionalized bambusurils bearing a carboxyl function on their alkyl substituent for the construction of [1]rotaxanes utilizing a bis(acyloxy)iodate(I) reversible covalent bond. 24 Macrocycle 1 features a similar substituent as the monofunctionalized bambusurils. Therefore, we decided to demonstrate the potential of 1 by its transformation into an interlocked molecule. We studied the formation of [1]rotaxane in situ using 1 H NMR. Stoichiometric amounts of 1 and bis(acyloxy)iodate(I) were dissolved in CD 3 CN ( Figure 3 ), which resulted in significant changes in the 1 H NMR spectrum compared to both reagents. The most characteristic signal of the rotaxane formation was an upfield shift (Δδ = 0.6 ppm) of the methyl protons H(B) of bis(acyloxy)iodate(I). Furthermore, the methine signals of the macrocycle became sharper, more distinguishable, and significantly shifted from their original positions. Careful analysis of 1 employing ROESY measurements ( Figure S13 ) revealed that only six methine protons show cross peaks with the acetoxy methyl group protons. Although the complexity of the NMR spectra precluded assignment of the methine protons, we assume that the methyl group of the axle interacts with the six methine protons positioned in the lower part of the macrocycle. Similar cross peaks in ROESY spectra were observed for the previously published [1]rotaxane. 24 Furthermore, an upfield shift of proton H(A) of 0.8 ppm was observed after the addition of bis(acyloxy)iodate(I) ( Figure 3 ) as a consequence of folding of the aliphatic substituent into the cavity of the macrocycle. All the characteristics discussed above are in agreement with the formation of [1]rotaxane. In conclusion, we synthesized macrocycle 1 as the first hybrid of the bambus[6]uril and biotin[6]uril macrocycles. Selective arrangement of the building blocks into a macrocyclic structure resulted in a macrocycle containing one d -biotin and five glycoluril units. Furthermore, d -biotin introduced one carboxyl function, enabling selective functionalization of the macrocycle. This was demonstrated by the formation of [1]rotaxane. The binding affinity and selectivity of 1 toward inorganic anions were similar to those of BnBU . The iodide⊂ 1 complex was the most stable one, with an association constant of 7.3 × 10 9 M –1 in chloroform. In the solid state, molecules of 1 self-assemble into a helical supramolecular polymer through inclusion of the carboxyl group of one molecule into the portal of a neighboring molecule of the macrocycle. Our study also showed that the enantiomerically pure monofunctionalized bambusuril 1 can be used for the preparation of [1]rotaxane.
Data Availability Statement The data underlying this study are available in the published article and its Supporting Information . Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acs.orglett.3c03715 . General methods, synthesis of compounds, NMR spectra, MALDI TOF spectra, isothermal titration calorimetry (ITC) data, and crystallography data ( PDF ) Supplementary Material The authors declare no competing financial interest. Acknowledgments This work was supported by the Czech Science Foundation (No. 23-05271S). The authors thank the RECETOX Research Infrastructure (No. LM2018121) financed by the Ministry of Education, Youth and Sports and the Operational Programme Research, Development and Education (the CETOCOEN EXCELLENCE Project No. CZ.02.1.01/0.0/0.0/17_043/0009632) for supportive background. This project was supported by the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No. 857560. This publication reflects only the authors’ views, and the European Commission is not responsible for any use that may be made of the information it contains. We acknowledge Proteomic Core Facility of CIISB, Instruct-CZ Centre, supported by MEYS CR (LM2018127) and the National Infrastructure for Chemical Biology (CZ-OPENSCREEN, LM2023052). We thank Subhasis Chattopadhyay for his help with the DOSY experiments.
CC BY
no
2024-01-16 23:45:35
Org Lett. 2023 Dec 28; 26(1):106-109
oa_package/4c/be/PMC10789090.tar.gz
PMC10789091
38153354
We herein describe a diastereoselective Pd(0)-catalyzed Hiyama cross-coupling reaction of gem -difluoroalkenes. The use of organosilicon reagents in this reaction is advantageous over other organometallic reagents by allowing the introduction of a wide range of functional groups, including challenging alkyl groups. Also conveniently, the additive TBAF was not required for (hetero)aryl-substituted difluoroalkenes.
Since its seminal report in 1988, 1 the Hiyama cross-coupling reaction of organic halides with organosilicon reagents has become an indispensable tool for palladium-catalyzed C–C bond formation ( Scheme 1 a). 2 Compared to other cross-coupling protocols, the use of organosilicon reagents is attractive due to their stability and low toxicity. A wide range of organosilicon compounds are also commercially available and inexpensive because of the natural abundance of silicon. A nucleophilic activator such as fluoride is commonly employed in these reactions, as the C–Si bonds are less polarized and relatively inert. Despite the significant progress, the Hiyama cross-coupling of organic fluorides through C–F bond activation has remained a major challenge. The pioneering work from Ogoshi’s group 3 demonstrated that, by using perfluorinated compounds such as tetrafluoroethylene (TFE), hexafluorobenzene, or octafluorotoluene, the Hiyama coupling of a C–F bond could be achieved using arylsiloxanes under Pd or Ni catalysis ( Scheme 1 b,c). 4 On the other hand, the Hiyama cross-coupling of readily available gem -difluoroalkenes for the synthesis of valuable monofluoroalkenes was unknown. 5 We have a continuing interest in the stereoselective Pd-catalyzed C–F bond functionalization of tetrasubstituted gem -difluoroalkenes 1 . 6 In terms of Si-based reagents, we have reported the hydrodefluorination of 1 using the hydrosilane Me 2 PhSiH ( Scheme 1 d). 6c In this work, a novel Hiyama cross-coupling reaction of 1 for the stereoselective synthesis of tetrasubstituted monofluoroalkenes 2 is described ( Scheme 1 e). Compared to previous protocols using other organometallic reagents FG–[M], this method significantly enhanced the functional group (FG) tolerability owing to the use of organosilicon reagents. For instance, the Suzuki–Miyaura coupling of 1 using boronic acids (mainly aryl) did not allow the installation of alkyl groups. 6b The Stille coupling was specialized for the vinylation and allylation of 1 , with the disadvantage of using toxic organotin compounds. 7 In the Pd-free C–F bond functionalization using Grignard reagents, the reaction scope was broader, but the maximum yield could not exceed 50% because of the resolution nature of the reaction. 7b We began the optimization studies by using the tetrasubstituted gem -difluoroalkene 1a (β,β-difluoroacrylate) as a standard substrate and triethoxyphenylsilane (3.0 equiv) as the reagent for generating the monofluoroalkene product 2a ( Table 1 ). 8 In the previous Suzuki–Miyaura coupling of 1a with PhB(OH) 2 , the catalyst Pd(PPh 3 ) 4 was highly effective. 6b However, this catalyst gave only low yields in the Hiyama coupling (entries 1 and 2). Similar trends were observed for the Pd 2 (dba) 3 /dppe catalyst, which was effective for the Stille coupling of 1a with vinyl-/allyl-SnBu 3 (entries 3 and 4). 7a On the other hand, the Pd(dba) 2 /dppe catalyst gave 42% yield (entry 5). A dramatic increase in yield was observed upon adding TBAF (2.0 equiv) to the reaction mixture (entry 6). Furthermore, the Pd catalyst loading could be lowered to 5 mol %, offering 2a in 96% isolated yield as the E isomer with dr > 99:1 (entry 7). Reducing the organosilicon reagent to 1.5 equiv decreased the yield (entry 8). Other ligands with different carbon chains were screened for comparison, including dppm, dppp, and dppb, and all showed poorer reactivities than dppe (entries 9–11). Using Pd(dba) 2 alone without dppe was ineffective (entry 12).
Other fluoride additives were also screened, including KF, CsF, and AgF, and yields were not as good as with TBAF (entries 13–15). In the solvent screening, 1,4-dioxane and DMF were inferior to toluene for this reaction (entries 16 and 17). In all cases, only the E isomer of 2a was detected. The optimized Hiyama coupling conditions were applicable to the C–F bond functionalization of various tetrasubstituted gem -difluoroalkenes 1 ( Scheme 2 ). The reaction at the 1.0 mmol scale was demonstrated to provide 2a in 79% yield. Besides the phenyl group, vinyl and allyl groups could also be installed ( 2b , 2c ). More importantly, this method allowed the introduction of alkyl groups, including linear and branched carbon chains, in good yields ( 2d – g , 70–84% yield), which could not be achieved by previously developed protocols. 6 , 7 The vinylic substituent group R 1 could be varied, tolerating different alkyl and benzyl groups ( 2h – m ). The ester substituent group R 2 could also be changed to a bulkier tert -butyl group and did not affect the reaction ( 2n ). All these products ( E )- 2 were obtained with excellent dr (>99:1). The aryl-substituted gem -difluoroalkenes 1 are intrinsically more reactive than the alkyl-substituted counterparts. 6c In the Hiyama cross-coupling reaction, the phenyl substrate gave monofluorostilbene product 3a in an excellent yield of 91% even without the addition of TBAF ( Scheme 3 ). Similarly, vinyl ( 3b ), allyl ( 3c ), and primary ( 3d – f )/secondary ( 3g , 3h )/tertiary ( 3i ) alkyl groups could also be installed. Basic amine groups were compatible, albeit in lower yields ( 3j , 3k ). Substrates containing different aromatic ( 3l – p ) and ester ( 3q , 3r ) substituent groups were tolerated. A heteroaryl group (thienyl) was shown to be compatible ( 3s ). In all cases, only the E products were obtained (dr > 99:1). Drug molecule modification involving the C–F bond Hiyama cross-coupling was explored ( Scheme 4 ). Isoxepac ( 4 ) is a nonsteroidal anti-inflammatory agent (NSAID). It was converted to α-diazo ester 5 in two steps. The gem -difluoroalkene 6 could be obtained from 5 . The key Pd-catalyzed Hiyama reaction using an organosilicon reagent enabled the installation of the n -hexyl group diastereoselectively in product 7 . Overall, the monofluoroalkene motif was successfully introduced to isoxepac in a short sequence. This approach would be useful for drug discovery since monofluoroalkenes have been identified as peptide bond isosteres for pharmaceutical development. 5 The reaction was not limited to tetrasubstituted β,β-difluoroacrylates. The gem -difluoroalkene 8 containing an amide moiety also afforded the C–F bond-coupled product ( E )- 9 with excellent diastereoselectivity ( eq 1 ). Moreover, trisubstituted difluoroacrylate 10 smoothly provided the trisubstituted monofluoroalkene product ( E )- 11 ( eq 2 ). Control experiments were conducted to gain more insight into the reaction ( Scheme 5 ). The trisubstituted gem -difluoroalkene 12 derived from aldehyde was not reactive under the standard conditions, highlighting the importance of the ester group in 1 for activation ( Scheme 5 a). Different organosilicon reagents were compared in the vinylation of 1h ( Scheme 5 b), and the siloxanes were markedly more reactive than trichlorovinylsilane and triphenylvinylsilane. For the Hiyama cross-coupling of aryl-substituted 1 , no TBAF was needed for the reaction, which was more convenient and ensured functional group tolerability (cf. Scheme 3 ). 
In fact, in the allylation reaction of 1h , adding TBAF decreased the yield of the desired product 3c due to the generation of isomeric side product 3c′ ( Scheme 5 c). The isomerization of 3c (1,4-diene) could be triggered by simply adding TBAF and heat to provide 3c′ (1,3-diene) in good yield ( Scheme 5 d). 9 In conclusion, we have developed a stereoselective C–F bond functionalization utilizing the Hiyama cross-coupling reaction between tetrasubstituted gem -difluoroalkenes and organosiloxanes. The reaction enables the installation of various functional groups, including challenging alkyl groups, in the ( E )-monofluoroalkene products. This protocol significantly overcomes the scope limitations of previous coupling methods with other organometallic reagents.
Data Availability Statement The data underlying this study are available in the published article and its Supporting Information . Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acs.orglett.3c04037 . Experimental procedures, optimization data, characterization data, and spectral data ( PDF ) Supplementary Material The authors declare no competing financial interest. Acknowledgments This work was supported by the Research Grants Council of Hong Kong (CUHK 14304421) and the Chinese University of Hong Kong (Faculty of Science—Direct Grant for Research). We also thank the Key Laboratory of Organofluorine Chemistry, Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences, for funding.
CC BY
no
2024-01-16 23:45:35
Org Lett. 2023 Dec 28; 26(1):376-379
oa_package/77/10/PMC10789091.tar.gz
PMC10789092
38147458
A nickel-catalyzed reductive dimerization of bromocyclobutenes to produce unusual and unprecedented cyclobutene dimers was developed. In a stereoconvergent procedure, various bromocyclobutenes were readily dimerized in good yields, with good diastereoselectivities and broad functional group tolerance. Notably, the presence of a carbonyl group in the starting material appears to dictate diastereoselectivity.
Cross-coupling reactions catalyzed by transition metals have become some of the most valuable synthetic tools over the years. Specifically, the union of a carbon electrophile [usually in the form of an organo(pseudo)halide] and a carbon nucleophile (such as an organometallic species) has proven its merit in a variety of contexts. 1 − 4 In contrast, reductive electrophile–electrophile cross-couplings are less firmly established. In line with the contemporary trend to “escape flatland” (the planarity that commonly results from classical biaryl couplings), 5 C–C sp 3 couplings represent a particularly attractive scenario. However, such transformations also face the pervasive challenges of β-hydride elimination and hydrodehalogenation, which often hamper reaction success. 6 Nickel has proven to be an especially useful catalytic tool in this context, as its propensity to engage in single-electron transfer (SET) processes allows various oxidation states to be accessed. 7 , 8 Elegant nickel-promoted methods, including enantioconvergent coupling processes with C sp 3 electrophiles, have therefore been developed. 9 − 17 Cyclobutenes are important structural motifs present in a number of naturally occurring and biologically active compounds. 18 , 19 However, their synthesis remains challenging and is often planned late in a synthetic pathway because of their high propensity for electrocyclic ring opening. 20 , 21 Our group and others have developed valuable approaches to the synthesis of functionalized cyclobutenes 22 − 26 and successfully applied them in natural product synthesis. 27 − 30 In this context, 2-halo-cyclobutenes are particularly desirable building blocks. Thus, we have previously reported a Pd-catalyzed diastereodivergent de-epimerization of 2-chloro-cyclobutenes with malonate nucleophiles, which results in the formation of highly functionalized adducts I ( Scheme 1 left) and enables the construction of two new stereocenters. 24 In an effort to access even more complex cyclobutene scaffolds, we wondered whether a reductive dimerization of two cyclobutenes could lead to products such as II , carrying up to four contiguous stereocenters. Notably, II can be considered three-dimensional analogs of mono- ortho -substituted biaryls ( Scheme 1 right and bottom). Herein, we report a stereoconvergent protocol that, for the first time, allows access to bis-cyclobutenes using nickel catalysis. We began our investigations by examining the reaction of trans -bromocyclobutene 1a ( trans ) under a variety of conditions ( Scheme 2 ). After extensive experimentation, it was found that a catalytic amount of NiCl 2 ·DME at room temperature affords a mixture of exclusively two diastereoisomeric cyclobutene dimers ( 2aa and 2ab ; 2ac – 2af not observed) in good yield and stereoselectivity, with the cis,trans -cyclobutene dimer ( 2ab ) obtained as the major isomer, as confirmed by X-ray analysis (see the Supporting Information for more details on the preparation of 2sa ). 31 Subtle changes to the nickel source, as well as the reaction solvent or reductant, had a deleterious effect on either the diastereoselectivity or the conversion rate (see the Supporting Information for more details). Interestingly, it was also found that other halide analogs produced substantially worse outcomes, giving either lower yield and diastereoselectivity (iodocyclobutene) or a complete shutdown of reactivity (chlorocyclobutene).
Employing the cis -configured stereoisomer led to a slightly higher yield with comparable diastereoselectivity, thus supporting the notion that this process is stereoconvergent. To further verify this outcome and affirm the scalability of the reaction, we conducted the dimerization on a 2 mmol scale using trans -cyclobutene 1a and 5 mol % catalyst loading. The obtained yield was comparable with that of the small-scale reaction. Additionally, on a larger reaction scale (3.6 mmol), the same experiment was carried out using a mixture of trans/cis - 1a (see the Supporting Information ). With the optimized conditions in hand, we focused on exploring the generality of this process. As shown in Scheme 3 , various dimers of cyclobutenes bearing benzyl ester derivatives, both electron-rich ( 2b ) and electron-deficient ( 2c ) but also sterically hindered ( 2d ), and heterocyclic structures ( 2e ) were accessed in good yields and moderate to good diastereoselectivities. Alkyl cyclobutene esters reacted smoothly ( 2f – 2j ) and allowed even a bulky group ( tert -butyl moiety in 2f ) to be in close proximity to the reaction center ( Scheme 4 ). However, considering the results of substrates 2d and 2f , which show lower diastereomeric ratios with still comparable yields to the related benzyl ( 2a – 2c ) and alkyl esters ( 2g – 2 i ), it is clear that steric hindrance has a detrimental effect on the diastereoselective outcome of the reaction, suggesting a non-innocent role of the ester functionality. Importantly, dimer 2g , bearing TMSE (trimethylsilyl ethyl) esters, could be accessed—the presence of this group allows mild cleavage to reveal the cyclobutene carboxylic acid dimer. 24 Notably, alkenyl ( 2k – 2l ) and propargyl ( 2m ) esters, usually incompatible with state-of-the-art transition metal catalysts, were tolerated in this process. Similarly, cyclobutenes carrying phenyl esters were successfully dimerized ( 2n ), which allowed for the presence of other esters ( 2o ). Finally, our method could also be extended to thioesters ( 2p – 2r ), which showed excellent diastereoselectivities of up to 7.7:1 for the dimerized products. Next, we turned our attention to possible derivatization reactions. Surprisingly, heating substrates 2aa and 2ab to 70 °C almost exclusively resulted in the pure trans -tetraene 4 , a structure confirmed by X-ray analysis ( Scheme 4 A). Shorter reaction times or lower reaction temperatures allowed only small amounts of the ring-opened cyclobutene dimer to be formed, which demonstrates the unexpectedly high robustness of the dimer to thermal conditions. Importantly, small amounts of isomer 3 were observed to be derived from diastereoisomer 2aa , which can be taken as evidence of a rapid isomerization reaction of 3 to 4 . In addition, 2a underwent both hydrogenation of the double bond and hydrogenolysis of the benzyl groups to yield carboxycyclobutane dimer 5 , whereas selective ester cleavage of 2a was easily achieved, furnishing the corresponding cyclobutene carboxylic acid dimer 6 with no erosion of the diastereomeric ratio ( Scheme 4 B). 32 Finally, we turned our attention to the reaction pathway and the origin of the diastereoselectivity ( Scheme 5 ). Given the previously observed difference in diastereoselectivity between thioesters and esters (see Scheme 3 , with examples 2a and 2p or 2b and 2r ), we wondered whether the nature of the carbonyl group might facilitate temporary coordination to the catalyst during oxidative addition.
33 Hence, bromocyclobutene 7 , carrying a benzyl ether instead of an ester, was subjected to the optimized reaction conditions ( Scheme 5 A). The desired dimer 8 was obtained in moderate yield and with very low d.r. (nearly 1:1), which indeed suggests the involvement of coordination by the carbonyl group. Furthermore, the diastereomeric ratio of the dimerization of 1a to yield 2a was followed over time and shown to remain unchanged, thereby ruling out the possibility that an epimerization event leading to the observed diastereomeric ratios could occur after product formation (see the Supporting Information for more details). On the basis of these results and important precedents, 34 − 37 we propose a mechanism for this transformation in Scheme 5 B. At the outset, Ni(II) is reduced to Ni(0) to set the stage for oxidative addition of the bromocyclobutene, which forms an allylnickel(II) complex ( 10 ). In line with the result of Scheme 5 A, 33 the radical recombination of the nickel complex and the allyl radical seems more likely to result in a cis -configured cyclobutene, forming complex 11 as the major species because of the coordinating effect of the ester/thioester functionality. 38 Another reduction step then takes place to form the Ni(I) complex ( 12 ), and a second equivalent of bromocyclobutene is subsequently introduced to form 13 . Given the expected steric congestion around the nickel center, in combination with a saturated coordination sphere, it is likely that the second cyclobutene only adds when trans -configured with respect to the nickel complex. Lastly, complex 13 is prone to reductive elimination, which gives the homocoupled product 14 and completes the catalytic cycle after another Ni(I)-to-Ni(0) reduction. Finally, we speculated whether the bromocyclobutene could also be used in a reductive heterocoupling by carefully selecting a suitable second electrophilic partner. After screening various alkyl bromides and iodides (see the Supporting Information for a complete list of tested partners), we were, indeed, able to achieve a cross-coupling using cyclohexyl iodide ( Scheme 6 ). As previously observed in the dimerization of cyclobutenes, this process proved to be stereoconvergent and yielded 17 as a single diastereoisomer in the trans configuration, 39 starting from either trans -1a or cis -1a . Considering this stereochemical result, the ester functionality does not seem to play any role in this cross-coupling. We concluded that, because of the enhanced reactivity of the second reactant ( 16 ), radical recombination of an allyl radical with a Ni complex should occur only after formation of a Ni-alkyl complex, thereby preventing any carbonyl-directed oxidative addition (for a proposed mechanism of this reaction, see the Supporting Information ). In summary, we have developed the first Ni-catalyzed reductive dimerization of bromocyclobutenes, in which the carbonyl function appears to be responsible for the observed diastereoselectivity. The reaction tolerates a wide range of cyclobutenes bearing ester and thioester moieties to give the desired dimers in good yields and diastereoselectivities. Initial investigations and results on a reductive heterocoupling point to further possibilities in this area.
Data Availability Statement The data underlying this study are available in the published article and its Supporting Information. Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acs.orglett.3c03909 . Experimental procedures and characterization data ( 1 H and 13 C NMR) for all new compounds ( PDF ) Supplementary Material Author Contributions § These authors contributed equally. Austrian Science Fund (P32206 and W1232) and European Research Council (CoG VINCAT 682002). The authors declare no competing financial interest. Acknowledgments We are grateful to Dr. Tim Grüne (U. Vienna) for XRDS measurements and the University of Vienna for generous support of our research programs. We thank Dr. Daniel Kaiser for proofreading the manuscript and for helpful suggestions. We thank Dr. Phil Grant for helpful advice and discussions. We are grateful to Dr. Lan-Gui Xie, Dr. Davide Audisio, and Dr. Yong Chen for initial and related experimental investigations.
CC BY
no
2024-01-16 23:45:35
Org Lett. 2023 Dec 26; 26(1):355-359
oa_package/d6/cd/PMC10789092.tar.gz
PMC10789093
38156902
Bicyclo[1.1.0]butanes (BCBs) have gained growing popularity in “strain release” chemistry for the synthesis of four-membered-ring systems and para - and meta -disubstituted arene bioisosteres as well as applications in chemoselective bioconjugation. However, functionalization of the bridge position of BCBs can be challenging due to the inherent strain of the ring system and reactivity of the central C–C bond. Here we report the first late-stage bridge cross-coupling of BCBs, mediated by directed metalation/palladium catalysis.
Bicyclo[1.1.0]butanes (BCBs) ( Figure 1 a) are a class of highly strained hydrocarbons that have become valuable tools for “strain release” chemistry. 1 These reagents possess the impressive ability to react with nucleophiles, 2 radicals, 3 electrophiles, 4 and transition metal catalysts, 5 with applications ranging from the synthesis of natural products 2d and para- ( 6 ) and meta- substituted 7 arene bioisosteres to use as cystine-selective bioconjugation agents. 2b , 8 Access to these building blocks has been streamlined with the recent developments of late-stage bridgehead and bridge metalation protocols that deliver a broad portfolio of BCBs, 9 including a convenient one-pot sulfone-based reaction sequence that affords exceptional diversity. 10 BCBs that are aryl-substituted at the bridge carbon atoms are attractive targets due to their potential use in accessing arene-functionalized products upon ring opening. Specifically, access to these products would open up new avenues for medicinal chemists in bicyclo[1.1.1]pentane, -[2.1.1]hexane, and -[3.1.1]heptane synthesis for reaching novel chemical space. 7b Recent methodologies developed for the synthesis of aryl-substituted BCBs include (1) biocatalytic double diazo–alkyne condensation that introduces two identical endo / exo bridge ester substituents (bridgehead aryl, Figure 1 b); 11 (2) asymmetric intramolecular diazo insertion into styrenes, catalyzed by rhodium(II) (bridge aryl); 2e , 12 and (3) bridgehead-directed metalation and cross-coupling (bridgehead aryl). 9a The methods outlined in each case address different challenges, such as the latter providing a divergent synthesis of bicyclopentylation reagents, and the asymmetric diazo insertion facilitating a route toward the total synthesis of piperarborenine B. 2d However, although intramolecular diazo insertion offers a powerful method for asymmetric bridge-arylated BCB synthesis, it suffers from the drawback of synthetic linearity rather than late-stage diversification. We previously developed a method that enables late-stage bridge functionalization through directed metalation/electrophilic quench, 9b although this tactic did not enable the introduction of aryl and alkenyl substituents. We questioned whether we might be able to extend this approach to bridge cross-coupling by transmetalation of the intermediate organolithium, enabling the rapid delivery of bridge aryl-substituted strain release reagents ( Figure 1 c). Notably, a similar strategy has been employed in the elegant polyfunctionalization of cubanes. 13 Reaction development began by employing three potential BCB organometallic coupling partners, boronic acid 1a , stannane 1b , and organozinc 1c (prepared from metalation of BCB 2a with organolithiums and electrophilic quench ( 1a / b ) or transmetalation to ZnCl 2 ( 1c )), in Suzuki, Stille, and Negishi coupling protocols, respectively ( Table 1 , entries 1–3). Interestingly, the former two strategies led only to complete decomposition of the starting material with no observable product, while entry 3 returned 2a with no sign of degradation. This was surprising given previous reports on cyclopropylzinc Negishi couplings as well as our own work on BCB bridgehead Negishi reactivity, 9a , 14 and it was hypothesized that TMEDA might be interfering with the reaction. 
To our delight, the use of TMEDA-free metalation in the generation of 3a ( t -BuLi in THF) and submission to equivalent coupling conditions (Pd(dba) 2 /2PPh 3 ) achieved cross-coupling in 28% yield (as determined by 1 H NMR spectroscopy; entry 4). A screen of 13 phosphine ligands was conducted, with the Buchwald-type ligands producing the highest yields and CyJPhos being optimal (48%; entry 5). 15 A temperature and solvent screen identified THF at 65 °C as crucial for this transformation (entries 6 and 7). Increasing the equivalents of iodobenzene led to a further increase in the yield (60%; entry 8). While this was already a useful result, the conversion could be further enhanced by increasing the catalyst loading to 15 mol %, giving 3a in 71% yield (entry 9). On scale-up, it became apparent that stirring the reaction mixture for 1 h at room temperature was crucial; otherwise, the reaction would fail due to Pd black formation. With optimized metalation and cross-coupling conditions in hand, we then examined the scope of the reaction ( Scheme 1 ). A selection of aryl iodides bearing electron-withdrawing and -donating groups at the para position was first investigated. To our delight, these couplings proceeded in good to excellent yields ( 3a – 3e , 60–84%). Reaction efficiency was maintained with ortho -substituted aryl iodide derivatives ( 3f and 3g ). The introduction of biorelevant functionality was also possible, for example, incorporating a galactose-bearing side chain in excellent yield ( 3h , 91%, 1:1 dr due to the racemic generation of 1c ). Alkenyl iodides were also compatible with the coupling conditions ( 3i , 64%); however, alkenes bearing an electron-withdrawing group were essential for product stability. Heterocycle cross-coupling is also highly appealing from a medicinal chemistry stance due to the application of BCBs in para - and meta -arene bioisostere synthesis. We were therefore delighted to find that a representative range of azacycles could be installed in good yields (50–84%), including 2-substituted pyridine ( 3j ), indole ( 3k ), and quinoline ( 3l ). Pleasingly, these conditions could also be applied to BCB 2b , which is more sterically demanding at the bridgehead position (Ph substituent), giving 3m (64%) and 3n (55%). The latter coupling was also carried out on a 1 mmol scale without significant detriment to the yield (51%). Cross-coupling on other BCBs, such as trimethylsilyl-substituted BCB 2c and trisubstituted BCB 2d , would demonstrate the feasibility of constructing more complex derivatives, including a tetrasubstituted product. However, no product was observed when 2c and 2d were subjected to the developed metalation and cross-coupling conditions, with neither undergoing productive metalation with t -BuLi at −78 °C in THF. Fortunately, TMEDA-free conditions for directed lithiation were identified ( s -BuLi at −45 °C in THF) 15 that could be applied to 2c and 2d . These substrates were then subjected to the cross-coupling conditions and, to our delight, produced silyl-substituted BCB 3o in 45% yield and tetrasubstituted BCB 3p in 65% yield. Resolving the cross-coupling issue of 2c and 2d inspired us to examine sulfone substrates; pleasingly, 4a could be obtained in an excellent yield of 82% with the s -BuLi metalation conditions. Having successfully demonstrated cross-coupling with trisubstituted BCB 2d , we questioned whether a complementary approach could be established through a second directed bridge metalation after bridge arylation ( Scheme 2 ).
BCB 3c was chosen as a candidate, as the bridge arene possesses a para electron-withdrawing group (CF 3 ) that is tolerant of organolithiums. This substrate presents a regioselectivity challenge: the possibility of directed metalation ( 5a , Scheme 2 ) or benzylic deprotonation ( 5b ), both of which would provide a useful class of novel BCBs. Surprisingly, when 3c was subjected to the optimized conditions, neither the unsubstituted bridge nor the benzylic position underwent lithiation. Instead, s -BuLi was directed to the bridgehead methyl group, which then underwent BCB ring opening to give the corresponding enolate; quenching with allyl bromide afforded the polysubstituted exocyclic cyclobutene 6 . The observation of this alternative metalation pathway may relate to restricted rotation of the directing group in substrate 3c , which prevents access to the unsubstituted bridge, as observed in the X-ray crystal structure of 3l and the 1 H NMR spectra of the bridge aryl-BCB derivatives. 15 In summary, we have developed a convenient and general late-stage Negishi cross-coupling strategy to access sp 2 -bridge-substituted BCBs. This approach enables the introduction of arenes, heteroarenes, and alkenes with broad functional group tolerance with respect to the arene: nitro, ester, halide, silyl, nitrile, ether, and acetal groups are all accommodated, which can allow for further manipulation. It thus enables the rapid delivery of new strain release reagents, which we expect to be of use to the wider chemical community for small-ring and bioisostere construction.
Data Availability Statement The data underlying this study are available in the published article and its Supporting Information . Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acs.orglett.3c04030 . Additional experimental discussion, experimental procedures, and copies of 1 H and 13 C NMR spectra ( PDF ) Supplementary Material The authors declare no competing financial interest. Acknowledgments R.E.M. thanks the EPSRC Centre for Doctoral Training in Synthesis for Biology and Medicine for a studentship (EP/L015838/1) generously supported by AstraZeneca, Diamond Light Source, Defence Science and Technology Laboratory, Evotec, GlaxoSmithKline, Janssen, Novartis, Pfizer, Syngenta, Takeda, UCB, and Vertex. E.A.A. and A.D.G. thank the EPSRC for support (EP/S013172/1).
CC BY
no
2024-01-16 23:45:35
Org Lett. 2023 Dec 29; 26(1):360-364
oa_package/dc/64/PMC10789093.tar.gz
PMC10789095
0
Background In the human genome, gene deletion (haploinsufficiency) or duplication (triplosensitivity) results in changes in gene dosage, and dosage changes in dosage-sensitive genes result in phenotypic effects [ 1 – 3 ]. More than 300 haploinsufficient genes as well as 13 triplosensitive genes have been reported in humans, which are associated with a variety of disorders such as neurodevelopmental disorders [ 4 ]. Recently, using machine learning, a gene map was generated for dosage-sensitive genes from copy number variant data of nearly one million individuals. This map contains nearly 3000 haploinsufficient and over 1500 triplosensitive genes [ 3 ]. However, it is unclear why a subset of genes exhibits abnormal phenotypes upon gene dosage changes while the majority of others do not. The general mechanisms underlying dosage sensitivity remain poorly understood. Phase separation is a natural intracellular process that compartmentalizes proteins, RNA, and DNA in a concentration-dependent manner. During the phase separation process, these biomolecules assemble into separate condensates, provided that they are present at concentrations above a critical threshold [ 5 , 6 ]. High concentrations of specific components within these phase-separated condensates allow biological reactions to occur at accelerated rates, as these condensates enrich relevant molecules and exclude non-relevant or inhibitory molecules [ 5 ]. Therefore, dosage-sensitive gene products and phase-separating proteins share similar concentration-dependent properties. In addition to concentration dependency, products of dosage-sensitive genes and phase-separating proteins share other characteristics. For instance, intrinsically disordered regions (IDRs) and interaction domains/motifs within protein sequences assist in predicting dosage-sensitive genes [ 7 ], while interactions mediated by IDRs and multiple interaction domains/motifs constitute the driving forces behind phase separation. Furthermore, the protein products of dosage-sensitive genes tend to form homodimers [ 1 ], and phase-separating proteins often possess dimerization or oligomerization domains, which are essential for the multivalent interactions that drive phase separation [ 8 ]. Dosage-sensitive gene products and phase-separating proteins are enriched in similar biological pathways, such as transcription regulation, RNA splicing, and signaling pathways [ 6 , 9 ]. Lastly, dosage-sensitive genes lose their functions when under- or over-expressed [ 10 ]; similarly, phase-separating proteins form abnormal assemblies and cause cellular toxicity when abnormally expressed [ 11 , 12 ]. These similarities suggest that dosage sensitivity and phase separation are functionally related. However, little evidence supports this hypothesis to date. Several dosage-sensitive gene products have previously been reported to undergo phase separation, including MECP2, SYNGAP1, SOX2, and PAK2 [ 6 , 13 – 16 ]. However, so far only one recent study has directly investigated the link between phase separation and dosage sensitivity [ 17 ]. In that study, a loss-of-function (LoF) mutation in KMT2D was shown to impair its normal phase separation process owing to decreased KMT2D protein concentration, altering the functional partitioning of chromatin. As a result, patients carrying this mutation suffer from the haploinsufficiency-related disease named Kabuki syndrome [ 17 ].
Systematic studies investigating the relationship between dosage sensitivity and phase separation are therefore urgently required. In this study, both computational analyses and biological experiments showed that dosage-sensitive gene products exhibit a higher tendency to undergo phase separation. We then experimentally introduced pathogenic variants into dosage-sensitive genes to investigate whether dosage insufficiency leads to defects in phase separation. Furthermore, we used multi-omics data analysis to explore whether LoF genetic perturbations of phase-separating genes cause disturbed phenotypes. In addition, most current phase separation predictors rely on sequence features, and their prediction performance needs further improvement [ 18 ]. Based on the close ties between dosage sensitivity and phase separation, we developed an efficient phase separation predictor based on dosage-sensitivity scores derived from population genetics data.
Methods Data acquisition Dosage-sensitive information (Dosage Sensitivity Curations, 2021-04-02) was downloaded from the Clinical Genome Resource (ClinGen) [ 20 ]. LOEUF [ 22 ] scores (pLoF Metrics by Gene from gnomAD, 2021-01-05) and pLI [ 21 ] scores (Gene constraint scores from ExAC, 2016-02-12) were downloaded from the Genome Aggregation Database (gnomAD). A list of known phase-separating proteins (2021-06) was downloaded from PhaSepDB [ 39 ] ( http://db.phasep.pro/ ). The TCGA somatic mutation annotation file (MuTect2 Masked Somatic Mutation, 2021-3-12), RNA-seq data (HTSeq-FPKM, 2021-8-9), copy number variation data (Gene Level Copy Number Scores, 2021-11-29), and TCGA sample clinical information (2021-3-12) were downloaded from the TCGA data portal ( https://portal.gdc.cancer.gov/ ). ClinVar [ 35 ] vcf mutation data (vcf_GRCh38, 2020-9-14) and mutation summary data (variant_summary.txt, 2020-9-14) were downloaded from the National Center for Biotechnology Information (NCBI). The protein–protein interaction network data were downloaded from a previous study [ 49 ]. Acquisition of the human proteome The sequence data of the human proteome were downloaded from Uniprot (2020-08-06). The corresponding transcript of each gene in the Ensembl reference library (GRCh38, release-99) was mapped to the canonical proteins in Uniprot using the local BLAST tool with the following parameters: blastp -outfmt 6 -evalue 1e-5 -num_threads 4. Proteins and transcripts were matched using the criteria of a 100% match rate and identical protein length. Calculating phase separation scores of the human proteome The SaPS, PdPS, PScore, PLAAC, catGRANULE, and FuzDrop scores of each protein were calculated using the corresponding tools under their default parameters [ 11 , 19 , 23 – 25 ]. PLAAC provides three summary scores for a given sequence: LLR, CORE, and PRD. Since the LLR score is more appropriate for whole-proteome screening, the normalized LLR score (NLLR) was used to represent the PLD-forming propensity. The ten-feature versions of the SaPS and PdPS scores were used in this study. Calculating AUC of predicting phase-separating proteins Seventy-nine self-assembling human phase-separating proteins identified in a previous study [ 39 ] were collected, of which 53 were used for training and 26 were used for independent testing. One hundred twenty-one human partner-dependent phase-separating proteins were collected, of which 70 were used for training and 51 were used for independent testing. In total, 4491 human non-phase-separating proteins were collected, of which 2924 were used for training and 1567 were used for independent testing (Additional file 2 : Table S1). The independent test set was used to evaluate the AUC. Twice as many non-phase-separating proteins as phase-separating proteins were randomly selected from the non-phase-separating protein set as negative samples. All self-assembling proteins or all partner-dependent phase-separating proteins were selected as positive samples. The above process was repeated 50 times, and the mean AUC of each score was calculated for comparison. Calculating DM scores of phase separation scores To account for the confounding effects of factors such as protein half-life on the phase separation scores, the DM (distance-to-median) value of each gene i was computed as DM_i = PS_i − med_i, where PS_i is the phase separation score of gene i and med_i is the rolling median at gene i taken from the scatter plot between the confounding factor and the phase separation scores. To compute the rolling medians, the following parameters were used: 50 genes per window, with 25 overlapping genes between adjacent windows.
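A minimal pandas sketch of this DM computation (the column names, and the convention that genes in the window overlap take the median of the later window, are illustrative assumptions, not the authors' code):

```python
import pandas as pd

def dm_scores(df, window=50, step=25):
    """Distance-to-median (DM) correction: subtract from each gene's phase
    separation score (PS) the median PS of a window of genes with similar
    values of the confounding factor (e.g., protein half-life)."""
    df = df.sort_values("confounder").reset_index(drop=True)
    med = pd.Series(float("nan"), index=df.index)
    # Windows of 50 genes; adjacent windows share 25 genes, so genes in the
    # overlap simply keep the median of the later window here -- one simple
    # convention; the original implementation may handle overlaps differently.
    for start in range(0, len(df), step):
        med.iloc[start:start + window] = df["ps_score"].iloc[start:start + window].median()
    df["DM"] = df["ps_score"] - med
    return df
```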
Prediction of IDR The ESpritz DisProt program, with the decision threshold set at a 5% false positive rate (FPR), was used to predict potential disordered regions [ 50 ]. Gene set enrichment analysis Gene set enrichment analysis was performed with WebGestalt ( http://www.webgestalt.org/ ) [ 51 ] using over-representation analysis (ORA) as the method and Gene Ontology (GO) biological process as the pathway database. The genome protein-coding gene set was used as the reference set, and the weighted set cover method was used to reduce redundancy. Enriched pathways were selected based on FDR < 0.05. Gene lists for GO terms were downloaded from Gene Ontology ( http://geneontology.org/ ). Identifying PTVs in ClinVar database The ClinVar vcf mutation data were annotated with SnpEff using the Ensembl reference annotation (release-99) to obtain mutation information (gene, transcript, mutation position, and mutation type). Only mutations in the canonical transcript of each gene were selected. Mutations in ClinVar meeting the following criteria were selected: (1) with the status of pathogenic or likely pathogenic (P/LP); (2) without any conflicting interpretations; (3) with the review status of one or more gold stars. PTVs here comprised nonsense, frameshift, and splice-disrupting mutations, corresponding to “stop_gained,” “frameshift_variant,” and “splice_region_variant” in the SnpEff annotation. Identifying deletion copy number variants in ClinVar From the ClinVar mutation summary data, “copy number loss” variants meeting the following criteria were identified: (1) with the status of pathogenic or likely pathogenic (P/LP); (2) without any conflicting interpretations; (3) with the review status of one or more gold stars. Rules to predict NMD-escaping mutations The cDNA sequences and positional annotations of the exons of each gene were downloaded from Ensembl (GRCh38, release-99). According to the position of the premature termination codon (PTC) on the cDNA, the following rules were used to predict NMD-escaping mutations [ 37 ]: (1) if the PTC is in the last exon; (2) if the PTC is in the last 50 nt of the penultimate exon; (3) if the PTC is < 150 nt away from the start codon; and (4) if the PTC is in a long exon (> 400 nt) (a code sketch of these rules is given below). Quantification of puncta in cells CellProfiler 3 was used to quantify the puncta in cells. First, all cells in each image were identified based on target protein fluorescence or DAPI fluorescence. All puncta in the cells were subsequently identified under optimized parameters. Indicators including the fluorescence intensity and area of each cell and punctum were then output by the program. Droplet recognition was implemented with the adaptive Otsu method, which sets thresholds that divide the image into three classes (foreground, mid-level, and background) according to brightness; the brightest class (the foreground) corresponds to the droplets of interest. The fluorescence intensity is the sum of the normalized pixel values of the pixels contained in the nucleus or droplet, where pixel values are normalized by rescaling the raw image data to the range 0.0–1.0. The average fluorescence intensity was calculated by dividing the fluorescence intensity of an object by the number of pixels it contains (that is, the area of the object).
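Referring back to the NMD-escape rules listed above, a minimal sketch of their logic (the 1-based cDNA coordinate convention and all names are assumptions made for illustration):

```python
def escapes_nmd(ptc_pos, exons, start_codon_pos):
    """Predict whether a premature termination codon (PTC) escapes
    nonsense-mediated decay. `exons` is a 5'->3' ordered list of
    (start, end) positions on the cDNA, 1-based and inclusive."""
    last_start, _ = exons[-1]
    # Rule 1: PTC lies in the last exon.
    if ptc_pos >= last_start:
        return True
    # Rule 2: PTC lies in the last 50 nt of the penultimate exon.
    if len(exons) >= 2:
        pen_start, pen_end = exons[-2]
        if pen_start <= ptc_pos <= pen_end and (pen_end - ptc_pos) < 50:
            return True
    # Rule 3: PTC is < 150 nt away from the start codon.
    if (ptc_pos - start_codon_pos) < 150:
        return True
    # Rule 4: PTC falls in a long exon (> 400 nt).
    return any(start <= ptc_pos <= end and (end - start + 1) > 400
               for start, end in exons)
```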
TruncPS model Dataset of positive phase-separating regions A number of previous studies selected regions of phase-separating proteins for verification experiments to identify the key regions responsible for phase separation. The newly released version of PhaSepDB collects these experimentally tested protein regions (LLPS regions). Here, repetitive protein regions were removed from PhaSepDB, and protein regions that spontaneously phase separate in experiments were manually identified. Finally, 93 positive phase-separating regions in humans were obtained (Additional file 5 : Table S4). Dataset of negative phase-separating regions The negative phase-separating regions were derived from two sources. The first comprised the regions of phase-separating proteins remaining after removal of the positive phase-separating regions; 56 such regions (length > 20 aa) were obtained. The second was derived from non-phase-separating proteins: regions on non-phase-separating proteins were sampled according to the length distribution of the positive dataset, with a sampling size twice that of the positive dataset. Finally, 242 negative phase-separating regions in humans were obtained (Additional file 5 : Table S4). Features for the model The features used by the phase separation predictor SaPS [ 19 ], previously constructed by our laboratory, were adopted together with sequence embedding features. The Hydropathy, Kappa, and Net-charge scores of a region were calculated with localCIDER using the default parameters [ 52 ]. The ESpritz DisProt program, with the decision threshold set at a 5% false positive rate (FPR), was used to predict potential disordered regions [ 50 ], and the SEG local package with default parameters was used to detect low-complexity domains (LCDs) within a given protein sequence [ 53 ]. The number of amino acids in the corresponding disordered or low-complexity region divided by the sequence length was defined as the IDR or LCD proportion. Bepler's model was used to obtain sequence embedding features [ 54 ]. The feature matrix (amino acid sequence length × 3705) obtained from the embedding was averaged along the sequence-length dimension, yielding a 3705-dimensional embedding feature vector. Model training Our model was constructed with XGBoost, a tree-based machine learning algorithm with high efficiency and exemplary performance in handling tabular data. A fivefold cross-validation strategy was adopted to test the performance of the XGBoost model on the positive and negative datasets, and the average AUC was calculated. At the same time, the mean AUCs of the PScore, the PLAAC score, the IDR proportion, and the LCD proportion were calculated for comparison with the XGBoost model. The full positive and negative datasets were then used to train the final XGBoost model, which was used to predict TruncPS scores for all NMD-escaping mutations. DosPS model A phase separation predictor called DosPS was constructed utilizing the LOEUF, pLI, pHaplo, and pTriplo scores in a logistic regression (LR) model. A training set comprising 53 human self-assembling phase-separating proteins, 70 partner-dependent phase-separating proteins, and 282 randomly sampled non-phase-separating proteins was used to train the model. Grid search was used to optimize the “random_state” and “C” parameters of the LR model, and the independent test set was used to test the performance of the scores (a minimal sketch of this procedure follows below).
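A minimal scikit-learn sketch of the DosPS training and evaluation just described (the placeholder data, the C grid, and all variable names are assumptions for illustration; the real feature columns are the LOEUF, pLI, pHaplo, and pTriplo scores):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
# Placeholder feature matrices with columns [LOEUF, pLI, pHaplo, pTriplo];
# 405 training proteins = 53 self-assembling + 70 partner-dependent + 282 negative.
X_train, y_train = rng.random((405, 4)), rng.integers(0, 2, 405)
X_test, y_test = rng.random((150, 4)), rng.integers(0, 2, 150)

# Grid search over the regularization strength C of the LR model.
grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    param_grid={"C": [0.01, 0.1, 1.0, 10.0, 100.0]},
                    scoring="roc_auc", cv=5)
grid.fit(X_train, y_train)

# The DosPS score is the predicted probability of phase separation;
# performance is evaluated on the held-out independent test set.
dosps = grid.best_estimator_.predict_proba(X_test)[:, 1]
print("independent test AUC:", roc_auc_score(y_test, dosps))
```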
The final model, trained on the full training data, was used to predict DosPS scores for the human proteome (Additional file 6 : Table S5).

Experiments

Cell lines, chemical reagents, and antibodies
HeLa and HEK 293T cell lines were cultured in DMEM with 10% fetal bovine serum and 1% penicillin and streptomycin (Hyclone) at 37 °C and 5% CO 2 . Cell lines were either newly acquired from ATCC or authenticated within 6 months of growth, and cells under culture were frequently tested for potential mycoplasma contamination. Lipofectamine 3000 transfection reagent (catalog no. L3000008) and Lipofectamine 2000 transfection reagent (catalog no. 11668027) were obtained from Thermo Fisher Scientific. All antibodies used in this study are listed in Additional file 7 : Table S6.

Cloning of constructs
The full-length coding sequences of Homo sapiens SOX2 (NCBI Entrez Gene ID: 6657), PAX6 (NCBI Entrez Gene ID: 5080), HNRNPK (NCBI Entrez Gene ID: 3190), and PQBP1 (NCBI Entrez Gene ID: 10084) were amplified by PCR from human cDNA and cloned into the pHis-parallel vector, with a 6 × His tag added at the N-terminus. The mutations in SOX2 and PQBP1 were introduced via PCR and confirmed by DNA sequencing. To generate the FUS-SOX2 (1–128) chimera, human SOX2 (residues 1–128) and FUS-IDR (NCBI Entrez Gene ID: 2521, residues 1–214) sequences were cloned from human cDNA and inserted into the pHis-parallel vector with a 6 × His tag. For rescue constructs, PLVX-mCherry-SOX2, PLVX-mCherry-FUS-SOX2 (1–128), PLVX-mCherry-PQBP1, and their mutants were also constructed for expression in cells. The full-length coding sequences of Homo sapiens EHMT1 (NCBI Entrez Gene ID: 79813), TBL1XR1 (NCBI Entrez Gene ID: 79718), WDR45 (NCBI Entrez Gene ID: 11152), SLC2A1 (NCBI Entrez Gene ID: 6513), PHF6 (NCBI Entrez Gene ID: 84295), PBX1 (NCBI Entrez Gene ID: 5087), KCNQ2 (NCBI Entrez Gene ID: 3784), and FGD1 (NCBI Entrez Gene ID: 2245) were amplified by PCR from human cDNA and cloned into the PLVX-EGFP vector, with an EGFP tag added at the N-terminus.

Protein expression and purification
HNRNPK, PQBP1, HNRNPK-∆IDR, PAX6-∆IDR, PQBP1-∆IDR, and PQBP1 mutants were expressed in E. coli strain BL21 (DE3) cells. The bacteria were cultured at 37 °C and 220 rpm in a shaker incubator in LB medium to OD600 0.6–0.8, then induced with 0.5 mM IPTG for 16 h at 16 °C. The bacteria were collected by centrifugation at 4000 rpm for 30 min at 4 °C, resuspended in lysis buffer (20 mM Tris–HCl, 200 mM NaCl, 4 M urea, 0.1 mM PMSF, 1 × protease inhibitor cocktail, pH 7.5), and sonicated for 30 min on ice (180 W, 5 s on and 5 s off). The lysates were clarified by centrifugation at 20,000 g for 40 min at 4 °C. Next, the supernatant was loaded onto Ni 2+ -NTA resin. The column was washed with wash buffer (20 mM Tris–HCl, 200 mM NaCl, 4 M urea, 30 mM imidazole, pH 7.5). Subsequently, proteins were eluted with elution buffer (20 mM Tris–HCl, 200 mM NaCl, 4 M urea, 300 mM imidazole, pH 7.5). Eluted proteins were concentrated using Amicon Ultra filters (Millipore) and analyzed by SDS-PAGE. For SOX2, SOX2-p.Gly129fs, PAX6, HNRNPK-IDR, PAX6-IDR, and PQBP1-IDR protein purification, expression vectors were transformed into E. coli strain BL21 (DE3) and cultured at 37 °C to OD600 0.6–0.8, then induced with 0.5 mM IPTG for 4 h at 37 °C. E.
coli cells were collected by centrifugation at 4000 rpm for 30 min at 4 °C, resuspended in lysis buffer (20 mM Tris–HCl, 200 mM NaCl, 6 M guanidine-HCl, 10 mM β-mercaptoethanol, 0.1 mM PMSF, 1 × protease inhibitor cocktail, pH 7.5), and lysed by sonication for 40 min (180 W, 10 s on and 10 s off). The lysates were clarified by high-speed centrifugation for 40 min at 20,000 g at 4 °C. The supernatant was purified through Ni 2+ -NTA resin and washed with wash buffer (20 mM Tris–HCl, 200 mM NaCl, 6 M guanidine-HCl, 20 mM imidazole, and 10 mM β-ME, pH 7.5). Protein elution was performed with elution buffer (20 mM Tris–HCl, 200 mM NaCl, 6 M guanidine-HCl, 20 mM β-mercaptoethanol, and 300 mM imidazole, pH 7.5). Eluted proteins were concentrated using Amicon Ultra filters (Millipore) and confirmed by SDS-PAGE. PAX6 proteins were then diluted to 20 mL with a low-salt buffer (20 mM Tris–HCl, 100 mM NaCl, 6 M guanidine-HCl, and 10 mM β-mercaptoethanol, pH 8.0) and further purified over a HiTrap Q column according to the manufacturer’s protocol (Cytiva). Fractions containing PAX6 proteins were pooled, concentrated, and analyzed by SDS-PAGE. All proteins were labeled with Alexa Fluor 488 (Thermo Fisher), and all purification steps were performed at 4 °C.

Phase-separated droplet formation
Phase-separated droplets of SOX2, SOX2-p.Gly129fs, PQBP1, PAX6, HNRNPK-IDR, PAX6-IDR, PQBP1-IDR, HNRNPK-∆IDR, PAX6-∆IDR, and PQBP1-∆IDR proteins were formed by quick dilution of the purified protein out of denaturing buffer into phase separation buffer containing 25 mM Tris–HCl pH 7.5 and various concentrations of NaCl to reach the final protein concentrations. Comparison of PQBP1 and its mutants was performed in 150 mM NaCl, 20 mM Tris–HCl pH 7.5, with protein concentrations ranging from 5 to 200 μM. Comparison of SOX2 and SOX2-p.Gly129fs was performed in 20 mM Tris–HCl pH 7.5 with either 150 mM or 5 M NaCl, with protein concentrations ranging from 2.5 to 20 μM. Moreover, comparison of PQBP1-IDR and PQBP1-∆IDR was performed in 50 mM NaCl, 20 mM Tris–HCl pH 7.5. Comparison of HNRNPK-IDR and HNRNPK-∆IDR was performed in 150 mM NaCl, 20 mM Tris–HCl pH 7.5. Comparison of PAX6-IDR and PAX6-∆IDR was performed in 3 M NaCl, 20 mM Tris–HCl pH 7.5. HNRNPK and HNRNPK-∆IDR proteins were dialyzed into a dialysis buffer (25 mM Tris–HCl, 500 mM NaCl, and 0.1 mM PMSF, pH 7.5) at 4 °C overnight, then concentrated and quickly diluted into a phase separation buffer containing 25 mM Tris–HCl pH 7.5 and different concentrations of NaCl. All phase diagrams were obtained on 384-well microscopy plates (Cellvis), incubated at room temperature for 30 min before being imaged on an Olympus SpinSR spinning disk confocal super-resolution microscope with a × 100 oil objective.

Fluorescence recovery after photobleaching (FRAP) measurements
In vitro FRAP experiments were carried out on a Nikon A1 microscope equipped with a × 100 oil objective. Droplets were bleached with a 488-nm laser pulse (3 repeats), and recovery from photobleaching was recorded for the indicated time.

Generation of heterozygous knockdown or knockout cell lines
Knockdown of SOX2 and knockout of PQBP1 were performed in HEK 293T cells, and knockdown of PAX6 in HeLa cells. All small guide RNAs (sgRNAs) used in this study were selected using the CRISPR design tool ( https://portals.broadinstitute.org/gppx/crispick/public ).
To generate the SOX2 knockdown (KD) HEK 293T cell line, an sgRNA (target sequence 5′-CGGCAATAGCATGGCGAGCG-3′) targeting the first exon of the SOX2 gene was used. Knockout of PQBP1 was conducted in HEK 293T cells with two sgRNAs targeting the second exon of the PQBP1 gene (5′-TCGAACACCTTGTACCAGCT-3′ and 5′-TGGTGGTAGGCCCTCCAACC-3′). Knockdown of PAX6 was conducted in HeLa cells with two sgRNAs targeting the first exon of the PAX6 gene (5′-CCAGCCAGAGCCAGCATGCA-3′ and 5′-CTGGTCTTTCTGGGACTTCG-3′). Cells were transfected with the sgRNAs using Lipofectamine 3000 (Thermo Fisher Scientific) according to the manufacturer’s instructions. Twenty-four hours after transfection, 200 cells were plated in a 150-mm cell culture plate. After 2 weeks, single-cell colonies were collected using colony cylinders. More than 20 colonies were analyzed by Western blot. Potential knockdown or knockout colonies were confirmed by DNA sequencing around the sgRNA targeting site. Mutation results are shown in Additional file 1 : Fig. S8D-F.

Western blot
Cell lysates were prepared from adherent cells. Proteins were fractionated by SDS-PAGE and transferred to nitrocellulose membranes. The membranes were incubated with primary antibodies overnight at 4 °C, and then with HRP-conjugated secondary antibodies for 1 h at room temperature. Finally, a chemiluminescence reagent was used to amplify the ECL signal and visualize the results. Band intensities were quantified using ImageJ software.

Cell immunofluorescence staining
Cells for fluorescence imaging were seeded onto number 1.5 glass-bottom dishes 24 h prior to experiments. After washing with PBS for 5 min, cells were fixed in 4% (v/v) paraformaldehyde for 15 min and permeabilized with 0.1% (v/v) Triton X-100 for 15 min. Cells were blocked for 1 h at room temperature with 5% (w/v) BSA in PBS containing 1% Tween-20 (PBS-T). Cells were sequentially incubated with the indicated primary and secondary antibodies diluted in PBS-T (1:200–1:500) for 1 h each. Secondary antibodies were conjugated to either Alexa Fluor 488 or 568. After washing three times, Pro-Long Gold Antifade reagent (Life Technologies) was mounted onto the samples. Imaging was conducted on a Nikon A1R HD25 microscope or an Olympus SpinSR spinning disk confocal microscope (× 100 oil objective). Thresholds were kept constant across all images for endogenous cell immunofluorescence.

Cell culture and transfection
Cells were cultured in Dulbecco’s modified Eagle’s medium (Gibco) supplemented with 10% fetal bovine serum and 1% penicillin and streptomycin (Hyclone) at 37 °C and 5% CO 2 . All cell lines tested negative for mycoplasma contamination and were grown to ~ 70% confluence for transfection. SOX2 knockdown HEK 293T cells were transfected with PLVX-mCherry-FUS-SOX2 (1–128), PLVX-mCherry-SOX2, and its mutant using Lipofectamine 3000 (Thermo Fisher Scientific) according to the manufacturer’s instructions. PQBP1 knockout HEK 293T cells were transfected with PLVX-mCherry-PQBP1 and its mutants. Cells were incubated with the transfection mixture for 6–16 h, after which it was replaced with fresh medium. Live-cell images were acquired using an Olympus SpinSR spinning disk confocal super-resolution microscope with a × 100 oil objective.
Quantification of the relationship between protein concentration and phase separation ability in vivo
To quantify the relationship between protein concentration and phase separation ability in vivo, mCherry-tagged PQBP1 was transiently transfected into PQBP1 knockout HEK 293T cells in 35-mm glass-bottom dishes (Cellvis) and imaged on an Olympus SpinSR spinning disk confocal super-resolution microscope with a × 100 oil objective. CellProfiler 3 was used to quantify the puncta properties and the mean fluorescence intensity of the protein in cells.

Dual luciferase reporter assay
SOX2-KD cells were co-transfected with SOX2 expression plasmids, firefly luciferase reporter plasmids, and the internal control vector pRL-TK (Renilla) using Lipofectamine 3000 (Thermo Fisher Scientific). Cells were incubated with the transfection mixture for 12 h, after which it was replaced with fresh medium. Twenty-four hours after transfection, cells were lysed and assayed for luciferase activity using the Dual Luciferase Reporter Assay System (Promega). The data represent one of at least three independent assays. Standard deviations of the mean and Student’s t test were analyzed using GraphPad Prism 7. The experiments were repeated at least three times.

RT-qPCR
Control and SOX2 knockdown HEK 293T cells were transfected with the indicated plasmids for 48 h. Total RNA was purified from cells using Trizol and quantified by Nanodrop. Two micrograms of total RNA were reverse transcribed to cDNA using TransScript® One-Step gDNA Removal and cDNA Synthesis SuperMix (TransGen, AT311-832 02). One microliter of a 1:5 cDNA dilution was used for quantitative PCR with PerfectStart® Green qPCR SuperMix (TransGen, AQ601-01) on an ABI QuantStudio 6 Real-Time PCR system. Three replicates of each target gene were tested in each repeated experiment. The primers used in this experiment are listed in Supplementary Table 7 . We used α-tubulin to normalize the data and calculated the normalized fold change for each target gene.
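The α-tubulin normalization lends itself to the standard ΔΔCt calculation. The snippet below is a minimal sketch of that arithmetic, assuming the common 2^–ΔΔCt method (the text states only that data were normalized to α-tubulin, not the exact formula) and hypothetical Ct values.

```python
# Minimal ΔΔCt sketch (assumes the common 2^-ΔΔCt method; the paper
# states only that data were normalized to α-tubulin).
def fold_change(ct_target_sample: float, ct_tubulin_sample: float,
                ct_target_control: float, ct_tubulin_control: float) -> float:
    d_ct_sample = ct_target_sample - ct_tubulin_sample     # ΔCt, sample
    d_ct_control = ct_target_control - ct_tubulin_control  # ΔCt, control
    dd_ct = d_ct_sample - d_ct_control                     # ΔΔCt
    return 2 ** (-dd_ct)

# Hypothetical Ct values: a target gene in knockdown vs wild-type cells
print(fold_change(26.1, 18.0, 24.3, 18.1))  # ≈ 0.27-fold vs control
```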
Results

Dosage-sensitive gene products tend to undergo phase separation
To better understand whether dosage-sensitive gene products exhibit a general tendency to undergo phase separation, we assessed the phase separation scores of dosage-sensitive genes using our previously developed phase separation predictor SaPS [ 19 ]. We obtained 311 haploinsufficient and 13 triplosensitive genes from the ClinGen database [ 20 ]. Compared to proteins in the human proteome, the phase separation scores were significantly higher for dosage-sensitive gene products (Fig. 1 A). Furthermore, we found that eight dosage-sensitive genes belonging to both the haploinsufficient and triplosensitive sets exhibited the highest phase separation scores (Fig. 1 A). Since the number of verified dosage-sensitive genes remains limited, we extended our analysis to genes predicted to have high dosage sensitivity potential. Haploinsufficient genes tend to be LoF-intolerant, and two LoF-intolerance scores, namely pLI [ 21 ] and LOEUF [ 22 ], were used to identify genes with high haploinsufficiency potential. In addition, the pHaplo and pTriplo scores generated from large-scale copy number variant data were used to identify genes with high haploinsufficient/triplosensitive potential [ 3 ]. As shown in Fig. 1 A, genes with high dosage sensitivity potential (corresponding to high pLI scores, low LOEUF scores, high pHaplo scores, and high pTriplo scores) also exhibited significantly higher phase separation scores. Next, we examined whether dosage-sensitive scores are significantly correlated with phase separation scores using linear regression. As shown in Fig. 1 B and Additional file 1 : Fig. S2, both haploinsufficient and triplosensitive measures were significantly correlated with phase separation scores. Currently, multiple phase separation predictors are available, with each algorithm preferentially prioritizing phase-separating proteins featuring different sequence information [ 18 ]. We repeated the correlation analysis using phase separation scores generated by other available predictors, including PLAAC [ 23 ], PScore [ 24 ], catGRANULE [ 11 ], and FuzDrop [ 25 ]. As shown in Additional file 1 : Fig. S1-S2, phase separation scores generated by all these predictors were significantly correlated with dosage-sensitive scores as well. As available phase separation predictors do not rely on the population genetics information underlying dosage-sensitive scores, we were confident that the close relationship between dosage sensitivity and phase separation was not caused by confounding bias from population genetics or protein sequence information. Based on these findings, we hypothesized that dosage-sensitive scores can be used to measure the ability of proteins to phase separate. To validate our hypothesis, we built phase separation predictors based on dosage-sensitive scores and evaluated the predictors by the area under the curve (AUC). Phase-separating proteins can be divided into two groups: self-assembling proteins, which can phase separate spontaneously, and partner-dependent proteins, which interact with partners to undergo phase separation [ 19 ]. For self-assembling proteins (Additional file 2 : Table S1), the phase separation predictors based on dosage-sensitive scores achieved good AUC performance (Fig. 1 C).
For partner-dependent proteins (Additional file 2 : Table S1), except for PdPS, which integrated posttranslational modification information, most of the available phase separation predictors performed poorly (Fig. 1 D). However, the phase separation predictors based on dosage-sensitive scores exhibited much higher AUCs than most available phase separation predictors (Fig. 1 D). Previous studies have demonstrated that dosage sensitivity is correlated with a number of factors such as protein half-life, mRNA half-life, and translation rate (Additional file 1 : Fig. S3A-C) [ 26 – 29 ]. To control for the impact of other factors, we applied the distance-to-the-median (DM) score to standardize each factor. The results showed that the phase separation score was still significantly higher for genes with high dosage sensitivity (Fig. 1 E). We also performed multiple linear regression analysis in order to control for multiple factors simultaneously, which likewise showed that the coefficient of phase separation with respect to dosage sensitivity was the highest when the other factors were held constant (Fig. 1 F). These in silico analyses demonstrated that dosage sensitivity is highly correlated with phase separation independently of other related factors. Studies have also shown that promiscuous linear motifs in disordered regions are associated with dosage sensitivity [ 7 ]. Moreover, disordered regions in proteins are important mediators of phase separation. In order to exclude the possibility that promiscuous linear motifs in disordered regions might be the cause of the association between phase separation and dosage sensitivity, we compared the dosage sensitivity scores of known phase-separating proteins with high and low proportions of disordered regions. Compared to proteins in the human proteome, the scores for dosage sensitivity were significantly higher for phase-separating proteins with both high and low proportions of disordered regions (Fig. 1 G). Therefore, this result demonstrated that dosage sensitivity is highly correlated with phase separation independently of disordered region proportions.

Protein products of the dosage-sensitive genes PQBP1, HNRNPK, and PAX6 undergo phase separation
Of the 317 dosage-sensitive genes from the ClinGen database, 17 gene products were previously reported to undergo phase separation, such as KMT2D [ 17 ], SYNGAP1 [ 14 ], and SOX2 [ 15 ] (Fig. 2 A, Additional file 1 : Fig. S4). Many dosage-sensitive gene products exhibit high phase separation scores but have not been verified experimentally (Fig. 2 A). Thus, we tested the phase separation ability of three such proteins: PQBP1, implicated in Renpenning syndrome [ 30 ], HNRNPK in Au-Kline syndrome [ 31 ], and PAX6 in aniridia [ 32 ]. We purified the bacterially expressed recombinant PQBP1, HNRNPK, and PAX6 proteins (Additional file 1 : Fig. S5) and analyzed their phase separation in vitro. As shown in Fig. 2 B, both PQBP1 and HNRNPK formed spherical liquid droplets in a salt- and protein-concentration-dependent manner. Next, we used fluorescence recovery after photobleaching (FRAP) to quantify droplet fluidity. PQBP1 droplets recovered 50% of their fluorescence intensity within 90 s post-bleaching. Similarly, HNRNPK droplets reached 50% recovery within 30 s, indicating a highly dynamic exchange of both proteins between the droplets and the environment (Fig. 2 C).
PAX6 condensed only at low protein concentration and extremely high salt concentration, and exhibited a very slow recovery rate (Fig. 2 B, C). A similar case has been reported before: SOX2, another pluripotent transcription factor, forms droplets only at low protein concentration and high salt concentration [ 15 ]. PAX6 and SOX2 form a SOX2/PAX6/DNA ternary complex and together promote lens development [ 33 ]. Haploinsufficiency of either PAX6 or SOX2 results in similar eye diseases [ 32 , 34 ]. Together, these similar phase separation behaviors suggested that PAX6 is functionally related to SOX2. As controls, we also purified truncated proteins, including recombinant HNRNPK-IDR (residues 269–463), HNRNPK-∆IDR (residues 1–268), PAX6-IDR (residues 173–422), PAX6-∆IDR (residues 1–172), PQBP1-IDR (residues 142–265), and PQBP1-∆IDR (residues 1–141), to test their phase separation ability. The IDR regions of these proteins readily underwent phase separation, forming spherical droplets, whereas the ∆IDR constructs did not: HNRNPK-∆IDR and PQBP1-∆IDR formed precipitates, and PAX6-∆IDR remained in a soluble state (Additional file 1 : Fig. S5-6). These truncations demonstrate that the phase separation of these proteins depends on the predicted IDR. In addition, immunofluorescence experiments for PQBP1, HNRNPK, and PAX6 showed that these proteins formed clear puncta in cells (Fig. 2 D). Together, our computational and experimental results suggested that phase separation of dosage-sensitive gene products is more general than currently appreciated.

Putative LoF mutations in dosage-sensitive genes result in phase separation defects
After showing that dosage-sensitive genes tend to undergo phase separation, we next explored whether pathogenic variations in dosage-sensitive genes would impact their phase separation process. Since phase separation requires protein concentrations to reach a critical level, we hypothesized that products of dosage-sensitive genes would not undergo phase separation if LoF mutations cause protein levels to fall below the concentration that triggers phase separation. To test our hypothesis, we searched the ClinVar database [ 35 ] for LoF mutations resulting in a decrease in the protein level of haploinsufficient genes. Deletion of one copy of a haploinsufficient gene results in reduced protein levels, while protein-truncating variants (PTVs), which introduce premature stop codons into haploinsufficient genes, might promote degradation of mutant mRNAs by nonsense-mediated mRNA decay (NMD), eventually lowering protein levels as well [ 36 ]. Therefore, here we defined deletions and NMD-causing mutations as putative LoF mutations. Of the 311 haploinsufficient genes defined in the ClinGen database, we found 263 genes that harbor pathogenic deletions and 234 genes possessing pathogenic NMD-causing mutations based on the rules of NMD escape [ 37 ] (Fig. 3 A, Additional file 1 : Fig. S7, Additional file 3 : Table S2). For example, it was reported that deletion of one gene copy of the haploinsufficient gene SOX2 causes anophthalmia syndrome, which is characterized by abnormal development of the eyes and other parts of the body [ 34 ]. To assess the consequences of gene copy deletion on phase separation, we constructed two heterozygous knockdown cell lines via CRISPR-Cas9-mediated gene editing by targeting one of the two alleles.
We found that, compared to the wild-type cell line, the expression levels of SOX2 or PAX6 in the knockdown lines were lower (Additional file 1 : Fig. S8A-B). As shown in Fig. 3 B–E and Additional file 1 : Fig. S8G-J, upon immunofluorescence staining, SOX2 or PAX6 formed a large number of puncta in the nucleus of wild-type cells, while the two independent knockdown cell lines per gene showed far fewer puncta. To explore whether phase separation intensity decreases more dramatically than protein concentration in the heterozygous knockdown cells, we normalized the phase separation intensity by the mean fluorescence intensity, which indicates protein concentration in the cells. The results showed that the normalized phase separation intensity in the knockdown cell lines was still lower than that of wild-type cells (Fig. 3 D, E). In agreement with our hypothesis, deletion of one gene copy of SOX2 or PAX6, which mimicked the heterozygous mutations in disease, changed their phase separation properties in cells. Next, we directly assessed how the protein expression of haploinsufficient genes correlated with their phase separation ability. To this end, we transfected PQBP1 knockout cells with plasmids expressing mCherry-PQBP1 and mCherry-PQBP1-∆IDR fusion proteins at different doses. We found that the fluorescence intensity of mCherry-PQBP1 puncta displayed a non-linear relationship with the concentration of the transfected protein (Fig. 3 F). As a negative control, mCherry-PQBP1-∆IDR did not form any puncta in cells, which was consistent with the droplet assay (Fig. 3 G, Additional file 1 : Fig. S5-6). We found no correlation between droplet formation and mCherry-PQBP1-∆IDR levels in this negative control. These findings suggested that the extent of phase separation primarily relies on the protein expression levels of haploinsufficient genes.

NMD-escaping mutations in dosage-sensitive genes lose phase-separation-prone regions and cause abnormal phase separation
In addition to NMD-causing mutations, NMD-escaping mutations, which fail to trigger NMD, commonly result in the expression of truncated proteins [ 38 ]. To assess whether truncated proteins that have lost phase-separation-prone regions exhibit abnormal phase separation ability, we obtained NMD-escaping mutations from the ClinVar database. Of the 311 haploinsufficient genes in the ClinGen database, 262 harbored pathogenic NMD-escaping mutations (Fig. 3 A, Additional file 1 : Fig. S7, S9A, Additional file 3 : Table S2). To identify truncated proteins that might lose phase-separation-prone regions, we developed a tool called TruncPS to evaluate the phase separation potential of the truncated regions (Additional file 1 : Fig. S9B, see Methods). Briefly, TruncPS used experimentally verified phase-separation-prone regions in PhaSepDB [ 39 ] as the positive training set (Additional file 5 : Table S4) and evaluated the phase separation capability of the truncated region by integrating multiple features, including sequence embedding, intrinsically disordered region (IDR) proportion, low-complexity domain (LCD) proportion, hydropathy, kappa, and net-charge properties. As shown in Fig. 4 A, the prediction performance of TruncPS was much better than that of currently available phase separation predictors used to screen phase-separation-prone regions. To obtain phase separation impact scores for NMD-escaping mutations, we applied TruncPS to the truncated regions of NMD-escaping mutations in the 262 haploinsufficient genes (Additional file 1 : Fig. S9C-D).
For example, a frameshift mutation at position 129 of the SOX2 protein results in the loss of 188 aa (Fig. 4 B, Additional file 4 : Table S3), generating a truncated SOX2 variant. The high TruncPS score of this mutation indicates that the ability of SOX2 to phase separate is reduced following truncation. To validate this prediction, we assessed how the mutation changes the phase separation ability of SOX2. As shown in Fig. 4 C, the protein-truncating variant SOX2-p.Gly129fs resulted in a significant reduction in phase separation compared to the wild-type SOX2 protein under physiological salt concentration. The contrast was even more pronounced in the presence of 5 M NaCl (Fig. 4 C and Additional file 1 : Fig. S5). We complemented SOX2 knockdown cells with similar levels of wild-type SOX2 or SOX2-p.Gly129fs proteins (Additional file 1 : Fig. S11C-E). At matched mean protein fluorescence intensities, the intracellular results mirrored the in vitro results (Fig. 4 D and Additional file 1 : Fig. S10A-B). Together, these findings strongly suggested that the loss of part of the phase-separation-prone regions due to the mutation abolishes the ability of SOX2 to phase separate. Furthermore, a considerable number of haploinsufficient genes possess multiple NMD-escaping mutations that generate truncations of different lengths. As shown in Fig. 4 E, a longer truncated region was generally coupled with a higher TruncPS score. We observed that the length of the truncated region positively correlated with the decrease in phase separation ability. To validate this observation, we generated truncations of different lengths for PQBP1 (Additional file 4 : Table S3). PQBP1 consists mainly of a folded WW domain (residues 48–81), a central IDR (residues 104–163) with a polar-amino-acid-rich domain (PRD) of high charge density, followed by a nuclear localization signal (NLS; residues 170–187) and a C-terminal IDR (residues 190–265) (Fig. 4 F). The in vitro phase separation ability of the PQBP1 mutant p.Arg260* was slightly lower than that of the wild-type, while the phase separation abilities of p.Arg214fs and p.Glu183fs progressively decreased as the truncated region lengthened. The p.Arg155* and p.Arg142* mutants, instead of forming spherical liquid-like droplets, precipitated heavily under all conditions tested (Fig. 4 G, Additional file 1 : Fig. S5). This finding is consistent with our in vivo results (Fig. 4 H). We complemented PQBP1 knockout cells with wild-type or mutant PQBP1 proteins, maintaining them at essentially endogenous levels (Additional file 1 : Fig. S11A-B). As shown in Fig. 4 H and Additional file 1 : Fig. S10C-D, removal of the C-terminus, as in p.Arg214fs and p.Glu183fs, resulted in a decrease in the intracellular condensates of PQBP1. Moreover, hardly any condensates were observed in cells expressing the two PQBP1 mutants p.Arg155* and p.Arg142*. Together, these results demonstrated that, in addition to LoF mutations that lower protein levels, loss of phase-separation-prone regions in dosage-sensitive gene products affects their phase separation process.

Impaired phase separation caused by LoF genetic perturbations causes disturbed phenotypes, which can be restored by rescuing phase separation
The results thus far demonstrated that dosage-sensitive gene products tend to undergo phase separation and that pathogenic variations in dosage-sensitive genes lead to an impaired phase separation process.
To evaluate the effects of impaired phase separation on cellular behavior, we utilized a perturb-seq dataset. This dataset provided single-cell RNA-sequencing readouts after CRISPR-based perturbation of gene expression [ 40 ]. This genome-scale profiling of genetic perturbations enables the systematic assignment of cellular phenotypes to each gene perturbation. To test whether perturbation of phase-separating genes results in more dramatic phenotypic changes, we applied an energy test [ 40 ] that evaluates the global transcriptional changes of each gene perturbation. As shown in Fig. 5 A, the p-values obtained from this energy test for genes with high phase separation scores were significantly lower than those for genes with low phase separation scores. This finding indicated that LoF of phase-separating genes results in dramatic phenotypic changes compared to non-phase-separating genes. To further gauge the severity of perturbing phase-separating genes, we compared the energy-test p-values of known dosage-sensitive genes with those of phase-separating genes. As shown in Fig. 5 B, the p-values of known dosage-sensitive genes were similar to those of known phase-separating genes, as well as to those of genes with high phase separation scores. These results demonstrated that LoF genetic perturbations of phase-separating genes cause transcriptional phenotypes similar to those of dosage-sensitive genes, suggesting that perturbation of phase-separating genes results in a dosage-sensitive-like effect. Considering that most TFs possess phase separation ability, we then attempted to assess whether it is possible to restore the function of phase-separating TFs carrying LoF mutations by rescuing their phase separation abilities. We first verified whether it is possible to rescue phase separation of SOX2 in the knockdown cell line by ectopically expressing chimeric SOX2 proteins with IDRs that promote phase separation. To this end, we generated the FUS-SOX2 (1–128) chimeric protein by connecting the IDR-truncated SOX2 (residues 1–128) downstream of the FUS IDR (residues 1–214). Previous experiments have shown that the FUS IDR phase separates in vitro [ 41 ]. When we complemented SOX2 knockdown cells with SOX2 protein or the FUS-SOX2 (1–128) chimeric protein, we observed similar intracellular puncta compared to control cells expressing the mCherry vector alone (Fig. 5 C and Additional file 1 : Fig. S10A-B, S11C-E). We next sought to determine whether the transcriptional activity of SOX2 depends on IDR-driven phase separation. We used a dual-luciferase reporter assay to detect SOX2-dependent transcriptional activity. In this assay, 26 copies of the canonical SOX2-binding motif were inserted upstream of a promoter driving firefly luciferase. By co-transfecting plasmids driving the expression of SOX2 proteins with the firefly luciferase and Renilla luciferase plasmids, significant luciferase activities could be detected in cells (Fig. 5 D). Since both SOX2 and FUS-SOX2 (1–128) exhibit similar phase-separating abilities, as shown above, we attempted to rescue the transcriptional activity of SOX2 with the FUS-IDR-fused SOX2. Compared with full-length SOX2, the IDR-deficient SOX2-p.Gly129fs protein showed significantly reduced transcriptional activity, but the FUS-IDR-fused SOX2 (1–128) improved luciferase expression, indicating that the FUS IDR rescued the transcriptional activity of IDR-deficient SOX2.
In addition, we used RT-qPCR to analyze the effects of rescuing the SOX2 heterozygous knockdown cell line with the chimeric FUS-SOX2 (1–128) on the expression of endogenous target genes. For most of the twelve SOX2-activated target genes in the TRRUST database [ 42 ], expression was significantly reduced in the heterozygous knockdown cells and in the knockdown cells expressing the IDR-deficient SOX2-p.Gly129fs protein compared with wild-type cells; however, as with wild-type SOX2, the FUS-SOX2 (1–128) chimera improved the expression of endogenous target genes in the heterozygous knockdown cells (Fig. 5 E, Additional file 1 : Fig. S12). This evidence demonstrated the importance of phase separation for the transcriptional function of SOX2, implying a possible mechanism for restoring LoF perturbations by rescuing phase separation abilities.

Dosage-sensitive scores derived from population genetics data are effectively predictive of phase separation
Features of protein sequences and structures that are prone to phase separation have been extensively discussed in previous studies [ 18 ]. Nevertheless, available phase separation predictors are far from perfect, possibly because of neglected principles. The close link between phase separation and dosage sensitivity suggests that phase-separating proteins can be predicted by dosage-sensitive scores derived from population genetics data. To this end, we integrated four dosage-sensitive scores (pLI, LOEUF, pHaplo, and pTriplo) in a logistic regression model and established a phase separation predictor called DosPS (dosage-sensitivity-based phase separation predictor). As shown in Fig. 6 A, the AUC value for DosPS on the test set was 0.8256, outperforming all currently available phase separation predictors. We also attempted to integrate the dosage-sensitive scores with sequence-based phase separation predictors to improve prediction performance. However, the integration of sequence-based predictors such as PLAAC and PScore did not improve the prediction performance of DosPS (Fig. 6 A). To demonstrate the differences between DosPS and the other phase separation predictors, we overlapped the top-scored proteins of the six predictors (Additional file 1 : Fig. S13A). As shown in Fig. 6 B, DosPS top-scored proteins were characterized by a lower percentage of disordered regions, in contrast to the preference of other predictors for disordered regions. To validate the performance of the DosPS predictor, we experimentally tested top-scored candidates. Candidates included EHMT1, TBL1XR1, SLC2A1, and WDR45, which are specific to the DosPS top-scored proteins and exhibit a lower IDR percentage, and PHF6, PBX1, KCNQ2, and FGD1, which are included among the top-scored proteins of other predictors and exhibit a higher IDR percentage. As shown in Fig. 6 C, these proteins exhibited the appropriate cellular localization and formed prominent puncta in both the nucleus and cytoplasm. These results clearly showed that DosPS, featuring dosage-sensitive scores, constitutes an efficient phase separation predictor compared to other available tools that rely solely on primary sequence information.
Discussion In this study, we established a clear link between dosage sensitivity and phase separation. We showed that the products of dosage-sensitive genes possess extremely high phase separation scores. In vitro and in-cell experiments further showed that pathogenic variations in dosage-sensitive genes disturb the phase separation process either through reduced protein levels or through loss of phase-separation-prone regions. Multi-omics data analysis further demonstrated that LoF genetic perturbations of phase-separating genes mimic dosage-sensitive effects. Featuring dosage-sensitive scores closely related to phase separation, the novel phase separation predictor DosPS performed better than other available tools. While previous studies explained dosage sensitivity with stoichiometric imbalance [ 1 , 43 ], we offer a novel theory explaining dosage sensitivity through phase separation. Previous studies failed to consider that, among the genes in yeast that are highly sensitive to overexpression, 75% are not haploinsufficient genes [ 10 ]. Another study proposed the dosage-stabilizing hypothesis, stating that dosage-sensitive gene products lose normal function when underexpressed due to an insufficient amount of protein, but become toxic when overexpressed due to adverse effects on protein homeostasis or imbalance of protein composition in complexes [ 10 ]. However, the molecular mechanisms underlying the dosage-stabilizing hypothesis remain under-researched. Based on our finding that the protein products of dosage-sensitive genes are highly capable of phase separation, we propose a model that explains dosage sensitivity through the concentration-dependent triggering of the phase separation process (Fig. 6 D). Expression of homozygous wild-type genes generates normal protein levels, which are sufficient to trigger phase separation. Deletions or NMD-causing mutations result in a phase separation defect due to reduced protein levels. The loss of phase-separation-prone regions through NMD-escaping mutations reduces the protein’s ability to phase separate, likewise resulting in a phase separation defect. Furthermore, an aberrant increase in the gene copy number of triplosensitive genes results in over-production of protein. According to our model, two consequences of such overexpression are possible. First, aberrant overexpression may result in abnormal hyper-activation of related downstream pathways. Alternatively, abnormal accumulation of the over-produced protein might result in abnormal aggregation of phase-separating proteins. Previous studies have focused on the dosage sensitivity arising from over-expression of phase-separating proteins, suggesting that excessive aggregation of phase-separating proteins can be toxic to cells [ 11 ]. However, in addition to the dosage sensitivity caused by overexpression, we analyzed in depth the dosage sensitivity caused by under-expression of phase-separating proteins. In addition, we systematically discussed the impact of abnormal phase separation on downstream functions. In the phase separation field, the question of which proteins are prone to phase separation has been extensively discussed [ 5 , 8 , 44 , 45 ]. However, previously established phase separation predictors were far from perfect and performed especially poorly for partner-dependent phase-separating proteins. By investigating the relationship between dosage sensitivity and phase separation, we provide a novel approach to predicting phase separation.
We found that dosage-sensitive scores predicted phase-separating proteins with high confidence for both self-assembling and partner-dependent phase-separating proteins. Since dosage-sensitive scores do not depend on primary sequence information, partner-dependent phase-separating proteins, which are characterized by a lower percentage of intrinsically disordered regions, are identified by DosPS as well. Consequently, the percentage of disordered regions in the high-phase-separation-potential proteins predicted by DosPS is significantly lower. Our analysis strongly suggests that previous sequence-based phase separation predictors are biased toward disordered regions [ 46 ]. In comparison, our newly devised approach offers a more reliable avenue for predicting phase-separating proteins, namely by using the dosage dependence of proteins. Our findings provide novel insight into linking the phase separation mechanism with cellular events related to changes in protein levels. While we showed that phase separation represents a potential mechanism of dosage sensitivity, a number of limitations remain to be addressed. Firstly, NMD-escaping mutations in haploinsufficient genes generate truncated proteins that lose phase-separation-prone regions, but such regions are usually not limited to regulating the process of phase separation [ 47 , 48 ]. For example, the key regions for phase separation of SHP2 include the conserved, well-folded PTP domain, which acts as a phosphatase regulating the homeostasis of protein tyrosine phosphorylation [ 47 ]. Loss of phase-separation-prone regions may therefore disturb functional domains or interaction sites that are not relevant to phase separation. Although abnormal phase separation might represent a more general mechanism for dosage sensitivity than currently appreciated theories, other mechanisms may underlie the origin of dosage sensitivity as well. Secondly, the precise relationship between protein concentration and the degree of phase separation needs to be studied in further detail. How to utilize existing experimental data to predict the threshold concentration at which proteins undergo phase separation in cells remains a challenging task. Lastly, while we show that triplosensitivity is closely linked to phase separation, we did not investigate how overexpression of a protein contributes to disease by affecting phase separation. We speculate that overexpression may change the properties of the droplets, for example making them solid- or gel-like, or cause continuous activation of biological processes regulated by phase separation, resulting in disordered cell states.
Conclusions In conclusion, we propose that aberrant phase separation is a biological process associated with the dysfunction of dosage-sensitive genes. We extend the pathogenic mechanism to the abnormal concentration of phase-separating proteins, which closely links diseases to phase separation. In the future, we expect that correcting the abnormal phase separation process will constitute a suitable avenue for the treatment of dosage-sensitive diseases.
Background Deletion of haploinsufficient genes or duplication of triplosensitive ones results in phenotypic effects in a concentration-dependent manner, and the mechanisms underlying these dosage-sensitive effects remain elusive. Phase separation drives the functional compartmentalization of biomolecules in a concentration-dependent manner as well, which suggests a potential link between the two processes and warrants further systematic investigation. Results Here we provide bioinformatic and experimental evidence showing a close link between phase separation and dosage sensitivity. We first demonstrate that haploinsufficient or triplosensitive gene products exhibit a higher tendency to undergo phase separation. Assessing the well-established dosage-sensitive genes HNRNPK, PAX6, and PQBP1 experimentally, we show that these proteins undergo phase separation. Critically, pathogenic variations in dosage-sensitive genes disturb the phase separation process either through reduced protein levels or through loss of phase-separation-prone regions. Analysis of multi-omics data further demonstrates that loss-of-function genetic perturbations of phase-separating genes cause dysfunction phenotypes similar to those of dosage-sensitive gene perturbations. In addition, dosage-sensitive scores derived from population genetics data predict phase-separating proteins with much better performance than available sequence-based predictors, further illustrating the close ties between these two parameters. Conclusions Together, our study shows that phase separation is functionally linked to dosage sensitivity and provides novel insights for phase-separating protein prediction from the perspective of population genetics data. Supplementary Information The online version contains supplementary material available at 10.1186/s13059-023-03128-z.
Supplementary Information
Acknowledgements We gratefully acknowledge Gaofeng Pei from Dr. Pilong Li’s lab at Tsinghua University for luciferase plasmids used in Dual Luciferase Reporter Assay. Review history The review history is available as Additional file 9 . Peer review information Anahita Bishop and Andrew Cosgrove were the primary editors of this article and managed its editorial process and peer review in collaboration with the rest of the editorial team. Authors’ contributions T.L. and Y.L. designed research; L.Y., J.L., X.L., and G.G. performed research; L.Y. and X.Z. analyzed data; L.Y., T.L., J.L., X.L., Y.L., T.C., and X.Z. wrote the paper. Funding This work was supported by the National Key Research and Development Program of China (Grant Nos. 2021YFF1200900, 2018YFA0507504); the National Natural Science Foundation of China (Grant Nos. 32070666, 32170684), and the National Science and Technology Innovation 2030-Major program of “Brain Science and Brain-Like Research” (Grant Nos. 2022ZD0213900, 2022ZD0204900). Availability of data and materials All study data are included in the article and supporting information. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-16 23:45:35
Genome Biol. 2024 Jan 15; 25:17
oa_package/d8/69/PMC10789095.tar.gz
PMC10789120
38194514
INTRODUCTION Characteristics peculiar to the work activity of military police officers contribute to illness, especially cardiovascular diseases, which represent the leading cause of mortality in the world, being responsible for around 17.9 million deaths annually ( 1 ) . Modifiable cardiovascular risk factors (CVRF), such as smoking, excessive alcohol consumption, dyslipidemia, insufficient levels of physical activity, sedentary behavior and excess weight, are associated with the development of these diseases. Furthermore, daily contact with violence, crime, long working hours, fear of death and professional insecurity can generate stress and disorders related to mental health ( 2 , 3 ) . Among CVRF, sedentary behavior, estimated by time spent sitting, reclining or lying down, is associated with all-cause mortality regardless of regular physical activity, and deserves to be investigated alongside physical activity levels. Sedentary behavior can be characterized as any activity that reduces body energy expenditure to values close to resting levels, including activities such as sitting, sleeping, watching television and using the computer ( 4 ) . Military police officers, even exercising a profession that requires good physical conditioning and regular physical activity, may exhibit sedentary behavior, given that the nature of the work involves administrative activities and patrols carried out in a sitting position ( 5 ) . As the military police are one of the main bodies ensuring the safety of society, police officers and their supervisors need to be aware not only of maintaining the troops’ physical fitness ( 6 ) , but also of the risks that sedentary behavior poses to health and the performance of work activities. Regarding the state of the art, a search of the scientific literature in the Cochrane Central Register of Controlled Trials (CENTRAL), PubMed/National Library of Medicine, Virtual Health Library/BIREME and Education Resources Information Center (ERIC) databases used the DeCS/MeSH descriptors ( Polícia /Police OR Police Force; Atividade Física /Physical Activity; Doença Cardiovascular /Cardiovascular Disease; Comportamento Sedentário /Sedentary Behavior; Educação em Saúde /Health Education), in any language, combined with the Boolean operators AND and OR. In the last ten years, no specific studies were found on sedentary behavior in military police officers. National and international research has focused on physical activity and physical fitness in this professional category, as have the few educational programs aimed at analyzing these outcomes. This gap in the literature makes it important to know how sedentary behavior is expressed in military police officers, in order to direct interventions and public policies on healthy lifestyles for this group, which could contribute to improving health and personal and professional satisfaction ( 7 ) . It is noteworthy that sedentary behavior may be associated with clinical and sociodemographic factors, as found in studies with other population groups ( 8 , 9 ) . Based on the above, the present study aimed to verify the association between clinical and sociodemographic factors and time spent sitting in military police officers.
METHOD

Study Design and Place
This is a cross-sectional, analytical study, carried out from August 2022 to December 2022 in all organic units of the Eastern Regional Policing Command (Eastern CPR) of the Military Police of Bahia (PMBA), based in the city of Feira de Santana, namely: Eastern CPR; School Policing; Ronda Maria da Penha; Independent Tactical Police Company (Rondesp/East); 64 th Independent Police Company (CIPM); 65 th CIPM; 66 th CIPM; and 67 th CIPM.

Study Sample and Sample Calculation
The study sample consisted of 432 military police officers from all Eastern CPR units of Feira de Santana, including the categories of enlisted personnel (soldier, corporal, sergeant and warrant officer) and officers (midshipman, lieutenant, captain, major, lieutenant colonel and colonel). All categories had a minimum workload of 40 hours per week. During the daily workday, everyone is instructed to perform some type of physical activity, but there is no supervision of compliance. Periodically, all of the corporation’s professionals, from both categories, undergo a physical fitness test. The sample calculation considered a 5% sampling error (α = 0.05), a 95% confidence interval (1 – β = 0.95) and a prevalence of sedentary lifestyle of 37.25% ( 10 ) , according to previous studies. This sample size was adopted considering that data collection did not occur in a single location, that is, a cluster-type design in which police officers from different CPR units participated in the research. The formula n = N·Z²·p·(1 − p) / [e²·(N − 1) + Z²·p·(1 − p)] was adopted, where n: calculated sample size; N: population size; Z: standardized normal variable; p: true probability of the event; e: sampling error. Thus, a sample of 428 participants was estimated.

Instruments and Data Collection
Data collection was carried out using a form built in Google Forms containing the questions from the International Physical Activity Questionnaire (IPAQ). The IPAQ is a validated instrument, developed by the World Health Organization through the Centers for Disease Control and Prevention, which aims to estimate the usual level of physical activity and sedentary behavior in populations from different countries and sociocultural contexts, and it has been validated for use in Brazil ( 11 ) . Section five of this questionnaire contains the questions related to sedentary behavior (measurements of sitting time) answered by the police officers: “How much time in total do you spend sitting during a weekday?” and “How much time in total do you spend sitting during a weekend day?” Police officers were asked to recall, for a typical day, the time in hours spent sitting at work, at home and during free time, such as resting, reading or watching television, excluding time spent sitting on the bus, train, subway or car. Hours were converted into minutes. Time spent sitting was calculated as follows: time spent sitting during a weekday (Monday to Friday) in minutes × 5, added to time spent sitting during a weekend day × 2, divided by seven. Elevated time spent sitting was defined as ≥ 180 minutes/day ( 12 ) . Additionally, the instrument comprised clinical variables (diagnosis of hypertension, dyslipidemia and coronary artery disease (CAD)) and sociodemographic variables (age, sex, self-declared color, education, marital status, income, number of people who depend on the income and monthly expenses).
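As a small worked example of the sitting-time computation described above, the function below averages weekday and weekend sitting time with the 5:2 weighting and flags elevated sitting time. It is an illustrative sketch; the function and variable names are our own choices, not taken from the study.

```python
def daily_sitting_minutes(weekday_min: float, weekend_min: float) -> float:
    """Weighted daily average over 5 weekdays and 2 weekend days."""
    return (weekday_min * 5 + weekend_min * 2) / 7

def elevated_sitting(weekday_min: float, weekend_min: float) -> bool:
    """Elevated time spent sitting: >= 180 minutes/day on average."""
    return daily_sitting_minutes(weekday_min, weekend_min) >= 180

# Example: 4 h sitting on weekdays, 2 h on weekend days
print(daily_sitting_minutes(240, 120))  # ~205.7 min/day
print(elevated_sitting(240, 120))       # True
```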
Participants were invited to participate in the research after authorization from the Military Police Command. After acquiescence, the Google Forms link was sent to participants’ WhatsApp. One of the researchers attended police units during the week to explain the objective of the investigation and clarify possible doubts regarding filling in the information.

Statistical Analysis
All collected variables were subjected to descriptive analyses. For categorical variables, absolute (n) and relative (%) frequencies were calculated. For numerical variables, the mean, median, standard deviation, quartiles 1 and 3 (which correspond, respectively, to the 25 th and 75 th percentiles) and the minimum and maximum values were calculated. To assess the association between time spent sitting and sociodemographic and clinical variables, hypothesis tests were carried out. For nominal categorical sociodemographic and clinical variables, the chi-square test of independence was used, since the data met the assumptions of this test (expected frequencies greater than 5 in at least 80% of cells and greater than 1 in all cells) ( 13 ) . Statistically significant chi-square or Fisher’s exact tests were followed by analysis of adjusted standardized residuals (Pearson residuals) to identify the categories in which the observed frequencies differed from those expected. Residuals outside the range [–1.96; 1.96] were considered statistically significant ( 14 ) . For numerical or ordinal sociodemographic and clinical variables, the Mann-Whitney test was used. Given the impact of sample size on the p-value ( 15 ) , effect size measures were calculated for all tests. For the Mann-Whitney test, the effect size r was calculated, which can be classified as small (r > 0.1), medium (r > 0.3) or large (r > 0.5) ( 16 ) . For the chi-square test of independence, Cramer’s V effect size was calculated, whose classification depends on the degrees of freedom ( 16 ) . The degrees of freedom for Cramer’s V correspond to the minimum of the number of rows and the number of columns of the cross-tabulation, minus one. The classification is described in Table 1 . In the multivariate analysis, to assess the incidence risk ratio (IRR) associated with time spent sitting, a Poisson regression model with robust standard errors (obtained by the Huber-White estimator) was applied ( 17 ) . This model included time spent sitting as the dependent variable and age, sex, diagnosis of hypertension, diagnosis of CAD and diagnosis of dyslipidemia as independent variables. Poisson regression coefficients with robust standard errors, when exponentiated, yield relative risks. IRRs that do not differ statistically from 1 (which, therefore, include the value 1 in their 95% confidence interval) indicate that that particular independent variable has no impact on the risk of the outcome occurring – in this case, time spent sitting ≥ 180 minutes per day. IRRs statistically greater than 1 indicate an increased risk of the outcome occurring, while IRRs statistically less than 1 indicate a reduction in this risk. For numerical independent variables, the IRR indicates the expected change in risk for each unit increase in the independent variable. For categorical independent variables, the interpretation of the IRR needs to take the reference category into account: the IRR indicates the change in the risk of occurrence of the outcome when participants belong to that particular category versus the reference category.
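Although the study's analyses were run in R, the same robust-error Poisson model can be sketched as follows (shown in Python/statsmodels for consistency with the other snippets in this document). The data frame and column names are placeholders of our own, and "HC1" is one common implementation of the Huber-White estimator, not necessarily the exact option used by the authors.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df is a placeholder data frame with one row per police officer:
#   sitting_180: 1 if time spent sitting >= 180 min/day, else 0
#   age (years); sex ('female' is the reference category);
#   htn, cad, dyslipidemia: 0/1 diagnosis indicators
def fit_irr_model(df: pd.DataFrame):
    model = smf.glm(
        "sitting_180 ~ age + C(sex, Treatment('female')) + htn + cad + dyslipidemia",
        data=df,
        family=sm.families.Poisson(),
    )
    fit = model.fit(cov_type="HC1")  # Huber-White robust standard errors
    irr = np.exp(fit.params)         # exponentiated coefficients = IRRs
    ci = np.exp(fit.conf_int())      # 95% CIs on the IRR scale
    return irr, ci
```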
All analyses were conducted using R software version 4.1.0 ( 18 ) , considering a significance level of 5%.

Ethical Aspects
The research followed the specifications of Resolution 466/12 and Resolution 510/16 of the Brazilian National Health Council, which regulate research involving human beings ( 19 , 20 ) . It also follows the guidelines for research in a virtual environment from the Brazilian National Research Ethics Council, in accordance with Circular Letter 2/2021/CONEP/SECNS/MSA, which provides guidelines for procedures in research with any stage conducted in a virtual environment. The study was submitted to the Research Ethics Committee and approved in August 2022, under Opinion 5,577,350. All those invited to the research were previously informed about the objectives and justifications, as well as the risks and benefits involved in participation. Consent was obtained from each invitee who agreed to participate in the research through the Informed Consent Form (ICF), which was also inserted in Google Forms, preceding the questions.
RESULTS The sample consisted of 432 participants. Participants were predominantly male (82.35%), of black race/color (black and brown) (87.04%), had a head of household with completed higher education (47.69%) and lived with a partner (81.94%). On average, participants were 39.31 years old, had a monthly income of 6.09 minimum wages, had 3.28 people depending on that income and had monthly expenses of R$ 4,596.41 (US$ 919.28). Regarding personal history of cardiovascular risk, most participants did not have hypertension (83.10%) or CAD (86.81%), while 49.07% had dyslipidemia. Age values in the group with time spent sitting ≥180 minutes per day tended to be lower than in the group with time spent sitting below 180 minutes per day (W = 15,676.5; p = 0.020; r = 0.112), an effect size classified as small. There was no statistically significant association between time spent sitting and the other sociodemographic and clinical variables. These results are detailed in Table 2 . Poisson regression indicated that age and sex were factors statistically associated with time spent sitting: the risk of belonging to the group with time spent sitting ≥180 minutes per day was lower among males (IRR < 1) compared to females (reference category), and increasing age was associated with a lower risk of time spent sitting ≥180 minutes per day (IRR < 1). These results are detailed in Table 3 .
DISCUSSION For the vast majority of military police officers, time spent sitting greater than or equal to 180 minutes per day was identified (82.6%), showing exposure to this cardiovascular risk factor in a predominantly young group. In the work environment, officers remain in a sitting position both in the car, while carrying out patrols, and in administrative activities ( 8 ) . During the workday, they spend most of their time sitting, with infrequent peaks of intense activity, carrying weight on their bodies due to their uniforms and protective equipment, such as bulletproof vests and weapons ( 21 ) , which contributes to sedentary behavior and can trigger cardiometabolic diseases ( 22 ) . Considering that around 49.07% had dyslipidemia, a possible aggregation of cardiovascular risk factors was observed in the sample studied, which reveals the need for prevention and control measures. The reduced lipoprotein lipase activity observed in waking behaviors that require low energy expenditure, in the range of 1.0 to 1.5 metabolic equivalents (METs), in a sitting, reclining or lying position, excluding sleeping hours, is associated with increased triglyceridemia, low HDL levels, hypertension and metabolic syndrome, among other conditions ( 23 ) . Furthermore, sedentary behavior is associated with the development of several diseases and with premature mortality, increasing the risk of death by up to 50% ( 5 ) . The military police officer's workplace needs to be considered a space for the development of healthy habits, including less sedentary activities, regardless of the role performed, given that the time dedicated to work activities is at least 40 hours per week. Although police officers are encouraged to carry out physical activity within their working hours, they also need to be the target of actions aimed at combating excessive time spent sitting, lying down or reclining. They should also be discouraged from daily activities that do not increase energy expenditure substantially above the resting level, such as using the computer, working in a sitting position and other screen-based behaviors. In the present study's sample, the police officers most exposed to time spent sitting greater than or equal to 180 minutes per day were female and younger. In this regard, planning strategic programs to encourage mobility, such as standing up every hour spent sitting, can protect them from the negative impacts of sedentary behavior; this should be a priority and gain visibility within military corporations as a whole. For instance, an intervention study with 24 police officers working in a police office assessed a theory-based workplace sedentary behavior intervention and observed improvements in sitting and standing at work, weight loss and team relationships. Furthermore, participants considered the interventions highly acceptable and practicable in everyday life as protective measures against cardiovascular diseases ( 24 ) . The data from this study also revealed the scarcity of operational service among female military personnel, who are increasingly directed only to administrative activities in the military service, favoring sedentary behavior.
The smaller number of women in the study may also reflect the difference between sexes in the availability of vacancies in police recruitment competitions, reinforcing the inequalities inherent to the sexual division of labor, both in the restriction of occupations available to women and in disadvantages in the type of work performed, wages, professional career and working conditions ( 20 ) . Physical activity is intrinsic to the military police profession since, as public security agents, officers must be capable of pursuit, whether in vehicles, on motorcycles, on horseback or on foot ( 5 ) . Thus, the attention of military corporations is focused on physical conditioning, in particular strength and cardiorespiratory fitness, to meet work demands. These corporations need to encourage physical activity not only to condition officers' bodies but to preserve life, also incorporating as a goal the fight against sedentary behavior during the workday. The military corporation's involvement in encouraging police officers to lead a healthy life is essential for workers' health, for the efficient performance of their activities and, consequently, for the quality of the services offered by the institution ( 3 ) . A solid effort to raise awareness among the corporation, politicians and health managers is necessary, aiming at the implementation of specific programs to encourage adherence to healthy lifestyles by military police officers ( 2 ) , especially targeting the most vulnerable groups, such as women and younger officers, and giving visibility to police health promotion ( 8 ) . The results of this study must be interpreted with caution, as the sample reflects the local characteristics of the police officers studied, limiting data extrapolation. Furthermore, although the data collection instrument is validated in the country, it was self-completed by police officers via a digital platform, which may underestimate or overestimate time spent sitting. A set of occupational variables was not explored in the study and deserves attention in future investigations, as it may influence the outcome investigated. The originality of this study is highlighted, as it is the first to investigate sedentary behavior in military police officers; for this reason, it was difficult to compare the results with other studies.
CONCLUSION A high percentage of military police officers were exposed to time spent sitting greater than or equal to 180 minutes per day, especially those who were female and younger. Specific interventions to reduce time spent sitting during work activities are essential. These actions could support future public policies on healthy lifestyles for military police officers, enabling improvements in health indicators and the prevention of diseases.
Extracted from the doctoral thesis: “Nível de atividade física em policiais militares: fatores preditores e protocolo de intervenção de Enfermagem”, Universidade Federal da Bahia, 2023. ASSOCIATE EDITOR: Cristina Lavareda Baixinho ABSTRACT Objective: To verify the association between clinical and sociodemographic factors and time spent sitting among military police officers. Method: This is a cross-sectional study with 432 military police officers from Eastern Regional Policing Command units of the Military Police of Bahia in Feira de Santana. Data collection took place from August to December 2022 through Google Forms using the International Physical Activity Questionnaire. Results: Men predominated (82.35%), race/color was black (87.04%), the head of the family had completed higher education (47.69%) and most police officers had a partner (81.94%). The risk of time spent sitting ≥ 180 minutes per day was lower in males (IRR < 1). Increasing age was associated with a lower risk of time spent sitting ≥ 180 minutes per day (IRR < 1). Conclusion: Older male police officers were less exposed to sedentary behavior. Specific interventions and health policies aimed at combating sedentary behavior are relevant for promoting health and preventing disease.
CC BY
no
2024-01-16 23:45:35
Rev Esc Enferm USP.; 57:e20220089
oa_package/95/47/PMC10789120.tar.gz
PMC10789121
0
Pseudomonas aeruginosa (PA), a Gram-negative pathogen, is a common cause of nosocomial infections, especially in immunocompromised and cystic fibrosis patients. PA is intrinsically resistant to many currently prescribed antibiotics due to its tightly packed, anionic lipopolysaccharide outer membrane, efflux pumps, and ability to form biofilms. PA can acquire additional resistance through mutation and horizontal gene transfer. PA ATP synthase is an attractive target for antibiotic development because it is essential for cell survival even under fermentation conditions. Previously, we developed two lead quinoline compounds that were capable of selectively inhibiting PA ATP synthase and acting as antibacterial agents against multidrug-resistant PA. Herein we conduct a structure–activity relationship analysis of the lead compounds through the synthesis and evaluation of 18 quinoline derivatives. These compounds function as new antibacterial agents while providing insight into the balance of physical properties needed to promote cellular entry while maintaining PA ATP synthase inhibition.
Pseudomonas aeruginosa (PA), a Gram-negative, biofilm-forming bacterium, is one of the leading causes of nosocomial (or healthcare-acquired) pneumonia, surgical site infections, bloodstream infections, and catheter-associated urinary tract infections. Additionally, a recent international study found PA to be the source of approximately 23% of patient infections in intensive-care units. 1 − 3 Cystic fibrosis (CF) patients are particularly susceptible to PA infections, which are the leading cause of death in this population, making treatment of PA infection a standard of care for CF. 2 Despite the prevalence of nosocomial PA infections, treatment options are limited, with the standard treatments being β-lactam and/or aminoglycoside antibiotics. However, multidrug-resistant (MDR) PA strains are on the rise, with the Centers for Disease Control (CDC) reporting that 9% of all PA isolates in the United States in 2017 were MDR. Data from the CDC National Healthcare Safety Network reports that 16.8% of ICU patients and 39% of long-term care patients with ventilator-associated PA infections were resistant to three or more antibiotics. 1 , 4 PA utilizes both intrinsic (efflux pumps, low outer membrane (OM) permeability, biofilm formation, etc.) and acquired (via mutation or horizontal gene transfer) resistance mechanisms to overcome antibiotic action; therefore, the development of new antibiotics that treat MDRPA infections is desperately needed. 5 Since the development of bedaquiline (BDQ), an antitubercular antibiotic, by Johnson and Johnson in the early 2000s, 6 bacterial ATP synthase has been an attractive target for antibiotic development due to the role of ATP synthase in bioenergetics and pH homeostasis. 5 , 7 , 8 Additionally, unlike other bacteria, PA relies on ATP synthase for ATP production even during anaerobic growth, making it an even more attractive target for antibiotic and antibiotic adjuvant development. 7 , 8 ATP synthase ( Figure 1 A) is a membrane-embedded protein complex that harnesses energy from rotation of its multisubunit F 0 domain to synthesize ATP in the multisubunit F 1 domain. Rotation of F 0 is driven by protons moving along their electrochemical gradient. 9 The bacterial F 0 motor is composed of a rotor of 10–15 copies of the c subunit adjacent to subunit a and a dimer of b subunits that form the stator. Each c subunit contains a proton binding site (Asp60 in PA) in the middle of the membrane that is accessed by two aqueous half channels in subunit a . 8 − 11 PA ATP synthase is embedded in the inner membrane (IM) of PA, which is a phospholipid bilayer. A major challenge in the development of small-molecule ATP synthase inhibitors is that the molecules must have physical properties that allow them not only to enter the hydrophobic IM to reach the binding site on the c subunit of ATP synthase but also to traverse the asymmetric, polyanionic lipopolysaccharide OM that encapsulates the cell and avoid expulsion by promiscuous efflux pumps. Recently, we developed a series of quinoline-based inhibitors of PA ATP synthase that mimic BDQ by binding to the proton binding site of the c subunit of PA ATP synthase. 
10 , 11 Of those, only compounds 1 and 2 ( Figure 1 B), which have a 1-(4-(aminomethyl)phenyl)- N , N -dimethylmethanamine off of the quinoline C2 and either a methyl sulfide or benzyl sulfide off of the quinoline C1, respectively, were capable of both inhibiting PA ATP synthase and acting as antibiotics against clinical isolates of MDRPA by successfully crossing the OM. 11 Herein we report a structure–activity relationship (SAR) study of 18 synthetic quinoline analogs derived from compounds 1 and 2 , which resulted in more potent PA ATP synthase inhibitors and a better understanding of the physical properties required to promote antibacterial activity. To explore the SAR profile of compounds 1 and 2 , a series of C2 amine derivatives ( 5 – 22 ) were synthesized via a one-pot, two-step reductive amination reaction starting from either 2-(methylthio)quinoline-3-carbaldehyde ( 3 ) or 2-(benzylthio)quinoline-3-carbaldehyde ( 4 ) in moderate yields ( Scheme 1 ). This series of amines was chosen to probe the effects of the size, rigidity, aromaticity, and lipophilicity of C2 on both ATP synthesis inhibition and antibacterial activity against PA. Compounds 5 – 22 were evaluated for their ability to inhibit in vitro NADH-driven PA ATP synthesis activity in DK8/pASH20 inverted membrane vesicles, which were prepared by expressing PA ATP synthase in Escherichia coli (EC) DK8 (a K-12 strain lacking an endogenously encoded ATP synthase) as previously described, using an end-point luciferin/luciferase assay at increasing concentrations of each compound. 11 When compared to the BDQ binding site on the c subunit of Mycobacterium tuberculosis ATP synthase, the analogous site on the c subunit of PA ATP synthase is less sterically congested. Therefore, compound 2 showed greater PA ATP synthase inhibitory activity (IC 50 = 2.3 μg/mL) than compound 1 (IC 50 = 30 μg/mL) due to the larger C1 benzyl sulfide on 2 compared to the C1 methyl sulfide on 1 . 11 This trend was generally confirmed in this series, as seen in Figure 2 , where compounds with a benzyl sulfide at C1 showed greater inhibition of ATP synthase activity compared with compounds with a methyl sulfide at C1 and the same C2 substitution. Only compounds 17 (SCH 3 ) and 18 (SBn), with 2-(1-methyl-1 H -pyrrol-2-yl)ethanamine substituted at C2, showed similar PA ATP synthase inhibition. Compound 12 , with a C1 SBn and C2 dimethylaniline, showed potent PA ATP synthase inhibition at low concentrations (<4 μg/mL); however, due to poor solubility in the assay medium at concentrations >16 μg/mL, an IC 50 could not be determined. Even at the highest concentrations tested, many of the compounds with a methyl sulfide at C1 showed incomplete inhibition of ATP synthesis. Absolute IC 50 values were determined for these compounds based on the assumption that binding completely inhibits activity, consistent with the mechanism of BDQ, 12 and that partial inhibition indicates partial occupancy of the inhibitor binding site. Of the new C1 benzyl sulfide compounds, compounds 8 (C2 = N , N -dimethyl-1,4-butanediamine), 14 (C2 = 2-(1-ethylpiperidin-4-yl)ethanamine), and 16 (C2 = 2-(1-cyclopentylpiperidin-4-yl)ethanamine) showed the greatest PA ATP synthase inhibition, with IC 50 = 1 μg/mL. The location of the C2 nitrogen on the side chain (i.e., whether it sits closer to or farther from the quinoline in space) did affect PA ATP synthase inhibition, with compounds 8 , 14 , and 16 having the nitrogen a similar distance from the quinoline in space ( Figure 3 ).
We hypothesize that the nitrogen likely interacts with Asp60 in the c subunit binding site, but further studies are needed to confirm this since the structure of PA ATP synthase has not been elucidated. The next most potent PA ATP synthase inhibitors in the series were compounds 2 , 10 , and 22 , with IC 50 values of approximately 2 μg/mL. Compounds 2 and 10 have a longer distance between C2 and the dimethylamine nitrogen compared to compounds 8 , 14 , and 16 , whereas compound 22 has a slightly shorter distance between the C2 and imidazole nitrogens. Compound 6 , which shortens the C2 side chain by one carbon compared to 8 , had slightly decreased PA ATP synthase inhibition (IC 50 = 6 μg/mL). Compounds 18 (IC 50 = 5 μg/mL) and 20 (IC 50 = 3 μg/mL) also have their C2 nitrogen functionality closer to the quinoline core and showed less PA ATP synthase inhibition activity; however, steric bulk around the terminal nitrogen in the C2 group seems to improve PA ATP synthase inhibition when comparing these to compound 6 . Finally, within the C1 benzyl sulfide series, larger, hydrophobic functional groups at the end of the C2 group are well-tolerated, as seen with compounds 14 , 16 , and 20 . As a control for off-target electron transport chain (ETC) inhibition in both the PA ATP synthase assay and the antibacterial assay against PA strains, compounds 5 – 22 were evaluated for their ability to inhibit the EC and PA ETCs ( Tables 1 and S1 ). The assay for PA ATP synthase activity in EC DK8/pASH20 membrane vesicles requires a functional EC ETC, so potent inhibition of the EC ETC by any compound would interfere with the determination of its IC 50 for PA ATP synthase. Only compound 22 (EC ETC IC 50 = 14 μg/mL) inhibited the EC ETC within 10-fold of its measured PA ATP synthase inhibitory activity in DK8/pASH20 membrane vesicles ( Table S1 ). Previously, compound 1 was shown to inhibit the PA ETC with IC 50 = 29 μg/mL, essentially equal to its PA ATP synthase IC 50 (30 μg/mL) in these vesicles. 11 Furthermore, potent inhibition of the PA ETC could be an alternative mechanism of action for the antibacterial activity of the current series. Similar to the observed EC ETC inhibition, only compounds 6 and 22 had PA ETC IC 50 values within 10-fold of their PA ATP synthase IC 50 values. This indicates that the antibacterial activity (described in Table 2 ) of compounds 2 , 6 , 8 , 10 , 14 , and 16 is due to PA ATP synthase inhibition and not PA ETC inhibition. Translating PA ATP synthase inhibition into whole-cell antibacterial activity against antibiotic-resistant PA strains is challenging because, as discussed above, ATP synthase is embedded in the hydrophobic inner membrane. To access their binding target, molecules must traverse the more hydrophilic and anionic outer membrane, creating a need to strike a balance between hydrophobicity and hydrophilicity. To determine this balance, compounds 5 – 22 were evaluated for antibacterial activity against a nonvirulent, biofilm-forming strain of PA (designated PA 9027), 13 an efflux knockout (KO) strain of PA (designated PΔ6), 14 and three MDRPA clinical isolates (ATCC BAA 2108, BAA 2109, and BAA 2110) from cystic fibrosis patients that are broadly resistant to penicillin and cephalosporin antibiotics, tigecycline, and nitrofurantoin and susceptible to quinolone and aminoglycoside antibiotics ( Table 2 ).
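As a concrete illustration of the two quantitative steps used above, extracting an absolute IC 50 from normalized end-point dose-response data and applying the 10-fold window used to attribute antibacterial activity to ATP synthase rather than ETC inhibition, here is a minimal Python sketch; the concentrations, activities, and IC 50 values are hypothetical placeholders, not the measured data.

import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, h):
    # Fractional ATP synthesis activity; assumes binding fully inhibits
    # activity (the absolute-IC50 assumption described above).
    return 1.0 / (1.0 + (conc / ic50) ** h)

# Hypothetical inhibitor concentrations (ug/mL) and normalized activities.
conc = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
activity = np.array([0.95, 0.88, 0.72, 0.52, 0.33, 0.17, 0.09, 0.05])

(ic50_synthase, slope), _ = curve_fit(hill, conc, activity, p0=[2.0, 1.0])

def synthase_selective(synthase_ic50, etc_ic50, window=10.0):
    # A compound's antibacterial effect is attributed to ATP synthase only
    # when its ETC IC50 is at least `window`-fold higher.
    return etc_ic50 / synthase_ic50 >= window

etc_ic50 = 64.0  # hypothetical PA ETC IC50 (ug/mL)
print(f"ATP synthase IC50 ~ {ic50_synthase:.1f} ug/mL (Hill slope {slope:.2f})")
print("selective for ATP synthase:", synthase_selective(ic50_synthase, etc_ic50))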
Generally, as seen with the ATP synthase inhibition, C1 benzyl sulfides were more active than the analogous C1 methyl sulfides, of which only 9 , 11 , 13 , 15 , and 19 displayed weak antibacterial activity against efflux KO strain PΔ6. None of the compounds evaluated could overcome biofilm formation by PA 9027. Compounds 6 , 8 , 10 , and 14 were broadly active against PΔ6 and the three MDRPA strains, with MICs between 16 and 128 μg/mL. Compound 14 was the most potent of the series, with MICs against BAA 2108, 2109, and 2110 of 32, 64, and 16 μg/mL, respectively, and compound 16 had the lowest MIC (8 μg/mL) of the series against efflux KO strain PΔ6 but was inactive against BAA 2110 at the highest concentration tested. Compound 22 also displayed antibacterial activity against PΔ6, BAA 2108, and BAA 2109, but as stated, some of this activity is due to PA ETC inhibition, as was seen previously with compound 1 . C1 benzyl sulfides 12 , 18 , and 20 were inactive against all PA strains. Recently, to aid in the discovery of Gram-negative antibiotics, rules of entry for Gram-negative bacteria, derived from evaluation of EC OM-penetrating drugs, have been established, which are (i) molecular weight <500 g/mol; (ii) cLogD 7.4 between −2 and 0; (iii) ≤5 rotatable bonds; (iv) high polar surface area (average 165 Å²); (v) low globularity; and (vi) presence of a 1° amine or guanidinium. 14 − 16 While these rules do translate to other Gram-negative pathogens like Acinetobacter baumannii , PA has proven to be much more limited in chemical motifs that promote accumulation. 17 Evaluation of the compounds in this series against the entry rules would indicate that none in the series readily cross the OM of PA ( Table S2 ). All compounds have molecular weights below 500 g/mol, but those with higher molecular weights generally show greater PA ATP synthase inhibitory and antibacterial activity ( Figure 4 ). No trend between PA ATP synthase inhibitory or antibacterial activity and globularity can be defined, but all globularities are categorized as low according to the entry rules ( Table S2 ). Unsurprisingly, the more flexible (>5 rotatable bonds) and hydrophobic (cLogP > 2) compounds showed the highest PA ATP synthase inhibitory activity, since the IM is composed of hydrophobic fatty acids, but surprisingly, these also demonstrated the highest antibacterial activity against PA strains ( Figure 4 ) despite the entry rules. Additionally, strong inhibition of ATP synthase alone (i.e., PA ATP synthase IC 50 < 10 μg/mL) did not directly translate to PA antibacterial activity. For example, compounds 12 , 18 , and 20 showed no antibacterial activity despite having physical properties and ATP synthase inhibitory activity similar to those of 2 , 6 , 8 , 10 , 14 , 16 , and 22 . All compounds in the series are nitrogen bases, with the quinoline core, the benzylic secondary amine at C2, and an additional nitrogen functional group at the end of the C2 side chain. As can be seen in Table 3 and Figure 5 , the approximate p K a of the conjugate acid (NH + ) of the C2 side chain varies across the series, with the tertiary piperidines ( 13 – 16 ) and tertiary dimethylamines ( 1 , 2 , 5 – 10 ) being the most basic and the N -methylpyrroles ( 17 and 18 ) and pyrimidines ( 19 and 20 ) being the least basic. The p K a value of this position directly affects antibacterial activity.
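To make the property analysis concrete, the sketch below shows how such descriptors can be tabulated with the open-source RDKit toolkit and compared against the entry rules; cLogP is used as a rough stand-in for the cLogD 7.4 of the rules, and the SMILES is plain quinoline as a placeholder rather than one of compounds 1 – 22 .

from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

def entry_profile(smiles: str) -> dict:
    """Tabulate the physical properties discussed above for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    return {
        "MW": Descriptors.MolWt(mol),                             # rule: < 500 g/mol
        "cLogP": Descriptors.MolLogP(mol),                        # proxy for cLogD 7.4
        "RotBonds": rdMolDescriptors.CalcNumRotatableBonds(mol),  # rule: <= 5
        "TPSA": rdMolDescriptors.CalcTPSA(mol),                   # rule: high PSA
        "PrimaryAmine": mol.HasSubstructMatch(                    # rule: 1-degree amine
            Chem.MolFromSmarts("[NX3;H2;!$(NC=O)]")
        ),
    }

print(entry_profile("c1ccc2ncccc2c1"))  # quinoline core as a placeholder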
For potent PA ATP synthase inhibition to translate into potent antibacterial activity against PA strains, the p K a of the conjugate acid of the C2 side chain needs to be above 6. For example, compounds 12 , 18 , and 20 , which are inactive against all tested strains of PA, with side-chain p K a values <6, have similar PA ATP synthase IC 50 concentrations ( Figure 2 ) and physical properties ( Figure 4 and Table S2 ) as compounds 6 , 10 , 14 , and 22 , which are active against multiple strains of PA, but with side chain p K a values >6. Due to the promising antibacterial activity of lead compounds 1 and 2 , a series of 18 new quinoline analogs were synthesized via a one-pot, two-step reductive amination sequence at C2 starting from 2-(methylthio)quinoline-3-carbaldehyde ( 3 ) or 2-(benzylthio)quinoline-3-carbaldehyde ( 4 ). Once synthesized, each analogue was evaluated for PA ATP synthase inhibition activity, PA and EC electron transport chain inhibition activity, and antibacterial activity against MDRPA clinical isolates. As seen with compounds 1 and 2 , analogs with a benzyl sulfide at C1 of the quinoline generally showed greater inhibition of PA ATP synthase compared to compounds with the methyl sulfide at this position. PA ATP synthase inhibition was also weakly affected by the size, flexibility, and length of the side chain at C2 of the quinoline, with compounds 8 , 14 , and 16 showing the greatest activity. Of the most active PA ATP synthase inhibitors, only compounds 6 and 22 inhibited the PA ETC within 10-fold of their PA ATP synthase IC 50 values, unlike lead compound 1 , which had equipotent PA ETC and PA ATP synthase inhibition activity. This indicates that this series is more selective for PA ATP synthase over PA ETC compared with the lead compounds. C1 benzyl sulfides 6 , 8 , 10 , 14 , 16 , and 22 displayed the most potent antibacterial activity against the panel of PA strains examined, including three MDRPA clinical isolates and an efflux KO PA strain. Antibacterial activity of the most potent compounds was similar against the MDRPA isolates and the efflux KO PA strain, indicating that these molecules are not readily effluxed. However, none of the compounds evaluated was capable of overcoming biofilm formation of the PA 9027 laboratory strain. Lack of direct correlation between PA ATP synthase inhibition and antibacterial activity against PA indicated that OM penetration affects antibacterial activity. Evaluation of the physical properties of the series showed that these compounds do not follow the entry rules for Gram-negative bacteria. Both the antibiotic active ( 6 , 8 , 10 , 14 , 16 , and 22 ) and inactive ( 12 , 18 , and 20 ) analogs with similar PA ATP synthase inhibition activity were more flexible (rotatable bonds ≥8) and more hydrophobic (cLogP > 2.5) than preferred by the entry rules and did not contain a primary amine or guanidinium. Interestingly, the relative basicity of the nitrogen (NH + ) side chain at C2 directly correlated with the antibacterial activity of analogs. The side chains of compounds 6 , 8 , 10 , 14 , 16 , and 22 have a nitrogen with a conjugate acid p K a > 6, whereas those of compounds 12 , 18 , and 20 have a nitrogen with a conjugate acid p K a < 6. 
While this trend needs to be further examined, this work indicates that the physical properties required for IM binding do not have to be incompatible with OM penetration in Gram-negative pathogens as has been seen previously, 18 which will increase the number of biological targets available for antibiotic development in the future.
Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsmedchemlett.3c00480 . Experimental procedures and 1 H and 13 C NMR spectra for compounds 5 – 22 , control experiments for inhibition of the E. coli DK8 pASH20 electron transport chain and the PA electron transport chain (Table S1), and physical properties for compounds 1 , 2 , and 5 – 22 (Table S2) ( PDF ) Supplementary Material Author Contributions † Authors A.P.L.W., C.A.B., A.M.C., A.K., A.S.R., and C.N.K. contributed equally. K.T.W. synthesized compounds 11 , 15 , and 21 and tested compounds in antibacterial activity assays, ATP synthase inhibition assays, and electron transport chain inhibition assays. C.A.B., A.M.C., A.K., and A.S.R. synthesized compounds 5 – 10 , 13 , 14 , and 17 – 20 and tested these in ATP synthase inhibition assays. C.N.K. tested all compounds against MDRPA strains for antibacterial activity. A.P.L.W. synthesized compounds 12 , 16 , and 22 . A.L.W. designed and supervised experiments and analyzed data related to synthesis and antibacterial evaluation. P.R.S. designed and supervised experiments and analyzed data related to ATP synthesis and ETC inhibition assays. K.T.W., A.P.L.W., P.R.S., and A.L.W. wrote the manuscript. The authors gratefully acknowledge the financial support of NIH NIAID Grant R15 AI163474 to A.L.W. and the University of North Carolina Asheville Department of Chemistry and Biochemistry. The authors declare no competing financial interest. Acknowledgments The authors thank Dr. Helen Zgurskaya and co-workers at the University of Oklahoma for providing the PA GKCW120 (PΔ6) strain. Abbreviations ATP, adenosine triphosphate; CF, cystic fibrosis; OM, outer membrane; IM, inner membrane; ETC, electron transport chain; PA, Pseudomonas aeruginosa ; EC, Escherichia coli ; MDR, multidrug-resistant; MIC, minimum inhibitory concentration; IC 50 , inhibitory concentration of 50%; SAR, structure–activity relationship; DMSO, dimethyl sulfoxide; DCM, dichloromethane; TSB, tryptic soy broth; Bn, benzyl; KO, knockout
CC BY
no
2024-01-16 23:45:35
ACS Med Chem Lett. 2023 Dec 26; 15(1):149-155
oa_package/7d/e0/PMC10789121.tar.gz
PMC10789125
38194516
INTRODUCTION Access to technology through mobile devices is increasing every year. A report shows that Brazil reached 258.3 million cell phones in 2022 ( 1 ) . Consequently, access to and downloads of applications (apps) have increased considerably, placing Brazil as the fourth country with the most app downloads, according to the State of Mobile 2022 report ( 2 ) . The popularization of smartphones is considered a technological revolution of great impact. Using mobile technologies, such as apps, for healthcare and information access is a promising form of intervention, considering cost-effectiveness, scalability and high reach. Mobile computing can be applied to various aspects of healthcare, such as remote monitoring and the training of professionals ( 3 ) . The use of these technologies to promote health information and care is defined as mHealth ( 4 ) and contributes to reducing difficulties related to geographic barriers in healthcare and to providing easily understood knowledge. Its potential includes support for clinical diagnosis, decision-making, behavior change, autonomous digital therapy and disease-related education ( 5 ) . The development of good quality health applications that work to change practices is one of the recommendations of the Digital Health Strategy (DHS28) for Brazil. Among the objectives of DHS28 are innovation initiatives, service models, knowledge extraction mechanisms and digital health apps originating from user needs ( 6 ) . Increasingly, studies are being conducted on apps for important health topics such as cancer, sexually transmitted infections, and pregnancy ( 7 – 9 ) . Using mHealth in maternal health is a current reality and covers different stages of the pregnancy-puerperal cycle, offering information about pregnancy ( 10 ) and aspects of childbirth and the postpartum period ( 11 ) . Obstetric complications are also addressed in the apps, such as postpartum hemorrhage (PPH), one of the main causes of maternal morbidity and mortality in the world ( 12 ) . Authors assessed the effect of a training application on nurses’ and midwives’ knowledge and skills for PPH management and neonatal resuscitation, and found that knowledge and skill scores increased significantly after its use ( 13 ) . PPH is a relevant topic in the context of public health. Almost a quarter of maternal deaths worldwide are associated with this complication, which is the leading cause in low-income countries ( 14 ) . The United Nations (UN) emphasizes improving access to technologies and recommends that countries integrate digital health and mHealth into their national health information systems and health infrastructure ( 15 ) . It is important that studies be developed to assess health application quality, as their content may influence users’ decision-making. The rapid increase in the number of smartphone applications makes this assessment increasingly necessary, as it is difficult to identify high-quality applications and the security of their information sources ( 16 ) . Applications aimed at pregnancy are, for the most part, of low quality ( 17 ) and present varied content, however, in a fragmented way, and few present their sources ( 18 ) . When assessing applications, criteria such as appearance, structure, navigation, reliability and content are generally used ( 19 ) . However, assessing mHealth application quality requires specific criteria inherent to their development and content ( 16 ) .
Therefore, in order to assess mHealth applications on PPH, the following research question was raised: what is the quality of the mobile applications on the management of PPH available in the digital stores of the main operating systems? This study aimed to assess mobile application quality on PPH management available in the digital stores of the main operating systems.
METHOD Study Design This is a descriptive and evaluative study, conducted in six stages: 1) Definition of assessment objectives; 2) Establishment of application inclusion and exclusion criteria; 3) Selection of information to be extracted; 4) Search for applications and analysis of the results obtained; 5) Presentation of assessment results; 6) Discussion of results ( 20 ) . This research is premised on carrying out a systematic assessment guided by a validated instrument and following a research protocol for a structured search. Data Collection Site The search for apps was carried out from January to February 2023 in App Store ® (iOS) and Google Play Store ® (Android), using two mobile devices running the aforementioned operating systems: a Xiaomi Redmi Note 10, version 13.0.11 (Android), and an iPhone 7, version 15.7.2 (iOS). The following search terms were used individually, in Portuguese and English: hemorragia puerperal, risk stratification, parto seguro, hemorragia and pós-parto (postpartum hemorrhage, risk index, safe delivery, hemorrhage and postpartum). Selection Criteria Free apps were included that were aimed at healthcare professionals, compatible with the Android and/or iOS operating systems, mentioned obstetric emergencies and/or PPH in their title or description, and covered information about PPH management in their content. Paid apps, those that required institutional login and those that were temporarily disabled were excluded. Two searches were carried out in each digital store by two researchers until the set of included applications was confirmed. Data Collection Access to app information and content occurred by checking the data available in the digital stores themselves and by downloading the apps directly to the mobile devices (cell phones). From there, each application was accessed and its content fully explored by the authors, who examined the information about PPH, the gaps and the way the content was presented. Figure 1 reflects the screening and selection process for applications. During data collection, 1,224 apps were identified using the search terms. Of these, 1,210 were excluded because they did not meet the inclusion criteria. Thus, 14 applications were selected for assessment against the eligibility criteria. Of these, seven were excluded: four were repeated, one was temporarily disabled, one was paid and one required registration. Seven applications were included in the sample: six extracted from Google Play Store ® and one from App Store ® . To assess the apps, a validated instrument developed specifically to assess mHealth apps was used: the Mobile Application Rating Scale (MARS) ( 16 ) . It consists of 19 objective items and 4 subjective items and assesses engagement, functionality, aesthetics, information and subjective quality. Each domain is scored from one to five and yields a mean; an overall quality mean is then obtained, with the subjective domain assessed separately (1 = Inadequate, 2 = Poor, 3 = Acceptable, 4 = Good and 5 = Excellent). To assess information about PPH in the apps, a table was developed consisting of 20 pieces of information related to PPH management, divided into four categories: 1) Definitions/stratifications; 2) Prevention; 3) Diagnosis; 4) Treatment. Each category is made up of five pieces of information, which together are equivalent to 100% of the expected content for the category. Therefore, each item is equivalent to 20%, and, at the end, the percentage of information present is calculated.
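As a concrete illustration of the two scoring schemes just described, here is a minimal Python sketch with hypothetical scores (not the study's data); the item groupings mirror the MARS domains, and the checklist mirrors the 20-item, four-category PPH table in which each item contributes 20% to its category.

from statistics import mean

# Hypothetical 1-5 ratings for the 19 objective MARS items, by domain.
mars_scores = {
    "engagement":    [3, 4, 3, 3, 4],
    "functionality": [5, 5, 4, 5],
    "aesthetics":    [4, 4, 3],
    "information":   [4, 3, 4, 3, 4, 3, 4],
}
domain_means = {d: mean(v) for d, v in mars_scores.items()}
overall_quality = mean(domain_means.values())  # subjective items scored apart

# Hypothetical checklist: True if the app presents the item
# (five items per category, each worth 20% of the category).
checklist = {
    "definitions": [True, True, False, False, False],
    "prevention":  [True, True, True, False, False],
    "diagnosis":   [True, False, False, False, True],
    "treatment":   [True, True, True, True, False],
}
coverage = {c: 20 * sum(items) for c, items in checklist.items()}  # percent
total_pct = 100 * sum(sum(items) for items in checklist.values()) / 20

print(domain_means, round(overall_quality, 2), coverage, total_pct)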
Data regarding PPH management were extracted from official documents from national and international healthcare organizations ( 12 , 22 , 23 ) . Data Analysis and Treatment Data analysis was carried out descriptively and quantitatively, after assessing the app quality and reading and extracting the main information about PPH. From the analysis, the results were discussed according to the MARS quality assessment criteria and based on scientific literature on the subject. Ethical Aspects For the present study, assessment by an ethics committee was not necessary, given its defining characteristics, in accordance with current regulations. Furthermore, the study did not involve the participation of any human volunteers as invited research participants for any of its stages.
RESULTS The most prevalent language was English (n = 3), and one app was developed in Portuguese. One app was customizable in terms of language, with 30 versions available depending on the country. Six apps ran on the Android operating system. Most had last been updated in 2022 (n = 4), and most had no user reviews in the digital stores (n = 5). Two apps had more than 10,000 downloads and one had more than 100,000. The largest app was 54.83 MB and the smallest 11.1 MB ( Chart 1 ). In the MARS assessment, the highest means were obtained in functionality (4.88) and the lowest in engagement, with two apps reaching a mean above 3.0 and one above 4.0. No application achieved an overall quality mean of 5.0, and three achieved means greater than 4.0 ( Chart 2 ). In the subjective quality assessment, one application received a mean of 5.0 points. The ratings given by users to the apps in the digital stores were similar to the MARS quality means. The contents on PPH management ( 12 , 22 , 23 ) were assessed by category, as follows: 1) Definitions/stratifications: PPH (vaginal), PPH (cesarean section), massive PPH, primary PPH, secondary PPH; 2) Prevention: risk factors, antepartum risk stratification, intrapartum risk stratification, preventive care measures (timely umbilical cord clamping, controlled umbilical cord traction, Brandt-Andrews maneuver, uterine massage, mother-child skin-to-skin contact, rational use of oxytocin during labor, not performing the Kristeller maneuver), preventive medication measures (oxytocin 10 IU intramuscularly after birth); 3) Diagnosis: visual estimation, weighing of pads, collection devices, clinical parameters, shock index; 4) Treatment: medication (oxytocin, tranexamic acid, methylergometrine, misoprostol), non-surgical (uterine massage, intrauterine tamponade balloon, non-pneumatic anti-shock garment), surgical (compressive sutures, vascular sutures, hysterectomy, damage control), treatments for other causes (trauma, thrombin and tissue), care procedures (elevating the lower limbs, supply of O 2 , indwelling urinary catheter, monitoring). Chart 3 displays the percentage of information on PPH management present in the applications included in the sample. Of the apps assessed, 71.4% (n = 5) presented less than 50% of the information, and two apps presented less than 30%.
DISCUSSION According to MARS, applications considered excellent (mean 5.0) must offer: engagement (fun, interesting, customizable, interactive, sending alerts, messages, reminders and feedback); good functionality (easy learning and navigation, logical flow and gestural design); pleasing aesthetics (graphic design, visual appeal, color scheme and stylistic consistency); quality information (adequate text, good comprehensiveness, feedback, references, credibility); and a good subjective quality assessment, which involves interest in using and recommending the app. The absence or deficiency of such aspects reduces the score and the overall quality rating ( 16 ) . The mean quality of the applications assessed in this study was 3.88, rated as acceptable. Apps for pregnancy and postpartum have an overall quality mean of 3.06 according to the same instrument, lower than that of health apps in general (3.74). Among the domains, functionality is the best-assessed area in applications and stands out with higher means, while engagement, information and aesthetics have lower means ( 24 , 25 ) . “Safe Delivery”, one of the apps assessed, received the highest quality mean (4.67) and proved to be a dynamic, interesting and interactive application, with explanatory videos, knowledge tests, user input and customization of language and user profile, with content suited to the target audience (engagement). Furthermore, it did not exhibit any malfunction or connectivity problems and had a good layout and graphics (functionality and aesthetics). “ Risco Hemorrágico Obstétrico ”, with a quality rating of 4.07, was developed by researchers from the Universidade Federal de Minas Gerais to carry out PPH risk stratification of pregnant women in the antepartum and intrapartum periods. It has a clean and easy-to-use design, but with few interaction features, and does not allow customization. In the app, risk factors are classified as medium and high risk, without the low-risk stratification recommended by the Pan American Health Organization and the World Health Organization ( 22 ) . “ACOG DII SMI” has broad content, with small functional flaws (some links/buttons do not lead to the proposed content and show an error message) and an overall mean rating of 4.02. It is not very easy to use, requiring many clicks to reach the main result, and offers little interactivity. It was developed with the aim of providing standardized approaches to maternal and child health. Most of its content is presented in text and slide format, with some checklists and few images. These facts explain its lower means in aesthetics and engagement. User interface visual design is one of the important points in app development. Research analyzing the user interface design of 88,861 apps in the App Inventor gallery showed that the majority did not comply with design guidelines and did not have good aesthetics ( 26 ) . Overall, app development requires attention from the developer community. Users of mHealth apps in maternal health report a greater likelihood of using apps that are aesthetically pleasing and have minimal technological barriers ( 27 ) . Regarding the accuracy of the app description, the presence of goals/objectives, the quality and scope of the information covered, the presence of visual information, credibility and the evidence base used, the apps assessed in this study obtained quality means between 3.62 and 4.67, similar to those of other studies ( 25 ) .
Efforts in content development are recommended to improve the quality of health applications so that they can bring about changes in user practices ( 17 ) . Assessments of apps about pregnancy showed that the majority had flaws in the quality of their information, lacking scientific evidence or citation of their content sources, and that the focus remained on functionality. All lacked transparency regarding affiliations, i.e., they did not state whether the app’s development was associated with a public or private institution or was the author’s own ( 28 ) . “Maternal & Newborn Care Plans”, “Postpartum Hemorrhage” and “ GPCs Ginecología y Obstetricia ” were the apps with the lowest engagement means, with scores between 2.2 and 2.4. The low engagement mean attributed to Maternal & Newborn Care Plans may reflect the presentation format of its content, which was plain text with few interactive elements, no images and no personalization. Achieving good user engagement with the technology offered is essential for interventions and health behavior change to be effective. This problem can be addressed by incorporating more customizable, attractive features and options that improve application interactivity, making it easier to use for longer periods of time ( 29 ) . “Postpartum Hemorrhage”, developed for PPH management in a primary center and safe referral to a tertiary center, has a simple design, with almost no user interaction and content presented as text, a flowchart and some images illustrating the topics covered, and was rated “poor” in engagement (mean 2.2). However, its mean functionality was excellent (5.0). With regard to the evidence base assessment guided by MARS (whether the application has been tested/verified by evidence) ( 16 ) , only studies assessing “Safe Delivery” were found, with positive results regarding its effectiveness in improving professionals’ knowledge of PPH management and neonatal resuscitation ( 13 ) . Regarding the PPH topics covered, most apps focused their content on the treatment of bleeding. Guidance flows, medications to be offered and conduct in more serious cases were presented, such as the use of an intrauterine tamponade balloon, compressive sutures and hysterectomy. Information regarding prevention, risk factor screening and risk stratification for PPH was covered less. The list of the main risk factors for PPH was present in the applications “ GPCs Ginecología y Obstetricia ”, “ Risco Hemorrágico Obstétrico ” and “ACOG DII SMI”; however, risk stratification was presented in only two of them. National and international health organizations recommend that risk factors for PPH be investigated in all pregnant women from prenatal care onwards and that risk stratification be carried out so that appropriate preventive measures are taken in each case ( 12 , 22 , 23 ) . Some applications may have presented a low percentage of information on PPH management because of the purpose for which they were intended. For instance, “ Risco Hemorrágico Obstétrico ” was developed solely for PPH risk stratification, which may have led the developers to consider information on definitions and treatment unnecessary. Only one app assessed addressed the shock index (SI), with clear and objective information about its values and the interpretation of its results.
The SI is an early predictor of hemodynamic instability in PPH ( 22 ) and a more consistent predictor than conventional parameters ( 30 ) . It is worth emphasizing that the information most prevalent in the apps (that related to treatment) is of great relevance, as correct and effective treatment minimizes the chances of maternal morbidity and mortality, favoring a good prognosis. However, missing and/or incomplete information (definitions, prevention and diagnosis) reflects a devaluation of measures that can prevent or predict cases of PPH and guide the conduct to be followed. From the above, it is clear that assessments of the effectiveness and general quality of technological tools developed for maternal health need to be implemented to guarantee the security of the information offered. Such tools appear to be a potentially effective strategy for changing behavior, but they need to encompass good engagement, interactivity and images, pleasant aesthetics and minimal technological barriers. The main gap identified in this study was that none of the applications addressed, in a unified way, the information essential for complete PPH management, requiring users to download more than one app. The main limitation of this study was restricted access to some applications that required institutional login or were paid, which did not allow content and quality assessment.
CONCLUSION In quality assessment, the applications achieved acceptable quality. Engagement and aesthetics had the lowest mean ratings. Regarding the extent of information, the majority presented a low percentage of information on PPH in accordance with what is recommended by national and international healthcare organizations. Only one of the seven applications assessed was tested through a scientific study. It is recommended that the applications developed present a quality assessment and prioritize information that meets the target population’s knowledge demand. Good quality apps, with comprehensive content based on good practices and scientific evidence, can have a positive impact on qualified care in obstetric care and professional decision-making, for concrete and effective ongoing education.
ASSOCIATE EDITOR: Rebeca Nunes Guedes de Oliveira ABSTRACT Objective: To assess the quality of mobile applications on the management of postpartum hemorrhage available in the digital stores of the main operating systems. Method: A descriptive evaluative study, carried out from January to February 2023 on App Store ® and Google Play Store ® . The Mobile Application Rating Scale was used to assess quality (engagement, functionality, aesthetics, information and subjective quality). Extraction and assessment of information on postpartum hemorrhage were carried out using a table based on official documents, covering stratification, prevention, diagnosis and treatment. Results: Seven applications were included; of these, three were in English and six had the Android operating system. The quality mean was 3.88. The highest means were for functionality, reaching 5.0 (n = 6), and the lowest were for engagement, less than 3.0 (n = 4). The majority of applications presented less than 50% of the information on postpartum hemorrhage management. Conclusion: The applications assessed achieved an acceptable quality mean and, according to health organizations’ current protocols, did not contain the information necessary for complete postpartum hemorrhage management.
CC BY
no
2024-01-16 23:45:35
Rev Esc Enferm USP.; 57:e202320263
oa_package/1b/db/PMC10789125.tar.gz
PMC10789127
38194515
INTRODUCTION Preterm birth (birth occurring before 37 gestational weeks) is often associated with low birth weight (birth weight less than 2,500 grams). Both conditions are directly related to neonatal morbidity and mortality ( 1 ) , with an impact on infant mortality and repercussions on the child’s health ( 1 – 3 ) . Brazil is among the countries with the highest number of premature births per year in the world, with increasing rates, including recurrence of prematurity among multiparous women ( 1 , 2 ) . It is also noted that 70% of child deaths in Brazil occur in the neonatal period and are linked to prematurity, with around 30 million of these children falling ill during their first days of life ( 1 – 3 ) . In this context, seeking quality health care is urgent, and one of its strategic components is the home visit (HV). The HV is recognized as a potential strategy to promote continued care for children at home, as long as it is structured around comprehensive and collaborative efforts based on educational, humanized and comprehensive care ( 1 , 4 – 6 ) . Furthermore, in Brazil and around the world, there is a growing number of Home Visitation Program initiatives in the context of maternal and child care ( 4 , 5 ) . With regard to the neonatal population in the national territory, the measures adopted in the home environment began with the elaboration of the Humanized Care Standard for Low Weight Newborns – Kangaroo Care, established by the Ministry of Health through Ordinance GM/MS No. 1,683 of July 12th, 2007 ( 7 ) . A better understanding of the needs of the neonatal population, especially premature newborns (PTNB) and low birth weight newborns (LBW), has led national bodies to dedicate themselves to promoting safe, quality perinatal care in which parents and family members are involved ( 7 ) . As a public policy, this was the first step towards establishing home care specifically for the PTNB and LBW population. This measure strengthened safe and responsible hospital discharge, preparation of the family for the return home and monitoring of these newborns at the primary care level ( 7 ) , with the longitudinal interaction between family and health professional understood as a distinctive opportunity for health assessment and the guarantee of supportive care, with the potential to expand autonomy and resilience, especially in parenting ( 1 , 5 , 6 ) . In this context, nurses stand out due to their training in family health care and their basic and expanded skills which, when systematized, provide quality monitoring and support for women, families and children in the transitional moments experienced with the birth of a child ( 1 , 6 ) . Every moment of transition brings change; it is therefore essential to recognize and understand the effects and meanings that the individual identifies and that accompany changes in states of being. Thus, nurses’ interventions at home are targeted, continuous actions that open the way to knowledge of the moments experienced and are capable of promoting benefits to mental health, child and maternal well-being, the encouragement of parenting and family relationships, favoring positive individual and collective responses ( 1 , 6 , 8 ) .
In view of the above, and given the urgency of supporting pregnancy, birth and parenting in the context of premature and low birth weight births ( 1 , 5 ) , the objective was to report the structuring elements of the experience of home visits by nurses to premature and low birth weight newborns.
METHOD Study Design This is a descriptive study of the experience report type, structured on the experience of the nurse authors in developing home visits nested within a doctoral study. Place The experience took place in a city in the center-east of the state of São Paulo, with an estimated population of 256,915 inhabitants in 2021, and in cities belonging to its microregion ( 1 ) . There were 3,503 births in 2020, of which 362 (10.3%) were below 2,500 grams of body weight and 359 (10.2%) were preterm ( 1 ) . The Kangaroo Method is not a focus of care for PTNB and LBW newborns in the city. Population and Selection Criteria Recruitment and invitation to participate were aimed at mothers of PTNB and LBW infants and occurred after the child’s birth, while the woman was still hospitalized in a municipal philanthropic maternity hospital, considering the following inclusion criteria: living in the study city in the interior of São Paulo or in its referenced microregion; being monitored by the Unified Health System (SUS); not experiencing clinical complications; and having a low birth weight, borderline or moderately premature newborn whose discharge was scheduled to occur together with the mother’s. The exclusion criteria were: declared abusive use of psychoactive substances; being homeless or sheltered; and having a child who was a twin and/or had a congenital malformation diagnosed in the maternity ward. Setting The study setting was home visits to mothers of premature and/or low birth weight newborns, focused on expanding accountability, continuity and instrumentalization for home care through maternal participation and empowerment in the process of returning home, with a view to parental care, health education, use of the kangaroo position and identification of elements that enhance maternal self-efficacy, based on relevant scientific evidence. Data Collection The experience described took place through home visits between August 2020 and August 2021. Thirty-one women were invited, of whom seventeen declined to participate for the following reasons: fear of receiving a home visit during the pandemic caused by the Sars-CoV-2 virus, family interference, social noise, maintaining privacy, and the mother’s demands related to household chores and child care. Eight women took part in this home visit program. Each received six visits over four months, each lasting an average of 120 minutes. The purpose of the home visit was to talk about what it was like to care for an at-risk child at home, providing support for this care and encouraging the use of the kangaroo position. All HV were conducted by the first author of the study with the support of a second visitor, all co-authors of this report. Data Analysis and Processing The results are presented based on the experience of the visiting nurses in the home care of premature and/or low birth weight newborns, with reflections and actions guided by a literature review ( 1 , 9 ) and the experiences found in the process of conducting the study. The contents of the home visits were recorded individually in field diaries and synthesized into a single organized, transcribed record. The conduct of the study was based on the theoretical framework of Afaf I. Meleis’ Theory of Transitions ( 8 ) and followed the Consolidated Criteria for Reporting Qualitative Research – COREQ ( 10 ) .
Ethical Aspects This experience report originated from activities developed in a research project that sought, among its specific objectives, to propose a guiding document for nurses’ home visits to mothers of premature and/or low birth weight newborns after hospital discharge. The study was approved by the Human Research Ethics Committee in 2020 under numbers 4,108,812 and 4,138,360, complying with the precepts of Resolutions 466/12 and 510/16 of the National Health Council. Participants were included in the study upon signing the Free and Informed Consent Form (TCLE).
RESULTS Given the intention of providing support through HV to women who are mothers of children born PT and LBW, there was a need to structure the visits. The structuring began with the questions: “What needs are common to the population of women who are mothers of children born PT and LBW with a view to caring for these children? What is the evidence related to their experiences of parenting?” To this end, the scientific literature on the elements that constitute and support home visits aimed at mothers of low birth weight and premature newborns was mapped and published in 2021 ( 9 ) . The essentiality and influence of the approach used by nurses in HV was revealed: listening efforts and support for family reorganization and maternal empowerment, based on the women’s knowledge, beliefs and values, are central and contribute to strengthening the bond with the professional. Thus, women needed the opportunity to narrate, to expose themselves and their real needs, in a context of sensitive listening directed towards dialogue. Recognizing the existence of social constructions regarding particular care in the face of prematurity and LBW therefore needs to be based on dialogue, with attention to the meanings conveyed and their determinants. Furthermore, we understood that the precepts of kangaroo care corresponded to the needs of PT and LBW children, parents and families, which led us to strategically encourage the use of the kangaroo position. The engagement with scientific evidence and collaborative dialogue among us culminated in the proposal of documents guiding HV for women with premature and low birth weight children, namely: “Home visits for families with premature and low birth weight babies” ( Figure 1 ) and “Guiding question strategy for home visits” ( Figure 2 ). Based on these markers, the visiting team and its training were established. To this end, the proponent of the doctoral study invited nurses who, in her perception, shared the precepts presented above and were available to perform HV. The group of visitors was made up of three people: the main doctoral researcher, with lato sensu and stricto sensu professional training in the neonatal area; a nurse beginning stricto sensu training in neonatology; and a nursing student at the time, whose HV experiences are part of the literature mapping mentioned above. The visitors held dialogic circles until they felt able to carry out the practical development of the HV. The conception was anchored in clarity about the structuring elements, regarding both knowledge and attitudinal aspects. In relation to the latter, the proposal was to focus on building a relational environment that promoted narrative, with the visitor introducing a core topic from the instruments only if the process did not reach it spontaneously. Questions raised that were not present in the instruments were welcomed, for example, questions regarding the pandemic caused by the Sars-CoV-2 virus, which brought great concerns and doubts during home visits. HV began with the child’s return home. The first HV occurred as soon as possible for mother and baby, with the intention that it take place within the first three days after discharge, or at most in the first week, following documentary indications ( 11 , 12 ) .
When monitoring a child born at risk, the Kangaroo Method (KM) guiding document recommends that primary care (PC) carry out three consultations in the first week, one of them as a HV, two in the second week, and a weekly appointment from the third week after hospital discharge until the child reaches a weight of 2,500 grams ( 13 ) . Prematurity and low birth weight constitute a profile of neonatal vulnerability with indications for continued care close to PC upon hospital discharge ( 14 ) . The visitors focused on approaching and creating a bond with the mother and her family, with effort and commitment involving sensitive, empathetic listening towards understanding demands and needs, related not only to child care but to their walk through life as a whole. The experience in an intermediate care unit led women who were mothers of premature and low birth weight babies to try to learn as much as possible about how to care for their child, and it was there that they gathered tools for the return home. Our meeting took place when their babies were not yet expected to be discharged; I established contact and, over time, we met during my visits to the hospital care unit. The fact that we saw each other frequently made me understand that the professional-patient bond was established there. This way, they (mothers of premature and low birth weight babies) felt more confident in presenting their demands and finding support in me. [Field note from main visitor]. The understanding was that the totality reverberates in private experiences and vice versa, and that complex crossings exist in this process. When listening, special attention was given to the externalization of feelings, and those close to suffering and hope gained particular attention. It is clear that each phase is driven by a search. The beginning of the return home is marked by anxiety about the child’s well-being, nutrition and weight gain. As these concerns are resolved, new doubts and learning opportunities arise. [Field note].
DISCUSSION To create a bond between nurse and mother, we focused on encouraging narratives about the experience of motherhood with a premature child, revisiting and embracing both projections and what was lived. The hospital discharge process after the birth of a premature, low birth weight child and the first months at home are experienced as difficult moments with dual feelings of happiness and anguish. The birth and care of a premature and low birth weight child bring challenges arising from an unexpected way of caring and affect the mother’s quality of life, as well as her family and social life ( 15 ) . The dialogue unfolding from the narratives established in the HV allowed the verbalization and accommodation of frustrations in the exercise and experience of the maternal role, as well as educational interventions related to conceptions that restricted women’s autonomy. Actions guided and triggered in and by the context of life are powerful for identifying social determinants, beliefs, and individual and social network resources, and for directing interventions towards support, empowerment and autonomy. Home visits, when well established, favor the identification of situations of vulnerability that expand in individual and collective circumstances ( 16 ) , as well as counterpoints to them. In the first HVs, the maternal focus on the ‘correct’ performance of actions that meet the child’s basic human needs (especially hygiene, food and sleep) was notable. The conversations and questions centered on this topic, and the fact that we were at home favored narratives related to the inclusion of the partner/father in this process, with the mother’s clear desire to integrate him, despite gender issues that end up assigning women the centrality of parental experiences. It is thus clear that gender issues permeate motherhood, and dialogues aimed at countering them were recurrent topics in HV, especially after the mothers perceived themselves able to meet the child’s needs. The woman feels exhausted by the demand and considers reaching out to people in her social network, with reflections related to expanding the inclusion of the child’s father. Recognizing and welcoming the partner’s help reduces maternal stress and anxiety, making intervention programs that significantly support the mother crucial ( 17 ) . The recognition of paternal importance, expressed in the partner’s appreciation of activities with the child, makes the father’s insertion in strengthening the parental bond positive and alleviates feelings of maternal anguish and stress. After discharge, the partner’s participation in caring for the child allows fewer moments of insecurity for the mother, demonstrating how much this involvement shapes the baby’s life history and the home environment ( 18 ) . However, for such presence and participation, the partner’s insertion needs to occur from prenatal care onwards and throughout labor and birth; historically, he has been placed on the margins ( 19 ) . The mother weaves a process of reinterpretation of motherhood based on redefining her conception of prematurity and low birth weight and the place of the father and extended family, so that the visiting nurse offers support anchored in the mediation of these elaborations, unique to each woman.
In this process, the mothers experience moments of ‘crisis’ and, given the challenging circumstances for the personal-parental skills they are developing, feel the need for continuous support and more frequent contact with the visiting nurse, an aspect made possible by the adoption of an electronic instant messaging resource. Conversations established through messages provide greater flexibility in moments of exchange and create space for new information and collaboration in moments of uncertainty, strengthening the construction of bonds and the appreciation of assistance from the participants’ perspective ( 20 , 21 ) . Women needed greater proximity for involvement to be established; contact by electronic means was therefore a supporting and facilitating tool in coping with daily difficulties ( 21 ) . Nursing practice through telephone communication or virtual social networks facilitates access to health, informational support and monitoring for mothers of premature babies. Furthermore, it improves hope and perceived maternal self-efficacy ( 22 ) . Increased hope is related to the ability of mothers to reduce their stress and anxiety, as well as to overcome despair, with guidance and an appropriate approach ( 22 ) . Maternal belief in her own skills and abilities is a vital positive psychological factor for coping with and performing through life’s challenges and incidents, such as premature birth, with repercussions on a better quality mother-child relationship ( 22 ) . It is up to the nurse to identify maternal conceptions about the care of premature and low birth weight newborns that restrict the woman and, from then on, to enrich her knowledge, skills and self-confidence to carry out that care, contributing to maternal identity. The nurse brings to the home encounter ideals and reflections that emerge from the relationship established with parental caregivers, creating interventions based on it in alignment with existing scientific evidence. The basis for the composition of care elements is immersed in life history and the processes experienced, as these indicate which paths have already been taken and which remain unknown. Among the elements of care, and as an early stimulus, is the kangaroo position, with positive scientific evidence for mother and newborn ( 23 ) , such as better thermal regulation and physiological stability of the child, promotion of breastfeeding (BF) with expansion of milk production and duration, positive stimulus to neurobehavioral development, and effects on pain conditions ( 23 – 26 ) . The use of the kangaroo position at home was desirable in the construction of the HVs; however, we identified that most mothers had little information or prior knowledge about the position. Few women spoke about it, demonstrating how incipiently Kangaroo Mother Care has been approached in the prenatal care, birth and follow-up of women who carry, give birth to and care for a premature, low birth weight child. Another point to highlight is maternal health, a pressing condition for better child care. In initial meetings, mothers tend to report little about themselves and focus almost exclusively on the health and well-being of their child, setting aside care for themselves ( 6 ) .
However, over time, they report tiredness and begin to talk about themselves, their exhaustion and their needs, creating opportunities to promote reflection on maternal mental health. Interaction with the child is often impaired by maternal stress and anxiety, with feelings of inability to care for the child and parental burnout, a term related to emotional exhaustion from caring for the newborn that can result in distancing within the mother-child dyad. It must be identified and treated early so that the relationship is preserved and there is greater engagement with the care of the PTNB and LBW infant ( 27 ) . The physical conditions of postpartum recovery appear little in maternal reports, despite this being a phase of great physical, mental and emotional change; the mother appears shyly alongside the at-risk birth. Yet mothers of PTNB and LBW infants have greater chances of physical and mental health problems, being more prone to postpartum depression than women who gave birth to full-term babies ( 28 ) . They often take this position, perhaps, due to the insensitivity of health professionals to women, combined with interactions that reinforce the child’s centrality and the maternal duty to carry out care determined by the professional, sometimes immersed in judgments. This context inhibits the establishment of bonds and authentic relationships between women, family and professionals, and restricts the process of maternal autonomy in child care. Throughout the HV, largely due to the dialogical proposal, a growing process of trust and authentic self-disclosure in the meetings became evident, favoring relational mutuality. There is an urgent need for shifts in the practical exercise of HV, in particular replacing the reduced understanding of it as ‘the act of entering a home to obtain and give health information’ with investment in establishing a relational space in the home ( 29 ) to accommodate particular needs. The willingness for collaborative construction is the greatest challenge of HV, as it is not safe to say that its full meaning is clear. The process of constructing respectful care, dependent on the other and driven by their particularity rather than individuality, is essential ( 30 ) . It is therefore a challenging task for the nurse to mobilize the uniqueness of dialogue in the construction of new parental skills, recognizing that transitional periods are surrounded by imbalances, uncertainties and personal and social conflicts ( 9 ) . Adaptations of the home environment to the arrival of a premature, low birth weight child place parental caregivers, especially the mother, in movements of perception regarding the new roles, which are constructed from their cultural insertion and influenced over the years by life contexts that internalize different ways of feeling and seeing oneself as competent and skilled in the actions understood as part of parenthood. Maternal self-efficacy is directly related to the support of parental caregivers in the care of premature and low birth weight babies and, although motherhood suggests that the woman holds responsibility for her child, in premature birth her experiences and knowledge are devalued, and the authority and management of the child’s care are denied to her, despite her being responsible.
Finally, it should be noted that the quality of the home visit is closely linked to the time spent by the professional providing assistance and to the intention behind it. The average duration of the HV in this study was about 120 minutes, considered opportune for the researchers to enter the maternal universe and work towards intersubjective care. This reality proves challenging in the Brazilian scenario, given the growing number of live births in the country, especially high-risk newborns who require home health care, and the small number of professionals dedicated to such work ( 14 ) .
CONCLUSION This report defends the structuring of HV aimed at parental caregivers of children born preterm and with low birth weight, using a dialogical approach to encourage the expression and acceptance of particularities. Furthermore, it was found that, as the bond between visiting nurse and mother is established, prompt contact by electronic means is a supporting resource demanded in current times. The documents created favored the conduction of conversations in HV for this population, which faces great weaknesses in the health services. The professional ‘being’ of the nurse provided the knowledge for ready technical dialogue, and the clarity that care, sensitive listening and collaborative dialogical efforts are the horizon was fundamental for establishing a relational and caring space. In this context, the home support offered is associated, through maternal verbalization, with positive health practices. Even though there is a public health policy structure designed for this purpose that uses home visits, mothers perceive it as a new action not fully incorporated into the population’s reality, despite its widespread theoretical structure. This leads us to understand that care actions carried out at home need to be further developed and implemented with tools that bring the professional closer and direct them to the particularities arising from the experience of motherhood in situations of prematurity and low birth weight.
ASSOCIATE EDITOR: Ivone Evangelista Cabral ABSTRACT Objective: To report the structuring elements of the experience of nurses’ home visits to premature and low birth weight newborns. Method: This is a descriptive study of the experience report type, structured on the experience of the nurse authors in the development of 48 home visits in a city in the state of São Paulo and its microregion, between August 2020 and August 2021, with eight mothers of premature and low birth weight newborns. Results: The guiding documents “Home visit for families with preterm and low birth weight newborns” and “Strategy of guiding questions for home visits” were created and used to promote open narratives from parental caregivers about caring for at-risk newborns, creating a relational space aimed at joint construction. Conclusion: The documents used favored home visits, helping nurses to establish professional bonds and build a relational space through dialogue when conducting their activities in the home environment.
CC BY
no
2024-01-16 23:45:35
Rev Esc Enferm USP.; 57:e20230209
oa_package/e2/14/PMC10789127.tar.gz
PMC10789171
38226044
Introduction Globally, there has been a rapid increase in both the incidence and mortality rates of cancer patients attributed to various risk factors, including population growth, aging, and socioeconomic development. Studies reveal that individuals residing in industrialized nations have a two to three times higher chance of developing cancer compared to those in underdeveloped countries. This is primarily due to differences in life expectancy, educational achievement, wealth, early detection and treatment of cancer, and improved registration. 1 , 2 However, around 70% of cancer-related deaths occur in low- and middle-income countries. Numerous studies and data from cancer registry reports indicated that Iraqi people are at an increased risk for developing cancer. 3 , 4 The incidence rate (IR) of cancer in Iraq has increased from 38.91 per 100,000 people in 1994 to 78.93 per 100,000 people in 2020. 5 The actual cause of this apparent increase is uncertain, at least for a few cancer types. However, the implementation of early detection programs for specific cancers or improved diagnosis and reporting, population aging, lifestyle changes, environmental damage caused by wars, and economic sanctions are probable factors. 3 , 6 Health policymakers must develop programs utilizing epidemiological indices to calculate the disease burden in the community to control, prevent, and treat cancer. 7 The incidence rate, death rate, and population-based cancer survival are three indices that make up a crucial instrument for estimating the burden of cancer. Moreover, variations in these indicators over time can reflect healthcare quality. 8 The mortality-to-incidence ratio (MIR) is an index that assesses the impact of cancer on the community and illustrates how well the healthcare system performs concerning patient care and cancer outcomes. 9 The International Agency for Research on Cancer Registration (IARCR) manual proposed that if cancer registries could not estimate survival directly through comprehensive follow-up of all patients with cancer who had been registered to determine their vital status, the MIR could be used as an alternative indicator of survival. 10 The lack of active monitoring through population-based cancer registries, particularly in developing countries, hinders efforts to build reliable five-year cancer survival estimates. As a result, several studies examined the validity of MIR or MIR complement [1-MIR] as a valuable predictor of cancer survival. In their research, Sunkara and Hebert described the MIR as a helpful indicator for cancer screening and care in colorectal cancer patients. 11 Similarly, Stenning-Persivale et al. reported that the 1-MIR is an appropriate tool for approximating observed five-year survival for the ten types of cancers studied. 12 Ellis et al., on the other hand, stated that the inherent variability in the sensitivity of the MIR to changes in survival and the level of survival at any time since diagnosis between cancers of different lethality invalidates the 1-MIR as a survival measure. 13 The likelihood of a patient surviving cancer is significantly increased by earlier detection and more effective treatments. 14 However, more expensive healthcare is needed for screening tests and more potent treatments. Moreover, public health organizations may become overburdened and unable to offer adequate care as the population increases. 
15 We assumed that countries with low growth rates and higher total health expenditure out of the gross domestic product (e/GDP) would have favorable cancer MIRs, as recent studies on specific cancers have supported this idea. 16 - 18 Iraq is categorized as an upper-middle-income country. Iraq’s economy has suffered decades of political unrest and fluctuating oil prices, resulting in exceptional challenges and damage to the health system. However, over the last decade, Iraq has witnessed some improvements in its health outcomes despite the conflicts. 19 According to World Bank data, per capita health expenditure increased from 173.19 USD in 2012 to 202.31 USD in 2020, 20 and the e/GDP grew from 2.69% in 2012 to 5.08% in 2020. 21 Table 1 shows that the annual population growth rate (%) decreased from 4.5 in 2012 to 2.4 in 2020. To the best of the authors’ knowledge, no previous study in Iraq has used data from population-based cancer registries to estimate the national survival rate of all cancers combined. Therefore, this study was conducted to determine the nine-year time trend (2012-2020) of the MIR for all cancer patients combined in Iraq as an alternative survival measure, and the impact on it of health expenditure, presented as e/GDP (%), and the population growth rate.
Materials and Methods Study Design This is a retrospective, registry-based study that includes data on cancer cases and deaths reported during the period 2012-2020. Data Sources and Collection The data used in this study were obtained through a review of the official Iraqi Cancer Registry (ICR) annual reports, which are publicly available at ( https://moh.gov.iq/?page=35 ). The primary anatomical sites of all cancer types were identified and coded according to the International Classification of Diseases for Oncology, Third Edition (ICD-O-3). The reported data included cancer incidence and mortality rates by sex and type of cancer recorded by the Iraqi Cancer Board, Ministry of Health and Environment, for 2012-2020. They are exclusive to Iraqi nationals and do not apply to expatriates working in Iraq. The data for Iraq’s health expenditure out of GDP (e/GDP (%)) during the studied years were obtained from the World Bank Data. 21 The data for Iraqi population growth (annual %) during the studied years were obtained from the World Bank Data. 22 Definition of Indicators Gross domestic product (GDP): It is “an economic indicator that measures the monetary value of the total goods and services produced within the borders of the country during a specific period (typically one year)”. 23 Health expenditure as a percentage of the gross domestic product (e/GDP (%)) is the percentage of total general government expenditure on health. 24 Population growth rate refers to the ratio between the annual change in the population size and the total population for that year, usually multiplied by 100. 25 Ethical Approval The Ethical Committee of the College of Medicine, University of Basrah, approved the study (Project ID: 030409-007-2023). Statistical Analysis All incidence and death rates were crude rates and expressed per 100,000 persons. The cancer incidence rate for each calendar year of the study refers to the patients diagnosed with cancer in that year, depending on pathology reports. The cancer mortality rates were presented for people certified as having died from cancer in that year. The MIR for all cancer types combined was calculated by dividing the crude mortality rate by the crude incidence rate for all cancer types for each year of the study period and comparing them to the annual population growth rate and e/GDP (%). The 1-MIR was evaluated as a proxy measure for the 5-year relative survival in the same calendar period for all cancer types combined. 11 The median was used to measure the central tendency to obtain an overall assessment of the distribution of the MIR. It was calculated as a total and for males and females separately. The average annual percent change (AAPC) in MIR was computed to evaluate the magnitude and direction of the trends using the National Cancer Institute’s Joinpoint Regression software program (version 4.9.1.0). 26 A simple linear regression analysis was done using the IBM Statistical Package for the Social Sciences (SPSS) for Windows, Version 24.0. (IBM Corp., Armonk, N.Y., USA), taking the MIR as the dependent variable and the annual population growth rate and e/GDP (%) as the independent variables. P values <0.05 were considered statistically significant. Scatter plots were made using Microsoft Excel 2019.
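As a minimal illustration of the calculations described above (a sketch, not the authors' code: only the 2012 and 2020 endpoint rates quoted in this article are used, the variable names are our own, and the full 2012-2020 ICR series would be needed to reproduce the published statistics), the MIR, its complement, and the regression step can be expressed as follows:

```python
# Sketch of the indicator calculations described in the Methods.
# Rates are crude rates per 100,000 population (both sexes); only the
# 2012 and 2020 values reported in this article are shown here.
from scipy import stats

incidence = {2012: 61.69, 2020: 78.93}
mortality = {2012: 30.05, 2020: 26.46}

for year in (2012, 2020):
    mir = mortality[year] / incidence[year]
    # 2012: MIR ~ 0.49 (1-MIR ~ 0.51); 2020: MIR ~ 0.34 (1-MIR ~ 0.66),
    # close to the 0.33 reported; the small gap reflects rounding.
    print(f"{year}: MIR = {mir:.2f}, 1-MIR (proxy survival) = {1 - mir:.2f}")

def fit_mir(covariate, mir_series):
    """Simple linear regression of MIR on a covariate such as e/GDP (%)
    or the annual population growth rate, mirroring the SPSS analysis;
    requires the full yearly series, which is not reproduced here."""
    res = stats.linregress(covariate, mir_series)
    return res.slope, res.intercept, res.rvalue ** 2, res.pvalue
```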
Results The overall incidence rate of cancers during the 9-year study period was 74.38/100,000 population (64.31 for males and 84.71 for females, with a female-to-male ratio of 1.32:1). It increased significantly from 61.69 in 2012 to 78.93 in 2020, with an AAPC of 4.1% (P = 0.002). The overall mortality rate was 24.57/100,000 population (24.74 for males and 21.24 for females). It can be seen in Table 2 that it decreased from 30.05 in 2012 to 26.46 in 2020, with an AAPC of -0.2 (P = 0.960). The overall study period (both sexes) median MIR for all cancers combined was 0.33 (0.38 for males and 0.28 for females). The overall median survival estimate for all cancers combined for both sexes, as reflected by the MIR complement (1-MIR), was 0.67 (67%). It was significantly higher for females than males [0.72 (72%) and 0.62 (62%), respectively, P = 0.003] ( Figure 1 ). No statistically significant decrease was noticed in the MIR over time. It decreased from 0.49 in 2012 to 0.33 in 2020 with an AAPC of -3.1 (P = 0.400). In contrast, the incidence rate increased with time ( Figure 2 ). The MIR estimates were negatively but not significantly associated with e/GDP (R 2 = 0.263, P = 0.158), suggesting that the more resources are allocated to health, the more patients diagnosed with cancer survive. The regression equation is Y = 0.48 - 0.04x: for a 1-unit increase in e/GDP, there is a 0.04-unit decrement in MIR ( Figure 3 ). Figure 4 demonstrates a statistically significant positive association between the annual population growth rate and a higher overall cancer MIR, with an R 2 value of 0.505, P = 0.032 (meaning that the annual growth rate explained 50.5% of the total variability in the MIR). The regression equation is Y = 0.19 + 0.05x: for every 1-unit increase in growth rate, there was a 0.05-unit increment in MIR.
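To make the fitted equations concrete, the covariate values quoted in the Introduction can be substituted into the published coefficients; this is a quick numeric check under the assumption that the reported coefficients are exact, not a re-analysis:

```python
# Substituting the covariate values quoted in this article into the
# reported regression equations (coefficients taken as published).
def mir_from_egdp(x: float) -> float:    # MIR ~ e/GDP (%)
    return 0.48 - 0.04 * x

def mir_from_growth(x: float) -> float:  # MIR ~ annual growth rate (%)
    return 0.19 + 0.05 * x

# e/GDP rose from 2.69% (2012) to 5.08% (2020):
print(f"{mir_from_egdp(2.69):.3f} -> {mir_from_egdp(5.08):.3f}")    # 0.372 -> 0.277
# Population growth fell from 4.5% (2012) to 2.4% (2020):
print(f"{mir_from_growth(4.5):.3f} -> {mir_from_growth(2.4):.3f}")  # 0.415 -> 0.310
# Both covariate shifts predict a lower MIR, in the same direction as
# the observed decline from 0.49 to 0.33 over the study period.
```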
Discussion Understanding population survival is critical both for individuals and for public health. Given the scarcity of comprehensive population survival studies, estimating survival based on the complement of the mortality-to-incidence ratio is an option. 9 This study revealed that during the study period (2012–2020), the overall trend of the MIR did not decrease significantly, with an AAPC of -3.1% (P = 0.400), suggesting an increase in survival for all cancer patients combined. The median MIR of all cancers combined in Iraq for that period was 0.33 for both sexes (0.38 for males and 0.28 for females), giving an overall MIR complement (1-MIR), or proxy 5-year survival rate, of 0.67 or 67% [0.62 (62%) for males and 0.72 (72%) for females]. This is better than that reported for Brazil from 2002 to 2014, which was 52% for males and 56% for females. 27 This difference could be partly explained by the fact that our study looked at a different time frame than the Brazilian study, or it might result from an improvement in the quality of treatment. Our results are comparable to those in Australia. Based on data sourced from the Australian Institute of Health and Welfare Cancer (2020), the overall MIR for all cancers combined for the period 2012–2019 was 0.34 for both sexes, 0.35 for males, and 0.33 for females, giving survival rates of 66%, 65%, and 67%, respectively. 28 The level of completeness of Iraqi death certificates may preclude a meaningful comparison. Nevertheless, the reliability of death registration would be questioned if a condition such as a constant mortality rate were reported. 29 Additionally, the survivorship of diagnosed cancer cases officially registered in the ICR is estimated through cancer patient follow-up and routine review of the relevant death records. 30 Even so, it is still possible that the actual number of cancer deaths was underestimated. Our study found that survival was statistically higher for female than male cancer patients (medians of 72% versus 62%, respectively; P = 0.003). This result is in agreement with what was reported by Zhua Y et al., who indicated that male cancer patients have higher mortality rates and shorter survival times than female patients. 31 The observed differences have complex causes, but they can be attributed to behavioral factors such as smoking and alcohol consumption, delayed diagnosis, sex chromosomes, and sex-biased molecular changes. 32 A negative but not statistically significant linear relationship was found between the e/GDP and the MIR (R 2 = 0.263, P = 0.158). A similar result was reported by Lee et al., who found no statistically significant association between pancreatic cancer MIR and e/GDP. 33 However, Ades et al. observed that the larger the budget spent on health, the fewer the cancer deaths, with a statistically significant correlation between the MIRs of all cancers and e/GDP (%) (r = -0.4726, P = 0.013). 24 Similarly, Batouli et al. found that the cancer MIR in high-income countries (0.47) was significantly lower than that of middle/low-income countries (0.64), with P < 0.001. 34 In high-income countries, total health expenditure showed a statistically significant inverse relationship with the overall cancer MIR (P < 0.001). Better or more frequent screening programs in countries with higher e/GDP lead to increased cancer diagnosis, early detection, and treatment, thus raising the reported incidence and lowering mortality.
33 The annual population growth rate and the MIR were found to have a statistically significant positive relationship (R 2 = 0.505, P = 0.032). The relationship between population growth and the MIR is complex and multifactorial. Longer life expectancy and lower birth rates are associated with an aging population, which affects the extent of cancer incidence. 35 While mortality from noncommunicable diseases such as cardiovascular disease increases with age, the age-related increase in cancer mortality appears to be slowing. 36 Hashim et al. reported that the mortality rate for most cancers stabilized or decreased after the age of 85, particularly for non-hormonal cancers. Whether this represents an organic leveling of mortality rates or a reduction in the validity of cancer registration among the oldest old is debatable. 37 Similarly, Caroli et al. observed that between 1970 and 2015, the age-standardized mortality rates for all cancers combined showed a heterogeneous but widespread decline in their study of the mortality time trends of 17 cancer types in 11 countries. 38 It seems that general senescence, which restricts cell proliferative potential and the angiogenesis necessary for tumor growth, reduces the severity of cancer in old age. 36 Factors that could influence the patterns of cancer mortality among elderly patients include the presence of comorbidities, less intensive screening, less aggressive treatment, disease misclassification, and alterations in underlying risk factors such as hormones. 37 There are a few limitations to consider in this study. Firstly, some incompleteness in the death registries cannot be completely ruled out. Secondly, some inaccuracies about the cause of death may occur, particularly among the elderly. Most cancer patients do not die as a result of their disease, and for those who do die, the duration of survival varies widely. 39 Despite these limitations, this is the first study to establish the MIR of cancers and its 9-year trend in Iraq, and it offers a distinctive viewpoint on the relationship between the cancer MIR and e/GDP and the annual population growth rate. The MIR is a simple and quick indicator that can provide important information on the local impact of cancer and can be applied as a relative marker of cancer care and of the performance of a country’s overall health system, 11 even though its exact role is still debatable, as it can never replace survival data from cohort studies. 13
Conclusion In line with the findings of previous studies in some countries, 40 , 41 the results of this study showed that the incidence of cancer in Iraq during 2012-2020 increased while mortality rates decreased. As indicated by the MIR and its complement (1-MIR), the proxy five-year survival rate in Iraq is improving with time. Females showed significantly better cancer outcomes than males. High health expenditure as a percentage of GDP favorably affected overall cancer survival, though this relationship was not statistically significant, whereas a low population growth rate was significantly associated with better cancer patient survival. The findings of this study could help policymakers evaluate current laws and develop effective cancer intervention strategies. Future cohort survival analyses are required to assess the reliability of the MIR in predicting the five-year survival of cancer patients in Iraq.
Background: Cancer continues to be a significant worldwide health concern with substantial mortality. The cancer mortality-to-incidence ratio (MIR), a proxy measure of observed five-year survival, can serve as a valuable indicator of cancer management outcomes and of healthcare disparities among countries. This study aims to determine the MIR trend for all cancers combined among Iraqi citizens during 2012-2020 in relation to health expenditure as a percentage of the gross domestic product (e/GDP (%)) and the population growth rate. Methods: The study used the Iraqi Cancer Registry annual reports for cancer data and World Bank data for health expenditure and population growth. Simple linear regression analysis examined the relationship between health expenditure, growth rate, and the MIR, while joinpoint regression analysis examined the trend over time. The Ethics Committee of the College of Medicine at the University of Basrah approved the study. Results: An increasing trend in crude incidence rates for all cancer types combined was seen, with a decrease in mortality rates from 2012 to 2020 in both sexes. A non-statistically significant reduction in the MIR was found, with an average annual percent change (AAPC) of -3.1% (P = 0.400). The MIR was lower among females than males, a statistically significant difference (P = 0.003). High health expenditure, presented as e/GDP (%), was associated with a favorable cancer survival rate, but this was not statistically significant (R 2 = 0.263, P = 0.158). In contrast, a low growth rate was significantly associated with cancer patients’ survival (R 2 = 0.505, P = 0.032). Conclusions: As indicated by the MIR and its complement (1-MIR), the proxy five-year survival rate in Iraq is improving with time. Although not statistically significant, high health expenditure favorably affected overall cancer survival. A low growth rate, on the other hand, significantly improved cancer patients’ survival.
Source of Funding None. Conflict of Interest None of the authors declares a conflict of interest.
CC BY
no
2024-01-16 23:45:36
Qatar Med J. 2024 Jan 15; 2023(4):38
oa_package/15/0f/PMC10789171.tar.gz
PMC10789179
38226123
Introduction Chronic eosinophilic pneumonia (CEP) is an uncommon pulmonary disease characterized by dyspnea and cough accompanied by bilateral pulmonary infiltrates and peripheral eosinophilia. It typically presents in middle-aged women, often nonsmokers, with a history of atopic diseases such as asthma or eczema, and is estimated to account for up to 2.5% of all interstitial lung diseases [ 1 ]. Although sputum analysis is often unrevealing, bronchoalveolar lavage (BAL) with eosinophilia of >25% is diagnostic. Lung biopsy is usually unnecessary and reserved for cases with a questionable diagnosis. Often initially mistaken for pneumonia, CEP remits rapidly with systemic steroid treatment but has a high rate of recurrence [ 2 ]. We present a patient with classic CEP in terms of epidemiology and clinical presentation. However, the onset of CEP appears to correlate with the initiation of a naltrexone-bupropion combination medication, used by the patient to assist with weight loss. A literature review reveals rare prior case reports of CEP associated with intramuscular naltrexone [ 3 , 4 ] and of eosinophilia associated with bupropion [ 5 ]. This case is notable for two reasons. The first is to establish a possible correlation between the use of naltrexone-bupropion and the development of CEP, as it would be valuable to identify other such cases in the future. The second is to emphasize the importance of risk analysis: the diagnosis could have been established with BAL alone, yet our patient underwent a concurrent lung biopsy and suffered significant hemoptysis leading to intubation with ICU transfer and prolongation of the hospital stay.
Discussion The case described above is a classic presentation of CEP. Though a relatively rare condition, CEP typically presents with persistent dyspnea, dry or productive cough, fever, and night sweats. Because of the disease’s prolonged course and rarity, CEP often takes months of symptoms before a diagnosis is made. The physical exam is variable, with one-third of patients demonstrating wheezing and another one-third having crackles. Chest radiograph demonstrates bilateral pulmonary opacities, often in the upper lobes and near the periphery, described as a “photographic negative of pulmonary edema” pattern. These opacities may even be migratory [ 1 ]. The first hint at CEP in this case came from peripheral eosinophilia on CBC. BAL, which almost exclusively reveals >25% eosinophils, is diagnostic. Sputum analysis is usually unhelpful and is often negative for eosinophils. Lung biopsy is usually unnecessary but can confirm the diagnosis of CEP in the rare instance of an unrevealing BAL; the biopsy is characterized by interstitial and alveolar eosinophils and histiocytes [ 1 ]. These features are consistent with our case’s BAL and lung biopsy findings. In our patient, a lung biopsy was performed concurrently with BAL, although the BAL findings ultimately were sufficient to confirm the diagnosis of CEP. In retrospect, the lung biopsy, though also supportive of CEP, may not have been entirely necessary. This is a very important point to consider, as our patient suffered post-biopsy bleeding and hypoxemia requiring intubation. Our patient represents the typical demographics of CEP, which has a female-to-male ratio of 2:1 and often presents in the fourth or fifth decade of life. Patients who suffer from CEP are often nonsmokers but have atopic conditions such as asthma or eczema [ 1 ]. The differential diagnosis includes fungal infections (e.g., aspergillosis) and vasculitis (e.g., eosinophilic granulomatosis with polyangiitis, formerly Churg-Strauss). Chest radiographs may resemble cryptogenic organizing pneumonia (COP), but COP does not show eosinophilia on BAL and has a slower recovery compared to CEP [ 1 ]. It is also important to note that CEP has some similarities to acute eosinophilic pneumonia (AEP), with key differences in epidemiology and laboratory findings. Unlike CEP, AEP is characterized by rapid symptom onset (<1 month) and commonly occurs without peripheral eosinophilia [ 1 ]. AEP is similar to CEP in that patients are often atopic, chest X-ray shows bilateral infiltrates, BAL is diagnostic with >25% eosinophils, lung biopsy is not usually necessary, and the prognosis is excellent with rapid response to systemic steroid therapy [ 1 , 2 ]. AEP is frequently mistaken for pneumonia and often treated like acute respiratory distress syndrome (ARDS) until eosinophilia is revealed on BAL [ 2 ]. Whereas CEP often presents in a smoldering fashion in nonsmokers and often recurs after steroid discontinuation, AEP is strongly correlated with smoking or other inhalational exposures. Some evidence suggests that AEP often resolves following smoking cessation or removal of the exposure - in some cases even without steroid treatment - but rapidly recurs upon re-exposure [ 2 , 6 - 7 ]. Given the correlation of our patient’s symptoms with the start of naltrexone-bupropion treatment, this case is suspicious for a drug-induced etiology. A literature review is unrevealing for prior cases of eosinophilic pneumonia associated with the naltrexone-bupropion combination.
Drugs that have an established connection to eosinophilic pneumonia include NSAIDs, methotrexate, cocaine, and antibiotics such as ampicillin and nitrofurantoin [ 1 ]. Cases of eosinophilic pneumonia have also been reported with some cephalosporins, including ceftaroline, cephalexin, and cephradine [ 8 ]. Given that our patient was treated with antibiotics on several occasions for pneumonia prior to diagnosis, it is prudent to consider the possibility of a medication effect. However, it appears that her symptoms began after starting naltrexone-bupropion and prior to antibiotic treatment, suggesting that antibiotics were not the culprit. Furthermore, her symptoms would briefly resolve following antibiotics and steroids and worsen shortly thereafter, lending further suspicion to a non-antibiotic trigger. A few cases of eosinophilic pneumonia have been reported with the use of long-acting intramuscular naltrexone. In one study, long-acting naltrexone (380 mg or 190 mg in microspheres) was used to treat alcohol dependence, given as an intramuscular injection every four weeks. Of 205 patients in the higher-dose group, there was one reported case of eosinophilic pneumonia and one case of interstitial pneumonia. This appears to be the first reported instance of these adverse effects with naltrexone microsphere use [ 3 ]. Another case describes eosinophilic pneumonia that developed in a 59-year-old man one month after the start of intramuscular naltrexone for alcohol dependence, with recovery following a change to oral naltrexone [ 4 ]. At least two other cases of intramuscular naltrexone-induced AEP have been described [ 9 , 10 ]. A few cases of eosinophilia have been reported with bupropion treatment. In one example, a 48-year-old woman developed a dry cough and myalgias 19 days after starting bupropion (150 mg daily) for depression. CBC demonstrated eosinophilia, with a normal pre-treatment CBC. The authors cite at least two prior cases of eosinophilia associated with bupropion. Hematologic reactions to bupropion are relatively rare, the most common of which is leukopenia. In this case, the patient’s symptoms and eosinophilia resolved following discontinuation of bupropion [ 5 ]. In the setting of a dry cough, one may speculate whether there was an eosinophilic pneumonia component to this drug reaction, but the report does not mention any abnormal imaging studies or invasive testing. A literature review at the time of this case report did not surface any prior link between combination naltrexone-bupropion and eosinophilic pneumonia; this association appears to be novel. However, given the rare prior instances of eosinophilic pneumonia with microsphere naltrexone and of eosinophilia with bupropion, it is possible that either of these medications, alone or in combination, contributed to the development of this disease in our patient. It would be interesting to observe whether future cases of CEP may be linked to this drug combination. Our patient was managed with systemic steroids, inhaled steroids, and discontinuation of the suspected offending drug, naltrexone-bupropion. The high recurrence rate of CEP can lead to steroid dependence, prompting research into other possible treatment options. One case report describes a woman who maintained remission on inhaled steroids following systemic steroid cessation [ 11 ], but further research is necessary to determine whether inhaled steroids contribute to remission maintenance.
A few case reports describe the use of the anti-immunoglobulin E (anti-IgE) antibody omalizumab to treat steroid-dependent eosinophilic pneumonia [ 12 , 13 ]. In some cases, patients were able to maintain disease remission on inhaled corticosteroid/long-acting beta-agonist inhalers following omalizumab treatment [ 14 ]. These cases generally involved patients with atopic features such as asthma. However, research suggests that IgE does not directly lead to mast cell degranulation; rather, the inflammatory signaling pathways enhanced by IgE lead to increased cytokine production and prolongation of mast cell life, which may contribute to an increased systemic mast cell presence. This mechanism appears to be independent of allergen cross-linking [ 15 ].
Conclusions This case suggests that there is still much to learn regarding the underlying pathophysiology of CEP, and management options will likely evolve as the scientific community learns more about this disease. CEP should be considered when a compatible clinical presentation is seen in the context of medication use.
Chronic eosinophilic pneumonia (CEP) is an uncommon pulmonary disease that presents with bilateral pulmonary infiltrates accompanied by peripheral and bronchoalveolar lavage (BAL) eosinophilia. Recovery with systemic steroids is rapid, but recurrence is frequent. We present a case with the classic presentation of CEP that appears to be related to the weight loss medication naltrexone-bupropion. This case is unique in that this drug combination does not appear to have an established link to CEP, though the literature reveals a possible association with its individual components. Understanding the mechanism underlying this link may help to better understand CEP as a disease process.
Case presentation A 42-year-old African American woman presented to the emergency department with a one-week history of exertional dyspnea and a non-productive cough. Over the previous six months, she had been diagnosed with recurrent episodes of pneumonia, treated on an outpatient basis with antibiotics, steroids, and inhaled bronchodilators. Her symptoms would temporarily improve and then recur. Five days before presentation, she had again been diagnosed with pneumonia in an urgent care clinic and started on cefdinir. She had been otherwise healthy, with no comorbidities except a history of iron deficiency anemia. Further questioning revealed that she had started taking an appetite suppressant, naltrexone/bupropion, shortly before the repeated episodes of pneumonia began. She denied any recent lifestyle changes, did not smoke, and worked as an occupational therapist. Vital signs were notable for mild tachycardia, tachypnea, and hypoxemia, which improved on supplemental oxygen. Physical exam revealed right-sided crackles and rhonchi but was otherwise unremarkable. A complete blood count (CBC) revealed leukocytosis with eosinophilia (Table 1 ). Despite her history of iron deficiency anemia, she was not anemic on presentation. She remarked that her hematologist had noted elevated eosinophils on her most recent CBC, but no cause was identified at that time. Computed tomography (CT) angiogram of the chest was negative for pulmonary embolism but revealed bilateral airspace disease with mediastinal and hilar lymphadenopathy. Consolidative changes are indicated by bold blue arrows in the CT chest imaging (Figures 1 - 2 ). She was started on empiric antibiotics for presumed community-acquired pneumonia. She underwent bronchoscopy with BAL and transbronchial biopsy of the right upper lobe but experienced significant hypoxemia because of bleeding during the biopsy and required intubation. Following the biopsy, she received systemic steroid therapy. BAL revealed eosinophilic predominance (75% of nucleated cells), with cultures negative for bacteria, acid-fast bacilli, and fungi. Lung biopsy revealed eosinophilic pneumonia with interstitial eosinophils and chronic inflammatory cells, and was negative for bacteria on culture. Pathology was negative for parasites and did not indicate vasculitis. A set of blood cultures was negative. The morning following the biopsy, the patient was extubated and did not experience hemoptysis or further hypoxemia. She continued to improve clinically, with resolving consolidation on chest X-ray. She was discharged on hospital day four on oral prednisone, initially dosed at 0.5 mg/kg/day. The initial dose of prednisone was continued until two weeks after the complete resolution of symptoms and chest radiography abnormalities, and prednisone was then gradually tapered off over the following four weeks. She was advised to discontinue naltrexone/bupropion, the suspected trigger of her eosinophilic pneumonia. Subsequent follow-up revealed sustained resolution of symptoms and peripheral eosinophilia, even after systemic steroids were discontinued.
CC BY
no
2024-01-16 23:45:36
Cureus.; 15(12):e50621
oa_package/13/de/PMC10789179.tar.gz
PMC10789181
0
Corrigendum on: Association between exposure to tobacco information through mass media, smoking households and secondhand smoke exposure in adolescents: Survey data from South Korea By Wenbin Du 1 , Gaoran Chen 1 , Minmin Gu 1 , Huixin Deng 2 , Won G. Choi 3 Tobacco Induced Diseases, Volume 22, Issue January, Pages 1-11, Publish date: 5 January 2024 DOI: https://doi.org/10.18332/tid/175705 In the corrected version of the article, the last author’s surname was corrected from Cho to Choi. This change has also been made in the online version.
CC BY
no
2024-01-16 23:45:36
Tob Induc Dis. 2024 Jan 16; 22:10.18332/tid/178472
oa_package/f4/42/PMC10789181.tar.gz
PMC10789182
0
INTRODUCTION Chronic obstructive pulmonary disease (COPD) is a common lung disease, and its incidence is increasing all over the world 1 . A cross-sectional study reported that the prevalence of mild cognitive impairment (MCI) was higher in COPD patients than in non-COPD patients 2 , and a recent study reported that the incidence of cognitive decline in COPD patients was about 54% 3 . Cognitive decline in COPD patients may reduce treatment efficiency, not only affecting physical function but also increasing mortality and disability 4 . Smoking is a common risk factor for COPD; inhaling the harmful substances contained in tobacco reduces lung function and injures the small airways and alveoli. Evidence suggests that the population-attributable risk for COPD from smoking is 51% globally. In addition, continued smoking in COPD patients aggravates the damage to lung function 5 . Moreover, smoking is independently associated with cognitive decline 6 . Several early studies reported that acute nicotine intake can improve cognitive function through neural excitation 7 , 8 . However, a recent study indicated that smoking is a common risk factor for cognitive decline 9 , and the prevalence of cognitive impairment in smokers is significantly higher than that in non-smokers. One possible mechanism is that long-term nicotine intake can damage blood vessels, increase oxidative stress, and lead to cognitive decline 10 . Therefore, the high prevalence of cognitive impairment in patients with COPD may be related to smoking. However, most previous studies on smoking concerned cigarettes, and few have examined the relationship between sun-cured tobacco and cognitive decline. Sun-cured tobacco is a handmade tobacco cultivated by farmers; it can be divided into different types according to its shape, color, cultivation methods, and so on 11 . Many people prefer the taste and natural ingredients of sun-cured tobacco to those of cigarettes. The diversity of natural conditions in China provides good conditions for the cultivation of sun-cured tobacco, and many provinces, such as Sichuan, Jiangxi, Guangdong, and Heilongjiang, are rich in it. Different natural conditions, cultivation techniques, and sun-curing methods have resulted in various types of sun-cured tobacco. Sales of sun-cured tobacco have become a major source of the local economy, and sun-cured tobacco is very popular, especially among the elderly 11 . A study confirmed that there were significant differences in composition between cigarettes and sun-cured tobacco 12 . Furthermore, the results of a 5-year prospective study in Shifang City, a well-known sun-cured tobacco-producing area in China, showed that the proportion of male smokers using sun-cured tobacco was 75.75%, with those aged 65–74 years constituting the largest group 13 . Given the popularity of sun-cured tobacco and its differences from cigarettes, examining its use is of practical significance. We speculated that sun-cured tobacco and cigarettes may have different cognitive effects on patients with COPD; however, the differences between them have not been examined. Moreover, the cognitive decline of smokers is related to smoking status, duration, and type of smoking 14 .
For COPD patients, existing research has mainly explored the effects of smoking status and duration on cognitive decline; few studies have focused on the effects of different smoking types on cognitive function. COPD patients perform poorly on cognitive tests evaluating attention, memory, language, and executive function 15 , while smokers often suffer from memory loss and impaired verbal fluency (reflecting executive function). In view of this, this study explored the association of sun-cured tobacco and cigarette use with the decline of global and specific cognitive functions (such as verbal fluency and memory) in COPD patients.
METHODS Study design This was a cross-sectional study, conducted from March 2022 to February 2023 in Sichuan Province, China. Sichuan Province is one of the provinces rich in sun-cured tobacco; the planting scale of sun-cured tobacco in Sichuan is large, and its output and quality are among the best in the country. Sample size We used the following formula to calculate the sample size 16 : n = Z² × p(1−p)/d², where Z = 1.96 for a 95% confidence interval, p is the estimated prevalence of 54% 3 , and d is the tolerance error, taken as 10% of p, giving d = 0.1 × 0.54 = 0.054. The required sample size was n = 327, and it was increased by 10% to account for the non-response rate and sampling error, giving a target sample of 360 individuals (a worked check of this calculation is sketched at the end of this section). In total, 401 participants were included in this study. Participants The inclusion criteria were: 1) diagnosed with COPD according to the Global Initiative for Chronic Obstructive Lung Disease (2020 REPORT) 17 ; 2) aged ≥40 years; and 3) having clear consciousness and being willing to give written informed consent. The exclusion criteria were: 1) unable to complete the investigation due to serious illness; 2) having a mental illness; 3) having other diseases, including stroke and dementia; and 4) smoking both cigarettes and sun-cured tobacco. The participant selection procedure is shown in Figure 1 . Recruitment method This study recruited participants in two stages. The specific recruitment procedures and precautions are detailed in another study 18 . Smoking and assessment A questionnaire was used to assess the smoking status of participants. Participants who never smoked or had smoked <100 cigarettes in their lives were placed in the non-smoking group. Participants who smoked at least one cigarette or one sun-cured tobacco product per day were assigned to the cigarette-smoking group or the sun-cured tobacco group, respectively. Smoking quantity (number of daily cigarettes) and duration of smoking (years of smoking) were also evaluated 19 . Demographic characteristics (such as age, gender, education level, marital status, living arrangement, monthly household income, drinking habit, and family history of dementia) and factors associated with COPD [such as the lung function index percent predicted FEV1 (FEV1% pred) and smoking] were obtained at baseline from face-to-face interviews. Cognitive and lung function assessment Cognitive function assessments were conducted by standardized, trained, and certified researchers and included measures of global cognitive function, memory, and verbal fluency. Global cognitive function was evaluated with the Beijing version of the Montreal Cognitive Assessment (MoCA), and a cut-off point of <26 was taken as the standard to identify an individual with MCI after excluding patients with dementia. The MoCA includes eight cognitive domains: orientation, language, working memory, concentration, short-term memory, attention, executive function, and visuospatial ability 20 . The maximum score of the MoCA is 30 points, and a higher score represents better cognitive function 20 . Ten unrelated words were used to evaluate memory, covering immediate and delayed recall. The total score for the two items was 10, and memory function was scored as the sum of the two items; higher scores indicated better memory function. Both memory tests have good construct validity and consistency 21 . An animal fluency test was conducted to evaluate verbal fluency. Participants were required to name as many animals as possible within 60 s.
The number of animals named by a participant was the score for this function. The verbal fluency test has well-documented reliability and validity 22 . The percent predicted FEV1 (FEV1% pred) is an index of the severity of lung function impairment, and GOLD 1 to GOLD 4 represent mild to extremely severe impairment 18 . Statistical analysis We used SPSS Statistics 23.0 (IBM Corp, Armonk, NY, USA) for data analysis. Continuous variables were described as mean ± SD, and categorical variables as frequencies and percentages. We used ANOVA for continuous variables and the chi-squared test for categorical variables. Binary logistic regression was used to analyze the relationship between smoking type and MCI. Multivariable linear regression was used to examine the independent associations of the non-smoking, cigarette-smoking, and sun-cured tobacco groups with global cognitive function, memory, and verbal fluency. The results were evaluated in three models. Model 1 was a univariate regression model; Model 2 was adjusted for demographic characteristics (such as age, gender, education level, marital status, living arrangement, monthly household income, family history of dementia, and current drinking); and Model 3 was further adjusted for factors related to COPD [such as FEV1% pred, smoking quantity (cigarettes/day), and duration of smoking]. Before constructing the regression models, we checked for multicollinearity between covariates; a variance inflation factor (VIF) of less than 10 indicated no multicollinearity. P-values were two-sided, and values less than 0.05 were considered statistically significant.
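As a worked check of the sample-size calculation above, the following minimal sketch reproduces the reported figures of 327 and 360. Python is an assumption of this illustration; the study's own analyses were run in SPSS.

```python
# Sample size for estimating a prevalence: n = Z^2 * p * (1 - p) / d^2,
# with the tolerance error d taken as 10% of the expected prevalence p.
Z = 1.96        # two-sided 95% confidence level
p = 0.54        # expected prevalence of cognitive decline in COPD (reference 3)
d = 0.1 * p     # tolerance error, 0.054

n = Z**2 * p * (1 - p) / d**2
print(round(n))         # -> 327, the required sample size
print(round(n * 1.10))  # -> 360, after adding 10% for non-response and sampling error
```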
RESULTS The characteristics of the participants are presented in Table 1 . A total of 401 participants were included, and more than half were male (68.1%). One hundred ninety participants were non-smokers, and 103 and 108 participants were in the cigarette-smoking and sun-cured tobacco groups, respectively; 83.3% of the participants in the sun-cured tobacco group had less than 9 years of education, a proportion higher than in the cigarette-smoking group (27.2%) and the non-smoking group (41.1%). The overall prevalence of MCI was 58.9%, and the prevalence in the sun-cured tobacco group (89.8%) was significantly higher than in the other groups. Participants in the non-smoking group had the highest mean z scores of global cognitive function, verbal fluency, and memory of the three groups, and the scores of the cigarette-smoking group were higher than those of the sun-cured tobacco group (0.39 vs 0.10 vs -0.78; 0.34 vs 0.04 vs -0.63; 0.26 vs -0.17 vs -0.29; all p<0.001) ( Table 2 ). Multivariable logistic regression showed that, in all three models, participants in the cigarette-smoking and sun-cured tobacco groups were more likely to have MCI than those in the non-smoking group (OR=11.18; 95% CI: 1.28–97.5, p=0.029; and OR=10.5; 95% CI: 1.14–96.4, p=0.038 in Model 3) ( Table 3 ). Multivariable linear regression showed that the z scores of both global cognitive function (β= -0.48; 95% CI: -0.91 – -0.05, p=0.028 vs β= -0.61; 95% CI: -1.04 – -0.18, p=0.005) and verbal fluency (β= -0.69; 95% CI: -1.23 – -0.16, p=0.011 vs β= -0.79; 95% CI: -1.33 – -0.26, p=0.004) were lower in cigarette smokers and sun-cured tobacco smokers than in non-smokers in Model 3, with the z scores of the sun-cured tobacco group being the lowest. However, the association with memory was not significant in either Model 2 (p=0.055) or Model 3 (p=0.707) for the sun-cured tobacco group ( Table 4 ). Multivariable linear regression comparing the sun-cured tobacco and cigarette-smoking groups showed that participants in the cigarette-smoking group had better global cognitive function (β=0.97; 95% CI: 0.73–1.21) and verbal fluency (β=0.75; 95% CI: 0.50–1.01) than those in the sun-cured tobacco group in Model 1. However, in Models 2 and 3, there were no significant differences between the cigarette-smoking and sun-cured tobacco groups in global cognitive function, verbal fluency, or memory ( Table 5 ).
DISCUSSION To our knowledge, this was the first study in China to investigate the association of sun-cured tobacco and cigarette use with the decline of global and specific cognitive function in COPD patients. In the unadjusted model, the z scores of global cognitive function in the sun-cured tobacco and cigarette-smoking groups were significantly lower than those in the non-smoking group. After adjusting for demographic and disease-related confounders, the z scores of global cognitive function in the sun-cured tobacco and cigarette-smoking groups remained significantly lower than those of the non-smoking group, which is contrary to some previous research. For example, a study demonstrated that acute nicotine intake was associated with cognitive benefit 23 . In addition, a meta-analysis showed that smoking could improve cognitive function 24 . However, the negative impact of smoking on cognition has also been increasingly confirmed. Our findings are similar to those of a case-control study, in which the odds of MCI among elderly smokers were 3.04 times those of never smokers (OR=3.04; 95% CI: 1.45–6.35) 6 . Some longitudinal and observational studies also indicated that smoking might increase the risk of cognitive decline 25 , 26 . The differing effects of nicotine intake and smoking on cognitive function may be related to the dose and duration of nicotine intake. A meta-analysis showed that elderly smokers in the general population were at higher risk of cognitive decline than non-smokers 27 . However, the risk of cognitive decline is significantly higher for people with COPD than for the general population, and this difference is particularly pronounced among smokers 28 . This is similar to our findings. There are several possible mechanisms that could explain the link between smoking and cognitive decline. Tobacco contains toxicants and neurotoxic compounds that affect the nervous system, which could lead to cognitive decline in the elderly 29 . In addition, the cognitive decline caused by smoking is mainly related to lesions of the periventricular and subcortical white matter. Another mechanism that could explain the association between smoking and cognitive decline is lung function. As is well known, smoking is an important risk factor for lung injury and COPD. A previous study reported independent links between lung function and cognitive decline 30 , with poor lung function associated with poor cognitive function. Therefore, smokers among COPD patients may be more likely to experience cognitive decline. In this study, apart from global cognitive function, the z scores of verbal fluency and memory remained significantly lower in the sun-cured tobacco and cigarette-smoking groups than in the non-smoking group. After adjusting for demographic and disease-related factors, however, multivariable linear regression showed no significant differences in memory. In contrast, when adjusting only for demographic confounders, the memory of the cigarette-smoking group was significantly lower than that of the non-smoking group, but there was no significant difference between the sun-cured tobacco and non-smoking groups. This result is similar to a cohort study in which current smokers had a greater decline in global cognitive function and executive function over the past 10 years than those who never smoked.
The negative influence of smoking was greater on executive function than on memory 9 . However, one meta-analysis revealed that nicotine could enhance cognitive function in multiple domains, such as memory 24 . The quantity and duration of nicotine intake might explain this inconsistency. Compared to the non-smoking group, the global cognitive function and verbal fluency of the sun-cured tobacco and cigarette-smoking groups were significantly lower, and the decline in the sun-cured tobacco group was even greater (-0.61 vs -0.48), but there was no significant difference in memory. The differences in cognition between sun-cured tobacco and cigarette use in COPD patients have not been compared in previous studies. However, the relationship between the use of combustible cigarettes, e-cigarettes and e-liquids, and passive smoking and the incidence of subjective cognitive decline has been examined in Korean adults; that study showed that different types of smoking may lead to different incidences of subjective cognitive decline 31 . Sun-cured tobacco has a long history in Sichuan, China, where it is widely planted and especially popular among elderly men. According to a previous investigation, both sun-cured tobacco and cigarettes were popular among COPD patients in Sichuan Province. The contents and components of cigarettes and sun-cured tobacco differ due to differences in manufacturing and storage conditions 32 . An early prospective study showed that the use of sun-cured tobacco was related to mortality from cerebrovascular disease, tumors, cardiovascular disease, and respiratory disease. Therefore, the associations of sun-cured tobacco and cigarette use with cognition may differ. Limitations A few limitations of our study should be acknowledged. First, this was a cross-sectional study, from which it is impossible to determine a causal relationship between smoking and cognition. Second, despite all efforts to control for confounding factors, some potential confounders were not taken into account. Third, this study did not include smoking cessation as a covariate but used smoking quantity and duration of smoking instead. Fourth, the smoking variables were all self-reported by participants and convenience sampling was used, which may introduce bias. Finally, the generalizability of our findings to other countries may be limited.
CONCLUSIONS Compared with non-smoking, the use of cigarettes and sun-cured tobacco may damage the cognitive function of COPD patients, particularly global cognitive function and verbal fluency. When advising COPD patients to quit smoking and drawing attention to the harm of smoking to human health, we should pay attention not only to cigarette smokers but also to users of sun-cured tobacco.
INTRODUCTION Some elderly people in China prefer sun-cured tobacco to cigarettes, and the composition of sun-cured tobacco differs from that of cigarettes. The influence of cigarettes on the cognitive function of COPD patients has been widely reported, but research on sun-cured tobacco is relatively rare. Our study explored the association of sun-cured tobacco and cigarette use with cognitive decline in COPD patients. METHODS This was a cross-sectional study. A total of 401 COPD patients were included, with 190, 103, and 108 participants in the non-smoking, cigarette-smoking, and sun-cured tobacco groups, respectively. We evaluated global cognitive function using the Beijing version of the Montreal Cognitive Assessment, verbal fluency using an animal fluency test, and memory using a ten-unrelated-words test. RESULTS Participants in both the cigarette-smoking (AOR=11.18; 95% CI: 1.28–97.5) and sun-cured tobacco (AOR=10.46; 95% CI: 1.14–96.4) groups were more likely to develop mild cognitive impairment than those in the non-smoking group. The mean z scores of global cognitive function, verbal fluency, and memory were lower in the cigarette-smoking and sun-cured tobacco groups than in the non-smoking group. Multivariable linear regression showed that global cognitive function (β= -0.61; 95% CI: -1.04 – -0.18; and β= -0.48; 95% CI: -0.91 – -0.05) and verbal fluency (β= -0.79; 95% CI: -1.33 – -0.26; and β= -0.69; 95% CI: -1.23 – -0.16) of the sun-cured tobacco group and the cigarette-smoking group were significantly lower than those of the non-smoking group when adjusting for demographic and disease-related characteristics. However, there was no significant difference between the cigarette-smoking and sun-cured tobacco groups in global cognitive function, verbal fluency, or memory. CONCLUSIONS Compared with non-smoking, the use of cigarettes and sun-cured tobacco may damage the cognitive function of COPD patients, particularly global cognitive function and verbal fluency.
ACKNOWLEDGMENTS We would like to thank all COPD patients who completed the questionnaires for their participation. We gratefully acknowledge Qionglai Medical Center Hospital for providing a platform for this study. CONFLICTS OF INTEREST The authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest and none was reported. ETHICAL APPROVAL AND INFORMED CONSENT Ethical approval was obtained from the Bioethics Committee of Qionglai Medical Center Hospital, China (Approval number: 202203; Date: 15 February 2022). Participants provided informed consent. DATA AVAILABILITY The data supporting this research are available from the authors on reasonable request. AUTHORS’ CONTRIBUTIONS XC wrote the manuscript and conducted the statistical analysis and interpretation. JL, JLiu, XL, MD, YY and XD were in charge of the study concept and design. All authors revised and approved the final manuscript. PROVENANCE AND PEER REVIEW Not commissioned; externally peer-reviewed.
CC BY
no
2024-01-16 23:45:36
Tob Induc Dis. 2024 Jan 16; 22:10.18332/tid/175973
oa_package/80/c7/PMC10789182.tar.gz
PMC10789189
38225929
Introduction As members of social groups, humans inherently engage in social interactions. Before infants acquire language skills, they demonstrate the capacity to communicate and interact with others through nonverbal actions ( Oryadi-Zanjani, 2020 ). Nonverbal communication permeates human cultures worldwide, often complementing or replacing verbal communication in everyday social interactions ( Hall, Horgan & Murphy, 2019 ). Abundant evidence suggests that individuals frequently employ nonverbal sensorimotor communication to swiftly convey coordination signals in the context of real-time social interactions or joint actions ( Laroche et al., 2022 ; Miyata et al., 2021 ; Edey et al., 2020 ; Varni et al., 2019 ). In other words, individuals convey information to others by embedding communicative messages within instrumental actions ( Pezzulo et al., 2019 ) to facilitate the coordination of interindividual interactions, a phenomenon referred to as sensorimotor communication (SMC). For instance, in competitive sports, an athlete may intentionally modify his or her kicking action to convey the upcoming coordination direction to teammates. Here, the initial kicking action serves as an instrumental act, while the information regarding the coordination direction (manifested as an exaggerated deviation in the kicking trajectory) is communicative. Likewise, athletes can execute deceptive body movements that disrupt their opponents’ motor prediction processes. Sensorimotor communication relies on instrumental actions and enables the conveyance of communicative information during their execution. Information transfer in sensorimotor communication is highly flexible and rapid ( Laroche et al., 2022 ; Vesper, Schmitz & Knoblich, 2017 ). Swift information transfer between message senders and receivers through actions is achievable even without prior agreement among the interacting parties regarding the meaning of the action ( Pezzulo et al., 2019 ). Consequently, it is frequently observed in complex joint actions and social interactions. Asymmetric joint action is a relatively complex type of joint action because it necessitates spatial and temporal coordination among participants who receive incongruent information ( Zhang, 2019 ; Vesper, Schmitz & Knoblich, 2017 ). For instance, two individuals may be instructed to touch a designated target location sequentially, where one of them possesses knowledge of the target location while the other remains unaware. Sensorimotor communication plays an essential role in such joint action because effective motor coordination can only be achieved if the participant who possesses more information (the information sender) conveys the target information to the less informed participant (the information receiver). The bidirectional model of influence asserts that effective communication hinges on the sender’s precise articulation of the message to ensure comprehension by the receiver ( Beebe, Beebe & Ivy, 2015 ). Consequently, the precise calibration of the kinematic characteristics of action by message senders, such as motion height, motion time, and motion speed ( Trujillo, 2020 ), based on communicative information is a prerequisite for sensorimotor communication to enable asymmetric joint action. The process by which message senders establish the mapping between task target information and their action characteristics therefore assumes particular significance.
Previous research in the domain of asymmetric joint action has established that message senders can adjust the kinematic characteristics of their actions in correspondence with changes in the physical attributes of the task target. For instance, Schmitz et al. (2018) observed that message senders effectively conveyed three different weight categories—light, medium, and heavy—by grasping a cylinder at varying heights. Specifically, they grasped it at a higher position to indicate a lighter weight, a middle position for a medium weight, and a lower position for a heavy weight. Furthermore, Vesper, Schmitz & Knoblich (2017) noted that message senders adapted their motion time based on the distance to the task target, with longer motion times representing more distant targets. These observations align with the theory of embodied cognition, which underscores the profound influence of bodily actions and sensory experiences—that is, sensorimotor experiences ( Jin et al., 2019 ; Ye, 2010 )—on the formation of abstract concepts ( Ye, 2010 ; Li & Wang, 2015 ). According to this theory, when individuals engage with concepts, relevant embodied simulations and neural systems are activated even when there is no real-time, online interaction with these concepts ( Barsalou et al., 2008 ; Barsalou, 2009 ). In sensorimotor communication, processing the weight/motion distance information of a task target automatically activates the corresponding sensorimotor experiences, subsequently influencing the grasp height/motion time of message senders’ actions. However, the studies mentioned above leave certain critical questions unanswered. First, although these investigations confirm that message senders adapt the kinematic characteristics of their actions based on the target, none of them compare these actions with the kinematic characteristics of actions performed by individuals in tasks without cooperation. Consequently, it remains challenging to discern whether the disparities in message senders’ actions stem from variances in the instrumental movements associated with distinct task targets or from intentional sensorimotor communication by the individuals involved. For instance, in the study by Schmitz et al. (2018) , the act of grasping the cylinder served both an instrumental purpose and a communicative intention; whether the alteration in grasping height results from differences in the object’s weight or from intentional communicative messages conveyed by the message senders therefore remains unresolved. Second, the physical attributes of the targets in the aforementioned research tasks evoke substantially divergent individual sensorimotor experiences, such as incremental differences in weight (light, medium, and heavy) and incremental changes in distance (near, medium, and far). In such cases, message senders can readily determine the kinematic characteristics of the corresponding motion by observing variations in the target’s physical attributes. However, in intricate social interactions characterized by limited differentiation in target-induced sensorimotor experiences, how message senders engage in sensorimotor communication warrants exploration. To address Problem 1, the current study extends prior research by introducing a single-person baseline condition.
This addition aims to separate instrumental action distinctions stemming from task-related factors from the sensorimotor communication of message senders. Additionally, previous investigations have revealed that sensorimotor communication does not manifest uniformly across all phases of message senders’ actions ( Vesper, Schmitz & Knoblich, 2017 ). Building upon this insight, the present study deconstructs the action process of message senders. Research has demonstrated that message senders systematically adjust kinematic characteristics ( Trujillo, 2020 ; De Ruiter et al., 2010 ) and enhance the informativeness of their actions ( Winner et al., 2019 ) contingent on the communicative context to facilitate effective message delivery. This is exemplified by the elongation of motion time ( Vesper et al., 2016 ) or an increase in motion amplitude ( Wood et al., 2022 ; McEllin, Knoblich & Sebanz, 2018 ). Hypothesis 1 of the present study asserts that message senders tend to amplify specific motion characteristics during particular motion phases when demonstrating cooperative intention (Coop), as compared to a baseline condition with no cooperative intention (single-person baseline, No-coop). To address Problem 2, this study devises two distinct types of asymmetric joint action tasks: distance and orientation tasks. Both task types involve four target keys, requiring both participants to sequentially press a designated target key, although only one of the participants possesses knowledge of the target key’s location. In the distance task, the targets are placed along the same direction but at different distances; in the orientation task, the targets are placed in different directions at the same distance. In both task types, message senders must establish a mapping relationship between the spatial-physical characteristics of the target key (motion direction and motion distance) and the kinematic attributes of their actions ( e.g. , motion time). This mapping relationship, known as space–time mapping, conveys the target message and subsequently facilitates joint action. Specifically, the distance task primarily concerns the space–time mapping between motion distance (target) and motion time (action), whereas the orientation task places greater emphasis on the space–time mapping between motion direction (target) and motion time (action). In accordance with the theory of embodied simulation, it is well established that as one moves farther away, the accompanying motion time tends to increase ( Sevdalis & Keller, 2011 ). Consequently, in a distance task characterized by a more pronounced differentiation in target-induced sensorimotor experiences, message senders can establish space–time mapping relationships between motion distance and motion time through embodied simulations of the spatial distance characteristics of the task target. Hypothesis 2a therefore posits that in the distance task with cooperative intention, message senders will extend their motion time in direct proportion to the spatial distance of the target to effectively convey the target message to others. In the orientation task, the mapping relationships between spatial orientation and time are notably intricate. Forming space–time mappings in orientation tasks solely through target-induced sensorimotor experiences presents considerable challenges, rendering orientation tasks less differentiated.
Correlational studies examining the Space–Time Association of Response Codes (STARC) effect have identified three primary spatial orientations (left–right, front–back, and up–down) within the mental timeline ( He et al., 2020 ; Coull, Johnson & Droit-Volet, 2018 ; Teghil, Marc & Boccia, 2021 ; Von Sobbe et al., 2019 ; Starr & Srinivasan, 2021 ; Valenzuela et al., 2020 ). Due to the influence of reading and writing conventions, the left direction typically represents earlier times, while the right signifies later times ( Dalmaso, Schnapper & Vicovaro, 2023 ; Pitt & Casasanto, 2020 ). This low-level embodied simulation establishes a mental timeline oriented from left to right. In contrast, the mental timeline associated with the up–down orientation is primarily linked to high-level verbal metaphors ( He et al., 2021 ). For instance, the Chinese linguistic metaphors “morning (上午)” and “afternoon (下午)” equate to earlier and later times, respectively; here “上” denotes the up orientation and “下” denotes the down orientation. These linguistic metaphors activate spatial schemas that offer reference points for time processing ( Boroditsky, Fuhrman & McCormick, 2011 ). In the present study, the target keys within the orientation tasks encompass four distinct orientations: left-up, right-up, left-down, and right-down. This setup may engage embodied simulation for the left–right orientation and linguistic metaphors for the up–down orientation. Since embodied simulation rooted in reading and writing habits occurs more frequently than verbal metaphor use, producing mental timelines in the “left–right” direction is likely to be more effortless and rapid than in the “up–down” direction ( Chen, 2018 ). Additionally, prior research has indicated that Mandarin-speaking individuals tend to construct their timelines from left-up to right-down ( Hartmann et al., 2014 ; Sun et al., 2022 ). Consequently, Hypothesis 2b posits that in the orientation task with cooperative intention, message senders may extend the motion time in correspondence with the target’s left-up, right-up, left-down, and right-down orientation sequence to convey the target message to others effectively.
Materials & Methods Participants MorePower 6.0.4 was used to calculate the sample size. A sample of at least 60 is required for a 0.8 probability of correctly rejecting the null hypothesis (power = 0.8) given a medium effect size (two-tailed test, partial η 2 = 0.06) for the 2 × 2 × 4 within-subjects interaction. A total of 65 participants (36 males, M age = 20.06 years, SD age = 2.80 years) were recruited from Tianjin Normal University. To account for individual differences, such as arm length and height, which could potentially impact the kinematic indices of participants’ arm motion time and height, these attributes were measured before the experiment ( M arm = 68.22 cm, SD arm = 4.87 cm; M height = 169.66 cm, SD height = 9.42 cm). All participants were right-handed as determined by the Edinburgh Handedness Inventory ( Oldfield, 1971 ) and reported normal or corrected-to-normal vision and normal hearing. All participants spoke Mandarin. The participants signed prior informed consent before the experiment and received monetary compensation. The experimental protocol was approved by the ethics committee of Tianjin Normal University (No. 2021030809). Experimental design This study employed a 2 × 2 × 4 within-subjects experimental design, incorporating the factors of cooperative intention (Coop vs No-coop), task characteristic (distance vs orientation), and target (T1, T2, T3, vs T4). The dependent variables encompassed participants’ keystroke responses and motion trajectory characteristics in each experimental condition, as elaborated upon in the Data analysis section. Apparatus The experimental program was developed, and the stimulus presentation executed, using Psychtoolbox 3.0 for MATLAB 2019a ( The MathWorks, Inc, 2019 ). The experimental stimuli were displayed on a Dell screen (Model U2417H, 24 inches, with a resolution of 1,920 × 1,080 pixels). Two sets of customized keyboards were employed as response devices, each consisting of five keys with a base size of 3 cm × 3 cm. These keys were connected to transmission lines (each 1 m in length) and assembled on a motherboard to create a set of keyboards. Notably, each key on this keyboard could move freely. For motion tracking, a Nokov optical 3D motion capture system (Mars 4H, Beijing Nokov Science & Technology, Beijing, China) was employed. A motion capture marker was affixed to the tip of the participant’s right index finger, and seven high-power HLED cameras (sampling rate = 100 Hz) were used to capture the motion trajectory of the fingertip (marker). Experimental setup The participant was seated at the middle of the table (60 cm in length, 80 cm in width, and 78 cm in height), and the screen was positioned 65 cm away from the participant. A customized keyboard was placed on the table, with two types available: the distance keyboard and the orientation keyboard. On the distance keyboard, the starting key was situated 5 cm from the table’s edge, and the intervals between T1, T2, T3, T4 and the starting key were 10 cm, 20 cm, 30 cm, and 40 cm, respectively. The diameters of the keycaps for the starting key, T1, T2, T3, and T4 were 2 cm, 1 cm, 2 cm, 3 cm, and 4 cm, respectively. On the orientation keyboard, the starting key was positioned 25 cm away from the table’s edge, with consistent 20 cm intervals between T1, T2, T3, T4 and the starting key; all keycaps had a diameter of 2 cm. Please refer to Fig. 1 for a visual representation of the setup.
This configuration was designed to ensure that, regardless of the keyboard type, the index of difficulty calculated by Fitts’ law ( Eq. (1) ; Fitts, 1954 ; Vesper, Schmitz & Knoblich, 2017 ) for a participant moving from the starting key to each target key remained constant at 4.32. Fitts’ law relates the index of difficulty of a motion (ID) to the amplitude of the motion (A) and the target width (W): ID = log 2 (2A/W) (Eq. (1)); a short verification sketch is provided at the end of this Materials & Methods section. Tasks Distance task The distance task consisted of two variations, with and without cooperative intention, both employing the distance keyboard. In the distance task without cooperative intention, participants were tasked with completing a keystroke assignment based on the target cue presented on the screen, responding at a natural pace. This condition served as a baseline for participants’ actions under the various task targets. Participants were instructed to position the tip of their right index finger at the center of the starting key (starting posture) before each trial. At the beginning of each trial, a target key cue was presented in the center of the screen (2 s), with one of the four target keys highlighted in red; the red dot indicated the target. Following the target key cue presentation, a yellow “+” appeared on the screen, accompanied by a brief “bee” tone (200 ms) to signal the impending task initiation. When the yellow “+” and the “bee” sound vanished, the participants commenced the keystroke task. During the task, a white “+” was displayed on the screen, and participants were required to press the starting key followed by the designated target key (T1/T2/T3/T4). Pressing the starting key triggered a “da” sound, while pressing the target key (T1/T2/T3/T4) produced a “di” sound. The duration of the “da” and “di” sounds was determined by the dwell time of the participant’s key press, as depicted in Fig. 2A . Subsequently, participants were instructed to return their fingers to the starting position. A total of 60 trials were conducted for this task, with 15 trials for each target (T1/T2/T3/T4). These 60 trials were randomly divided into four blocks, with randomized orders and intervals ranging from 16 to 24 s between blocks. In the distance task with cooperative intention, each pair of participants collaborated to complete the task, with Participant A and Participant B working together to press the same target key. During each trial, only Participant A possessed knowledge of the target key’s location; Participant B did not. Participant A was required to nonverbally convey the target key’s location to Participant B during the key press. Participant B could hear the sound produced by the keys but could not observe the action. In this scenario, Participant A was real and Participant B was virtual. Participant A was told that Participant B was a stranger and a same-sex peer. To enhance the realism of the virtual participant, Participant A was informed before the experiment that Participant B was located in an adjacent lab. Additionally, the experimenter temporarily left the lab for 1–3 min before the experiment began and informed Participant A that she was checking on the readiness of the other lab. The primary distinction of the distance task with cooperative intention, compared to that without cooperative intention, was the appearance of a prompt on the screen that read “Please wait for your partner to press the key” (1–3 s). This prompt was displayed after Participant A completed the motion and returned his or her finger to the starting position.
Participant A was informed that his or her partner would press the key during this time. This prompt was introduced to enhance the realism of the virtual participant. Orientation task The orientation tasks included two types, with and without cooperative intention. The tasks were the same as the distance tasks with and without cooperative intention, except for the use of the orientation keyboard. The flow of the orientation task with cooperative intention is shown in Fig. 2B . Procedure In the preparation phase, participants filled out the informed consent form, personal information form, and Edinburgh Handedness Inventory. The experimenter measured and recorded the participants’ height and arm length and then affixed a motion capture marker to the fingertip of their right index finger. Before the main experiment started, the participants completed four practice trials, one for each target key, to familiarize themselves with the procedure in the No-coop condition. To ensure that the participants fully understood the requirements, the formal experiment proceeded only after the participants successfully executed all four practice trials. During the formal experiment, the participants performed the task first in the No-coop condition and then in the Coop condition. This sequence was designed to prevent the Coop condition from influencing motion performance in the No-coop condition. The sequences of the distance task and orientation task were counterbalanced between participants. After the end of the experiment, the participants filled out a questionnaire in which they were asked to explain how they solved the task. The questionnaire consisted of four questions: (1) What kind of person do you think your partner is? (2) In the distance task with cooperative intention, how did you convey information to your partner, and what strategy did you use? (3) In the orientation task with cooperative intention, how did you convey information, and what strategies did you use? (4) Did you experience any discomfort or confusion during the experiment? Data analysis Keystroke response The key press responses of the participants were measured to assess action characteristics at different stages and to evaluate overall action performance. The participants’ dwell time (DT) on the target key, representing the time from pressing to lifting the target key, and their motion time (MT), representing the time from lifting the starting key to pressing the target key, served as metrics for assessing localized action characteristics. The participants’ total motion time (TMT), representing the duration from pressing the starting key to lifting the target key, served as an index of holistic action characteristics. After excluding invalid trials, each of the three indicators underwent a 2 (task characteristic: distance, orientation) × 2 (cooperative intention: Coop, No-coop) × 4 (target: T1, T2, T3, T4) repeated-measures ANOVA with Bonferroni correction and paired t -tests using SPSS (v.23.0; SPSS Inc., Chicago, IL, USA). A statistical threshold of p < 0.05 was considered significant. To assess the quality of sensorimotor communication by message senders, this study also calculated signal-to-noise ratios (SNR MT , SNR DT , SNR TMT ) for the quality of message communication based on the participants’ keystroke responses, as outlined in Eq. (2) .
In Eq. (2) , MT1, MT2, MT3, and MT4 represent the average MT/DT/TMT for target keys T1, T2, T3, and T4, and SDT1, SDT2, SDT3, and SDT4 denote the corresponding MT/DT/TMT variability. The signal-to-noise ratios (SNR MT , SNR DT , SNR TMT ) for the quality of message communication in MT, DT, and TMT under the different experimental conditions were subjected to 2 (cooperative intention: Coop, No-coop) × 2 (task characteristic: distance, orientation) repeated-measures ANOVAs with Bonferroni correction and paired t -tests using SPSS (v.23.0). A statistical threshold of p < 0.05 was considered significant. According to the hypotheses, under both the distance and orientation tasks with cooperative intention, motion times were expected to lengthen in equal proportion across T1, T2, T3, and T4. Therefore, a larger SNR indicated that the way of communicating information was more aligned with the research hypotheses, reflecting better quality of message communication ( Vesper, Schmitz & Knoblich, 2017 ). Motion trajectory Prior studies have established that sensorimotor communication by message senders not only alters motion time but may also adjust the maximum motion height ( Candidi et al., 2015 ). To comprehensively examine the sensorimotor communication of message senders, this study processed and analyzed the motion capture data. Initially, trials featuring incorrect key presses and those lacking recorded motion capture markers were excluded. Subsequently, the motion capture data were preprocessed using Cortex 7.0 software to obtain the motion trajectory of the motion capture marker under each experimental condition, represented as 3D coordinates. Next, a self-programmed script in MATLAB (2019a) was employed to calculate the maximum motion height (MAX MH ) between the participants’ starting key press and target key lift for each experimental condition. Finally, a 2 (task characteristic: distance, orientation) × 2 (cooperative intention: Coop, No-coop) × 4 (target: T1, T2, T3, T4) repeated-measures ANOVA with Bonferroni correction and paired t -tests was conducted using SPSS (v.23.0). A statistical threshold of p < 0.05 was considered significant. Questionnaire The strategies reported in the questionnaire for the distance task with cooperative intention and the orientation task with cooperative intention were categorized. Furthermore, a data-driven approach was used to cluster-analyze the SNR of the most effective indicator in the orientation task with cooperative intention. This was done to investigate whether participants established a space–time mapping relationship between task targets and their actions at the subjective level of consciousness. In addition, to ensure the reliability of the statistical results, this study conducted Bayesian repeated-measures ANOVAs ( Wang et al., 2023 ) on the aforementioned indicators using JASP (0.17), as outlined in the Supplemental Information . The results of the two statistical analyses were found to be largely consistent.
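As referenced in the description of the keyboards above, the following minimal sketch verifies the Fitts' law difficulty matching. Python is an assumption of this illustration (the study's own scripts were written in MATLAB); the amplitude and width values are taken directly from the Experimental setup.

```python
import math

# Fitts' law (Eq. (1)): ID = log2(2A / W), where A is the movement amplitude
# (starting key to target key) and W is the target width (keycap diameter).
def fitts_id(amplitude_cm, width_cm):
    return math.log2(2 * amplitude_cm / width_cm)

# Distance keyboard: amplitudes of 10-40 cm with keycap diameters of 1-4 cm.
distance_keys = [(10, 1), (20, 2), (30, 3), (40, 4)]
# Orientation keyboard: all four targets at 20 cm with 2 cm keycaps.
orientation_keys = [(20, 2)] * 4

for a, w in distance_keys + orientation_keys:
    print(f"A={a} cm, W={w} cm -> ID={fitts_id(a, w):.2f}")  # 4.32 in every case
```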
Results Data preparation Trials that did not align with the experimental requirements were excluded according to two criteria: (1) trials in which the key was not pressed in accordance with the target information presented on the screen, and (2) trials in which the target key was pressed before the starting tone (“bee”) appeared. Invalid data, amounting to 0.25% of the total, were discarded. Furthermore, data that adhered to the experimental requirements but fell beyond ±3 standard deviations from the condition mean were categorized as extreme data. These extreme data points, which ranged from 0% to 2.88% per indicator, were replaced with the mean value. Keystroke responses Whole indicator analysis A 2 (task characteristic: distance, orientation) × 2 (cooperative intention: Coop, No-coop) × 4 (target: T1, T2, T3, T4) repeated-measures ANOVA was conducted on both holistic and localized indicators. Notably, the three-way interaction of task characteristic, cooperative intention, and target was observed only for total motion time and target key dwell time, as illustrated in Fig. 3 . This implies that both total motion time and target key dwell time served as indicators of the message sender’s sensorimotor communication performance. As the questionnaire strategies indicated that participants conveyed messages through target key dwell time, the subsequent analysis primarily focuses on the results for target key dwell time; detailed results for motion time and total motion time are provided in the Supplemental Information . Dwell time of target keys A 2 (task characteristic: distance, orientation) × 2 (cooperative intention: Coop, No-coop) × 4 (target: T1, T2, T3, T4) repeated-measures ANOVA was conducted on the dwell time of the target key, and the results are displayed in Fig. 4 . The analysis revealed the following significant effects and interactions: The main effect of cooperative intention was significant, F (1, 64) = 164.77, p < 0.001, partial η 2 = 0.72. The main effect of target was significant, F (1.56, 100.05) = 59.19, p < 0.001, partial η 2 = 0.48. The interaction of cooperative intention and target was significant, F (1.56, 100.06) = 59.58, p < 0.001, partial η 2 = 0.48. The interaction of task characteristic and target was significant, F (2.09, 133.42) = 10.10, p < 0.001, partial η 2 = 0.14. The three-way interaction of task characteristic, cooperative intention, and target was significant, F (2.07, 132.61) = 11.08, p < 0.001, partial η 2 = 0.15. Further analysis revealed specific patterns: Target key dwell times for T1, T2, and T3 were greater than for T4 ( ps < 0.001) under the distance task without cooperative intention. T1 target key dwell time was smaller than T2, T3, and T4 ( ps < 0.001) under the orientation task without cooperative intention. Target key dwell times for T1, T2, T3, and T4 increased sequentially ( ps < 0.001) under the distance task with cooperative intention. Under the orientation task with cooperative intention, target key dwell times for T1, T2, T3, and T4 showed a sequentially increasing trend; the difference between T3 and T4 was not significant ( p > 0.05), while all other pairwise differences were significant ( ps < 0.05).
Comparisons between conditions also yielded significant findings: The target key dwell time of T1 under the distance task without cooperative intention was greater than that of T1 under the orientation task without cooperative intention ( p < 0.001). The target key dwell time of T4 under the distance task without cooperative intention was smaller than that of T4 under the orientation task without cooperative intention ( p = 0.004). The target key dwell time of T1 under the distance task with cooperative intention was smaller than that of T1 under the orientation task with cooperative intention ( p = 0.006). The target key dwell time of T4 under the distance task with cooperative intention was greater than that of T4 under the orientation task with cooperative intention ( p < 0.001), while the remaining differences between experimental conditions were not significant ( ps > 0.05). The results of the Bayesian repeated-measures ANOVA were generally consistent with these findings. Variability of target key dwell time To thoroughly investigate the sensorimotor communication performance of message senders, this study further calculated the variability of target key DT (SD DT ) under the different conditions and analyzed it using a repeated-measures ANOVA with a 2 (task characteristic: distance, orientation) × 2 (cooperative intention: Coop, No-coop) × 4 (target: T1, T2, T3, T4) design. The results revealed: A significant main effect of cooperative intention, F (1, 64) = 195.92, p < 0.001, partial η 2 = 0.75. A significant main effect of target, F (2.24, 143.18) = 40.00, p < 0.001, partial η 2 = 0.39. A significant interaction between cooperative intention and target, F (2.25, 144.18) = 40.16, p < 0.001, partial η 2 = 0.39. A significant interaction of task characteristic and target, F (2.05, 130.92) = 3.11, p = 0.047, partial η 2 = 0.05. A significant three-way interaction of task characteristic, cooperative intention, and target, F (2.01, 128.53) = 3.43, p = 0.04, partial η 2 = 0.05. Subsequent simple effect analyses indicated that SD DT did not differ significantly among T1, T2, T3, and T4 ( ps > 0.05) in either the distance task or the orientation task without cooperative intention. SD DT for T1, T2, T3, and T4 increased sequentially ( ps < 0.05) in the distance task with cooperative intention. In the orientation task with cooperative intention, SD DT for T1 was smaller than for T2, T3, and T4, and SD DT for T2 was smaller than for T4 ( ps < 0.05), as shown in Fig. 5 . However, the Bayesian repeated-measures ANOVA did not find an interaction between task characteristic and target or a three-way interaction among task characteristic, cooperative intention, and target; the remaining findings were consistent with the results above. Combining Figs. 4 and 5 , the longer the dwell time of the target key, the greater the variability observed in both the distance and orientation tasks with cooperative intention. Quality of the message communication for target key dwell time The SNR DT of target key dwell time was analyzed with a 2 (task characteristic: distance, orientation) × 2 (cooperative intention: Coop, No-coop) repeated-measures ANOVA. The results indicated: A significant main effect of cooperative intention, F (1, 64) = 90.41, p < 0.001, partial η 2 = 0.59. A significant main effect of task characteristic, F (1, 64) = 11.89, p = 0.001, partial η 2 = 0.16.
A significant interaction between cooperative intention and task characteristic, F (1, 64) = 23.39, p < 0.001, partial η 2 = 0.27. Subsequent simple effects analyses revealed that: SNR DT was greater under the distance task with cooperative intention than without cooperative intention ( p < 0.001). SNR DT was greater under the orientation task with cooperative intention than without cooperative intention ( p < 0.001). SNR DT under the distance task without cooperative intention was smaller than under the orientation task without cooperative intention ( p < 0.001). SNR DT under the distance task with cooperative intention was greater than under the orientation task with cooperative intention ( p < 0.001), as depicted in Fig. 6 . The results of the Bayesian repeated-measures ANOVA were in full agreement with these findings. Movement trajectory A repeated-measures ANOVA with a 2 (task characteristic: distance, orientation) × 2 (cooperative intention: Coop, No-coop) × 4 (target: T1, T2, T3, T4) design was conducted on the maximum motion height (MAX MH ) from starting key press to target key lift. The results, presented in Fig. 7 , revealed the following: A significant main effect of cooperative intention, F (1, 63) = 13.50, p < 0.001, partial η 2 = 0.18. A significant main effect of target, F (2.69, 169.17) = 163.07, p < 0.001, partial η 2 = 0.72. A significant interaction between cooperative intention and target, F (2.52, 158.54) = 5.72, p = 0.002, partial η 2 = 0.08. A significant interaction between task characteristic and target, F (2.52, 158.54) = 5.72, p = 0.002, partial η 2 = 0.71. A significant three-way interaction among task characteristic, cooperative intention, and target, F (2.93, 184.26) = 3.57, p = 0.016, partial η 2 = 0.05. Subsequent simple effects analyses revealed that MAX MH for T1, T2, T3, and T4 increased sequentially ( ps < 0.05) under the distance task both with and without cooperative intention. MAX MH for T1, T2, and T3 was greater with cooperative intention than without ( ps < 0.05). Under the orientation task, both with and without cooperative intention, the MAX MH of T3 was smaller than that of T1, T2, and T4 ( ps < 0.05). The MAX MH of T1 was smaller than that of T4 with cooperative intention ( p < 0.001), and the MAX MH of T2, T3, and T4 was larger with cooperative intention than without ( ps < 0.05). The Bayesian-based analysis largely corroborated these findings. Questionnaire From the questionnaire responses, 76.92% of the participants (50 individuals) in the distance task with cooperative intention extended their dwell time in proportion to the spatial distance of the target key, drawing upon previous sensorimotor experiences to establish a space–time mapping relationship between the task’s spatial distance and their motion characteristic (target key dwell time). This resulted in a sequential increase in target key dwell time for T1, T2, T3, and T4. In the orientation task with cooperative intention, however, 47.69% of the participants (31 individuals) connected the four target locations in the order of left-up, right-up, left-down, and right-down according to embodied simulation and verbal metaphors, increasing the target key dwell time of T1, T2, T3, and T4 sequentially to establish the space–time mapping relationship. This strategy was defined as Strategy 1 (as shown in Fig. 8A ).
Additionally, 15.38% of the participants (10 individuals) established the space–time mapping in clockwise order, connecting the four target positions clockwise and increasing the target key dwell time in turn; this was labeled Strategy 2 (Fig. 8B). Meanwhile, 9.23% of the participants (6 individuals) used a counterclockwise order, connecting the four target positions counterclockwise and sequentially increasing the target key dwell time; this was defined as Strategy 3 (Fig. 8C). The remaining participants (23.08%, 15 individuals) used other strategies. A K-center clustering analysis of the target key dwell time SNR DT showed that the index could be divided into four categories of 12, 26, 16, and 11 cases, respectively, with corresponding cluster centers of 5.48, 2.98, 0.06, and −3.40; the differences among the four categories were statistically significant (F(3, 61) = 222.41, p < 0.001). Combining the questionnaire results with the cluster analysis, 87.10% of the participants who chose Strategy 1 in the questionnaire were clustered into categories 1 and 2.
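To make the clustering step concrete, a minimal sketch using scikit-learn's ordinary k-means on a one-dimensional index is given below. This is an illustration only: the variable names and input file are hypothetical, and if "K-center" in the original analysis denotes k-medoids rather than k-means, the estimator would differ while the overall workflow would stay the same.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical input: one SNR_DT value per message sender (n = 65).
snr_dt = np.loadtxt("snr_dt.txt")

km = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = km.fit_predict(snr_dt.reshape(-1, 1))  # cluster the 1-D index

# Compare with the reported cluster centers (5.48, 2.98, 0.06, -3.40)
# and category sizes (12, 26, 16, 11).
print(np.sort(km.cluster_centers_.ravel())[::-1])
print(np.bincount(labels, minlength=4))
```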
Discussion Building upon prior research, this study devised two asymmetric joint action tasks characterized by distinct spatial characteristics. It aimed to investigate the factors that drive sensorimotor communication in message senders by comparing different conditions. The findings revealed the following insights. (1) Compared to conditions without cooperative intention, participants with cooperative intention exhibited significant increases in target key dwell time, motion time, total motion time, and maximum motion height. However, sensorimotor communication was primarily demonstrated through enhancements in target key dwell time. (2) In the distance task without cooperative intention, the dwell time of T4 was smaller than that of T1, T2, and T3, and in the orientation task without cooperative intention, the dwell time of T1 was smaller than that of T2, T3, and T4. Whether participants completed the distance task or the orientation task, there were no differences in the variability of the dwell times of the four target keys without cooperative intention. Under cooperative intention, however, in both the distance and orientation tasks, the dwell times of the target keys and their variability for T1, T2, T3, and T4 displayed a sequentially increasing trend. In essence, a longer dwell time for the target key was associated with greater variability. (3) The quality of message communication related to target key dwell time and total motion time was superior with cooperative intention compared to conditions without cooperative intention in both distance and orientation tasks. Notably, the effect was significantly more pronounced in the distance task with cooperative intention than in the orientation task with cooperative intention. (4) In the distance task with cooperative intention, nearly 80% of message senders established a space–time mapping based on sensorimotor experiences, characterized by "near-small, far-large". Conversely, in the orientation task with cooperative intention, nearly 50% of the message senders extended the dwell time of the target key in the order "left-up, right-up, left-down, right-down". Sensorimotor communication for message senders under cooperative intention conditions Prior research has shown that sensorimotor communication is widely present in cooperation (Vesper & Sevdalis, 2020). This aligns with the current study's discovery of significant disparities in the temporal characteristics (target key dwell time, motion time, and total motion time) and trajectory characteristics (maximum motion height) of message senders' actions when cooperative intention was present compared to when it was absent. However, it is essential to note that the dissimilarity in motion induced by cooperative intention does not necessarily equate to sensorimotor communication. For instance, the current study did not identify a three-way interaction among cooperative intention, task characteristic, and target in terms of motion time, whereas this interaction was observed for the target key dwell time, total motion time, and maximum motion height. This suggests that sensorimotor communication by message senders may be reflected in these three motion characteristics. In this study, the condition of cooperative intention was designed as a pseudocooperative task. It was explicitly conveyed to the message senders that the message receivers could hear their key presses but could not observe their motions.
Notably, the message senders were unable to convey messages to their partners by altering the maximum motion height. The results indicated that message senders with cooperative intention exhibited higher maximum motion height compared to those without cooperative intention in both the distance and orientation tasks. However, the patterns of change in the four target locations with and without cooperative intentions were very similar. These differences may therefore stem from variations in the instrumental actions associated with the target key as well as generalized effects arising from cooperative intention. Consequently, sensorimotor communication by message senders is primarily expressed through the dwell time and total motion time of the target key. However, it is worth noting that total motion time might not be the most accurate indicator of sensorimotor communication. Total motion time is a holistic metric that encompasses multiple phases of motion and is influenced by various factors. An examination revealed that when the proportion of target key dwell time in total motion time was removed, the results closely resembled the patterns observed in maximum motion height. This implies that sensorimotor communication within total motion time is mainly reflected in the target key dwell time. Additionally, the findings from the strategy questionnaire further corroborated the finding that message senders primarily rely on target key dwell times for communication. In summary, it is evident that sensorimotor communication is indeed contingent on cooperative intention, but it is not evident across all phases of motion. Previous studies, such as those conducted by Vesper & Richardson (2014) and Laroche et al. (2022) , have primarily explored the dissociation of specific motion phases induced by sensorimotor communication. In contrast, the present study offers a comprehensive evaluation of sensorimotor communication performance by message senders, encompassing local and holistic as well as temporal and trajectory perspectives. Consequently, the relationship between sensorimotor communication and cooperative intention is more robust and dependable. Sensorimotor communication performance of message senders in different task characteristics Sensorimotor communication performance of message senders in a distance task with cooperative intention The current study revealed that in a distance task with cooperative intention, message senders extended their target key dwell time proportionally to the spatial distance of the task target, in alignment with Hypothesis 2a. These findings were in line with prior research ( Vesper, Schmitz & Knoblich, 2017 ; Castellotti et al., 2022 ; Chen et al., 2021 ). The theoretical framework of embodied cognition suggests that an individual’s understanding of the world commences with bodily perception. The construction and comprehension of abstract concepts rely on sensorimotor experiences and involve an automated perceptual simulation process ( Wang et al., 2020 ; Ye, Zeng & Yang, 2019 ; Di Paolo, Cuffari & De Jaegher, 2018 ; Li, 2008 ). When individuals process abstract concepts, their prior sensorimotor experiences are automatically activated, potentially influencing their current action performance. Therefore, in a distance task with cooperative intention, when message senders engaged with the task target, the distance information associated with the target triggered previous sensorimotor experiences. 
This, in turn, prompted individuals to simulate their action performance, resulting in prolonged dwell time for the target key as the distance increased. Consequently, they effectively conveyed task target information to others. Furthermore, the present study demonstrated that the variability in message senders' target key dwell time progressively increased from T1 to T4 in both distance and orientation tasks. This finding was consistent with prior research (Castellotti et al., 2022). Notably, individual differences in estimating shorter durations are significantly smaller than those in estimating longer durations (Huang, 2022). Performance of sensorimotor communication by message senders in an orientation task with cooperative intention In the orientation task with cooperative intention, message senders extended the target key dwell time following the orientation sequence of target positions (left-up, right-up, left-down, and right-down) to effectively convey their message. This observation aligned with Hypothesis 2b. The questionnaire responses further indicated that 47.69% of the participants consciously established this space–time mapping relationship, providing support for the hypothesis. Furthermore, the variability in target key dwell time also increased as the dwell time was extended, consistent with previous research findings (Castellotti et al., 2022; Huang, 2022). Previous studies in the field of the Space-Time Association of Response Codes (STARC) have identified mental timelines associated with the "left-right" and "up-down" orientations (Casasanto & Bottini, 2014; He et al., 2021). However, these investigations primarily explored the space–time mapping relationship from a one-dimensional spatial perspective. The current study extended this understanding by providing empirical evidence for a two-dimensional STARC effect. Specifically, individuals mapped the shortest durations to the left-up position, followed by the right-up and the left-down positions, with the longest durations mapped to the right-down position. Prior research has also noted that individuals exhibit a more pronounced STARC effect in the horizontal direction than in the vertical direction; in this context, the mental timeline associated with the horizontal direction tended to dominate between the two mental timelines (Yang & Sun, 2016). Researchers have observed that individuals typically associate the left-up position with shorter durations and the right-down position with longer durations (Sun et al., 2022). Comparison of sensorimotor communication for message senders across different task characteristics Previous research has indicated that various factors, such as gender and emotional state (Zhao et al., 2020) as well as role (Candidi et al., 2015), influence the dynamics of sensorimotor communication among interacting parties. The current study extended this body of research by revealing that task characteristics also exert an impact on individuals' sensorimotor communication. Specifically, the study showed that the target key dwell time exhibited by message senders during both distance and orientation tasks with cooperative intention progressively increased from T1 to T4. However, a subtle distinction emerged between these two task types: for T1, the target key dwell time was significantly shorter during the distance task than during the orientation task, whereas for T4 the opposite trend was observed.
These differences underscore the influence of task characteristics on sensorimotor communication. Furthermore, the quality of message communication for target key dwell time was higher in the distance task with cooperative intention than in the orientation task with cooperative intention. Specifically, the distance–time mapping relationship established by individuals based on their sensorimotor experiences appeared to be relatively clear during the distance task with cooperative intention and was characterized by consistent, proportionally varying temporal responses across different target distances. In contrast, during the orientation task with cooperative intention, although an orientation–time mapping relationship was evident and exhibited a gradual increase from left-up, right-up, left-down, to right-down, it lacked a specific representation of the different orientations, resulting in a less clear and less proportionally varying temporal response. Strategies for sensorimotor communication by message senders in different tasks The strategies employed by message senders with cooperative intention differed depending on the task at hand. In the distance task with cooperative intention, 76.92% of message senders prioritized conveying target information through sensorimotor experience, which manifested as a sequential increase in the target key dwell time for T1, T2, T3, and T4. Conversely, in the orientation task with cooperative intention, message senders utilized a more varied set of strategies to convey information. Three strategies emerged in this task: associating the orientation of the four target keys with dwell time in the sequence left-up, right-up, left-down, right-down; following a clockwise order; or following a counterclockwise order, in each case increasing the target key dwell time accordingly. Among these, the most frequently used was the first, which combined sensorimotor experience and verbal metaphors and accounted for approximately 50% of participants. This indicated that, at the group level, when the task allowed for it, message senders typically established space–time mappings rooted in their sensorimotor experiences. Importantly, the space–time mapping relationships formed by message senders were not agreed upon in advance with message receivers but emerged spontaneously within group dynamics (Grasso et al., 2022). This finding underscored the substantial influence of previous sensorimotor experiences on group behavior (Zhang et al., 2022).
Conclusions (1) Compared to situations without cooperative intention, when cooperative intention is present, message senders tend to exaggerate certain kinematic characteristics during various motion phases as a means to facilitate sensorimotor communication. Notably, the primary channel through which sensorimotor communication is expressed is the dwell time of the target key. (2) Sensorimotor communication primarily relies on the mapping relationship between the task target and the message sender's motion characteristics. In the distance task with cooperative intention, message senders predominantly utilize the sensorimotor experience of "near-small, far-large" to convey task information. Conversely, in the orientation task with cooperative intention, message senders primarily utilize a combination of "left-up, right-up, left-down, right-down" sensorimotor experiences along with verbal metaphors to convey task information.
Background Sensorimotor communication is frequently observed in complex joint actions and social interactions. However, exploring the cognitive foundations behind sensorimotor communication remains challenging. Methods The present study extends previous research by introducing a single-person baseline condition and formulates two distinct categories of asymmetric joint action tasks: distance tasks and orientation tasks. The action performance of 65 participants was examined under various experimental conditions using a 2 (cooperative intention: Coop, No-coop) × 2 (task characteristic: distance, orientation) × 4 (target: T1, T2, T3, T4) repeated-measures design to investigate the cognitive mechanisms underlying sensorimotor communication between individuals. Results The results showed that (1) target key dwell time, motion time, total motion time, and maximum motion height were greater in the Coop condition than in the No-coop condition. (2) In the distance task without cooperative intention, the dwell time of T4 was smaller than that of T1, T2, and T3, and the variability for T1, T2, T3, and T4 did not differ. In the distance task with cooperative intention, the dwell time and its variability for T1, T2, T3, and T4 displayed an increasing trend. (3) In the orientation task without cooperative intention, the dwell time of T1 was smaller than that of T2, T3, and T4, and the variability of the target keys T1, T2, T3, and T4 did not differ. In the orientation task with cooperative intention, the dwell time and variability of the target keys T1, T2, T3, and T4 showed increasing trends. Conclusions These findings underscore the importance of cooperative intention for sensorimotor communication. In the distance task with cooperative intention, message senders establish a mapping relationship characterized by "near-small, far-large" between the task distance and the individual's action characteristics through sensorimotor experience. In the orientation task with cooperative intention, message senders combine sensorimotor experience and verbal metaphors to establish a mapping relationship between task orientation and action characteristics, following the sequence "left-up, right-up, left-down, right-down", to transmit the message to others.
Limitations and Outlook This study controlled for the objective difficulty of the different target keys according to Fitts' law (1954); a standard formulation of this difficulty index is noted after this section. However, it is worth noting that specific performance variations emerged in motion time between the four target positions in both the distance and orientation tasks without cooperative intention. These differences might be attributed to variations in the ease of pressing the actual target keys. Consequently, future research should consider not only the objective difficulty of key presses but also the influence of individual physical limitations. In addition, the present study only examined the space–time mapping relationship of sensorimotor communication in Mandarin-speaking participants. Culture may have an impact on the space–time mapping relationship, so future studies could examine the space–time mapping of sensorimotor communication across cultures. Furthermore, the neural underpinnings of sensorimotor communication remain largely unexplored. Future investigations could utilize techniques such as functional magnetic resonance imaging (fMRI) to pinpoint the specific brain regions or networks involved in sensorimotor communication. Additionally, employing methods such as event-related potentials (ERP) and functional near-infrared spectroscopy (fNIRS) could shed light on the interbrain mechanisms underlying sensorimotor communication within real communication contexts. These advancements will contribute to a more comprehensive understanding of the phenomenon.
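For reference, the objective key-press difficulty that Fitts' law quantifies is conventionally expressed as an index of difficulty, ID = log2(2D/W) bits in the classic formulation, where D is the movement distance to the target and W is the target width; the Shannon variant, ID = log2(D/W + 1), is also common, and the exact variant used in this study is not stated here. Equating ID across the four target keys is what holds their objective difficulty constant.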
We would like to thank all the participants who took part in the experiment. Additional Information and Declarations
CC BY
no
2024-01-16 23:45:36
PeerJ. 2024 Jan 12; 12:e16764
oa_package/89/11/PMC10789189.tar.gz
PMC10789193
38126491
Risk assessment in heart failure (HF) is very challenging, encompassing many data points such as NYHA class, clinical history, comorbidities, clinical test parameters, biochemical markers, and adherence and tolerance to guideline-recommended medications. 1,2 Risk assessment is fundamental in advanced HF to support the decision to provide the most appropriate therapy for a given patient, ranging from heart transplantation to long-term LVAD or palliative care. 1-3 Several scoring systems, such as the Heart Failure Survival Score (HFSS), Seattle Heart Failure Model (SHFM), Metabolic Exercise Cardiac Kidney Index (MECKI), and Meta-analysis Global Group in Chronic Heart Failure (MAGGIC), have proven unsatisfactory, particularly in the high-risk patient group. Cardiopulmonary exercise test (CPET) parameters are considered in the HFSS (peak VO2) and in the MECKI score (predicted peak VO2 and VE/VCO2 slope); NYHA class is part of the SHFM and MAGGIC. 4-6 Pedro Engster et al., in "Incremental Role of New York Heart Association Classification and Cardiopulmonary Exercise Test Indices for Prognosis in Heart Failure: a Cohort Study," 7 published in this issue, evaluated the added value for risk assessment of the subjective NYHA classification over the objective Weber classification, which is based on the peak VO2 value. They studied an adult HF population (n=834) assessed at a Brazilian tertiary center, with ejection fraction (EF) below 50% (median EF = 32%), 30% of ischemic etiology, on guideline-recommended HF medications, well balanced between the sexes (42% women) and across NYHA classes, except for NYHA class IV (only 29 patients). They found a gain in prognostic assessment of all-cause mortality risk when the two types of data were considered together. The physician-assigned NYHA class and the CPET-derived Weber class were each stratified as "favorable" (NYHA I or II and Weber A or B) or "adverse" (NYHA III or IV and Weber C or D). Patients with one favorable and one adverse class were defined as "discordant". They also studied the impact of favorable and adverse classifications for the VE/VCO2 slope and the percent-predicted peak VO2 (PPVO2), classifying patients as favorable when the VE/VCO2 slope was less than or equal to 36 and PPVO2 was greater than or equal to 50%, and as adverse when the VE/VCO2 slope was above 36 or PPVO2 was below 50%, respectively. As expected, they found that patients with a favorable profile (NYHA classes I-II and Weber classes A and B) had better prognoses than patients with an adverse profile (NYHA classes III-IV and Weber classes C and D). In a multivariate analysis, an increase of one NYHA class and a decrease of 3 mL/kg/min in peak VO2 each significantly increased mortality by 50%. An intermediate prognosis was found in the 299 patients with discordant classifications. Extending the analysis to PPVO2 and VE/VCO2 slope values did not significantly change the prognostic assessment, contrary to what has been reported in many published articles, notably regarding the VE/VCO2 slope, to which a high prognostic impact has been attributed. The authors concluded that the physician-assigned NYHA class and CPET measures provide complementary prognostic information, showing that both parameters have independent prognostic impact.
The NYHA class, being subjective, is often criticized, but in this manuscript it proved useful in the "discordant" patients, in whom an intermediate risk could be defined. The conclusions of this manuscript should nevertheless be considered with caution. The assigned NYHA class results from a subjective estimate of the clinical limitations perceived by the patient and by the physician. 8 It is subject to interindividual (patient) and interobserver (physician) variability. It depends on the patient's psychological state and habitual level of physical activity, which may attenuate or amplify complaints, and on the physician's perception of the case. Moreover, physicians often have difficulty choosing a single NYHA class for a given patient; classifications such as I-II, II-III, and III-IV are commonly found in medical records. The assignment of NYHA classes II and III to the patients in this study may therefore have been difficult and introduced classification errors. Regarding the Weber classification, 9 some patient misclassification may also have occurred, since the authors did not demonstrate that only patients who reached a maximal VO2, confirmed by a plateau or drop in VO2 at peak exercise, or by a peak respiratory exchange ratio above 1.10 (a surrogate of maximal or near-maximal VO2), were included. Furthermore, the Weber classification does not take into account the PPVO2 value as a function of age, sex, and lean body mass, consequently placing patients with different degrees of cardiorespiratory fitness (CRF) in the same class. 10 In fact, CRF is better defined by peak VO2, which is a continuous (not categorical) variable recognized for risk stratification together with other CPET parameters, 11 particularly in advanced HF when a peak VO2 below 12 or 14 mL/kg/min has been reached, for patients on or off beta-blockers, respectively. 1,2 In conclusion, Engster et al. 7 demonstrated that jointly considering NYHA and Weber classification data may be a first step toward risk stratification in heart failure with reduced or mildly reduced ejection fraction. This restrictive approach should be enriched with other parameters and biomarkers to become more precise and clinically useful.
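As an aside, the favorable/adverse/discordant stratification described above reduces to a simple rule. A minimal sketch in Python (the function name and encodings are ours, not from the study):

```python
def stratify(nyha, weber):
    """Combine a physician-assigned NYHA class (1-4) with a CPET-derived
    Weber class ('A'-'D') into the profiles used by Engster et al.:
    favorable = NYHA I/II and Weber A/B; adverse = NYHA III/IV and
    Weber C/D; any mismatch is 'discordant' (intermediate prognosis)."""
    nyha_favorable = nyha in (1, 2)
    weber_favorable = weber in ("A", "B")
    if nyha_favorable and weber_favorable:
        return "favorable"
    if not nyha_favorable and not weber_favorable:
        return "adverse"
    return "discordant"

print(stratify(2, "C"))  # -> 'discordant'
```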
CC BY
no
2024-01-16 23:45:36
Arq Bras Cardiol. 2023 Dec 14; 120(11):e20230760
oa_package/2b/6c/PMC10789193.tar.gz
PMC10789196
38069570
INTRODUCTION The polarity of the scalp-recorded EEG relates to the cytoarchitecture of the generative mechanism of EEG. In the field of computational neuroscience, these systems are modeled as follows (Neymotin et al., 2020): pyramidal cells in cortical layer 2/3 (supra-granular) and layer 5 (infra-granular) are the main contributors to the extracellular electric fields. Lemniscal thalamic inputs to these neurons cause current flow up the dendrites toward supra-granular layers, while non-lemniscal or cortico-cortical inputs cause current flow down toward the infra-granular layers. Thus, seen from the cortical surface, synaptic inputs to proximal regions act as a current source, while those to distal regions act as a current sink. When independent component analysis (ICA) (Bell & Sejnowski, 1995; Comon, 1994) is applied to scalp-recorded EEG signals (Delorme et al., 2012; Makeig et al., 1996; Onton & Makeig, 2006), however, a known issue of indeterminacy of IC polarity occurs (Cong et al., 2008). The problem is that a positive spatial weight times positive time series data (e.g., 1 × 1) cannot be distinguished from a negative spatial weight times negative time series data (e.g., −1 × −1). This polarity indeterminacy becomes a practical problem when averaging ICA-decomposed ERPs across ICs, because substantial amplitude reduction can occur if the ERP polarities are randomly determined. Since there is no mathematical solution that ultimately determines the "correct" IC polarities, how to determine the polarities is an engineering question in which analysts should choose the most reasonable solution for each application. Recently, our group reported one such solution using covariance maximization across ICs in the framework of generalized eigenvalue problems (Nakanishi & Miyakoshi, 2023). This solution can align the polarities of multiple ICs, which may be useful at the group-level analysis stage to minimize amplitude cancellations across clustered ICs. However, it does not address how the polarity of a single IC is determined when the computation of ICA converges. The single-IC polarity is, again, indeterminate by nature, and the issue must be solved as an engineering problem. However, the common EEG analysis tools available today, such as EEGLAB, which has promoted the use of ICA on EEG (Delorme & Makeig, 2004), provide solutions without clear documentation of how the IC polarities are determined when the iterative learning process finishes. Thanks to the open-source policy of EEGLAB, we investigated the original code. We found that when the algorithm starts the iterative learning process, all the IC polarities are set to be positive: the polarities of the IC scalp topographies, which are columns of the mixing matrix (in the EEGLAB variables, EEG.icawinv) rendered on scalp electrode locations, are positive dominant; that is, the peak of the scalp topography is positive. However, the validity of this assumption has not been tested. In the current study, we investigated the relation between IC polarities calculated with the initial all-positive condition (EEGLAB's default behavior, with no alternative) and IC quality assessed by established metrics and methods, particularly class labels generated by ICLabel (Pion-Tonachini et al., 2019). The main motivation of the study is to clarify the origin of the IC polarities and to evaluate their influence in terms of physiological validity.
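The sign ambiguity described above can be demonstrated in a few lines: flipping the sign of one column of the mixing matrix together with the corresponding IC activation leaves the reconstructed channel data unchanged. A minimal NumPy illustration with toy matrices (this is not EEGLAB code; the shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))    # mixing matrix: channels x ICs (analogous to EEG.icawinv)
S = rng.standard_normal((3, 500))  # IC activations: ICs x timepoints

X = A @ S                          # channel data reconstructed from the ICs

# Flip the polarity of IC 0 in both the scalp map and the activation.
A2, S2 = A.copy(), S.copy()
A2[:, 0] *= -1
S2[0, :] *= -1

print(np.allclose(X, A2 @ S2))     # True: the data cannot disambiguate the sign
```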
A second motivation, based on more informal observations, is that high-quality brain ICs almost always seem to show positive-dominant scalp topographies. If this hypothetical conclusion is true, the mechanism must be explained. Most critically, it would be of great importance to know whether this tendency comes from artificial settings of ICA or whether genuine physiology plays some role in the process. To answer this question, we used an open-source EEG database (Babayan et al., 2019) that provides over 210 datasets of 62-channel scalp-recorded EEG, allowing us to draw robust observations.
MATERIALS AND METHODS Subjects We used the Leipzig Study for Mind–body‐Emotion Interaction dataset (Babayan et al., 2019 ). The exclusion criteria were as follows. Diagnosis of hypertension without intake of antihypertensive medication. Any other cardiovascular disease (current and/or previous heart attack or congenital heart defect). History of psychiatric diseases that required inpatient treatment for longer than 2 weeks, within the last 10 years (psychosis, attempted suicide, post‐traumatic stress disorder). History of neurological disorders (multiple sclerosis, stroke, epilepsy, brain tumor, meningoencephalitis, severe concussion). History of malignant diseases. Intake of one of the following medications (centrally active medication, beta‐ and alpha‐blocker, cortisol, any chemotherapeutic or psychopharmacological medication). Positive drug anamnesis (extensive alcohol, MDMA, amphetamines, cocaine, opiates, benzodiazepine, cannabis). MRI exclusion criteria (metallic implants, braces, nonremovable piercings, tattoos, pregnancy, claustrophobia, tinnitus, surgical operation in the last 3 months). Previous participation in any scientific study within the last 10 years. Previous or current enrollment in undergraduate, graduate, or postgraduate psychology studies. After further excluding cases of recording failures due to technical problems, a total of 212 datasets were imported and preprocessed. The demographic information of the subjects included is as follows: 134 males; Age, M = 39.3 years (SD 20.3); Handedness, 188 right‐handed, 20 left‐handed, 4 ambidextrous. Note that the age information was provided for every 5 years tier, so the center of the bin was used for the representative value. For example, a participant in a tier of 20–25 years old was registered as 22.5 years old. Ethics statement The original data collection by Babayan and colleagues was carried out in accordance with the Declaration of Helsinki and the study protocol was approved by the ethics committee at the medical faculty of the University of Leipzig (reference number 154/13‐ff). In downloading the dataset, we confirmed that the data were de‐identified. Task Resting‐state tasks with eyes open and closed were used. The recording session was divided into two blocks: The first 8 min of eyes closed block followed by the second 8 min of eyes open block. EEG recordings Scalp EEG was recorded from the following 64 locations according to the international 10–10 system (Oostenveld & Praamstra, 2001 ): Fp1, Fp2, F7, F3, Fz, F4, F8, FC5, FC1, FC2, FC6, T7, C3, Cz, C4, T8, CP5, CP1, CP2, CP6, AFz, P7, P3, Pz, P4, P8, PO9, O1, Oz, O2, PO10, AF7, AF3, AF4, AF8, F5, F1, F2, F6, FT7, FC3, FC4, FT8, C5, C1, C2, C6, TP7, CP3, CPz, CP4, TP8, P5, P1, P2, P6, PO7, PO3, Poz, PO4, PO8, and FCz (the initial reference, which will be recovered at the cost of VEOG; see below). The online EEG data were recorded with a band‐pass filter between 0.015 Hz and 1 kHz with a 2500 Hz sampling rate and 0.1 μ V resolution. EEG preprocessing EEG signals were downsampled to 250 Hz. The canonical electrode locations on the Montreal Neurological Institute head template were used (Collins et al., 1994 ; Evans et al., 1993 ). A high‐pass filter (FIR, Hamming, cut‐off frequency 1.5 Hz@‐6 dB, transition bandwidth 1 Hz) was applied. For the subsequent data cleaning stage, the EEG data were divided into the eyes open and closed data to be cleaned separately. 
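The high-pass filtering step described above can be sketched with SciPy. The cutoff and window follow the text (Hamming-window FIR, 1.5 Hz cutoff at −6 dB, which matches firwin's cutoff convention); the tap count below is our guess derived from the stated 1 Hz transition band, and filtfilt applies the filter in two passes (forward and backward), which differs slightly from a one-pass, delay-compensated FIR application:

```python
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 250.0                       # sampling rate after downsampling (Hz)
numtaps = 827                    # illustrative odd tap count for a ~1 Hz transition band

# High-pass FIR, Hamming window, 1.5 Hz cutoff (pass_zero=False -> high-pass).
hp = firwin(numtaps, 1.5, window="hamming", pass_zero=False, fs=fs)

eeg = np.random.randn(62, 60 * int(fs))      # toy data: channels x samples
eeg_hp = filtfilt(hp, [1.0], eeg, axis=1)    # zero-phase application
```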
The EEGLAB plugin clean_rawdata() was applied with artifact subspace reconstruction with a cutoff threshold of SD = 20 (Anders et al., 2020; Chang et al., 2018; Chang et al., 2020; Kothe & Jung, 2016; Kothe & Makeig, 2013). The separated data were then recombined. The EEG data were re-referenced to the average of all the scalp electrodes plus the initial reference (i.e., continuous zeros) (Kim et al., 2023). In doing so, the initial reference electrode FCz was recovered while VEOG was discarded to keep the data full-ranked. Adaptive mixture independent component analysis (AMICA) was applied (Palmer et al., 2016). During the first 15 of a maximum of 2000 iterations, outlier data points larger than 3 SD were discarded at every iteration. The EEGLAB plugin ICLabel (Pion-Tonachini et al., 2019) was applied to probabilistically classify ICs into the classes brain, eye, muscle, heart, line noise, single-channel noise, and other. ICLabel works as follows: first, over 200,000 ICs from more than 6,000 EEG sessions were collected to form a database; these ICs were then manually labeled using an online crowd-sourced solution; finally, a weighted convolutional neural network learned the relation between the IC properties (IC scalp topography, power spectral density, and autocorrelation function) and the human ratings to build a classifier that generalizes to new inputs. Finally, equivalent current dipole models were fit to each IC scalp topography (i.e., the columns of ICA's mixing matrix rendered on scalp electrode locations) using Fieldtrip (Oostenveld et al., 2011) and a bilateral symmetrical dipole fitter (Piazza et al., 2016). EEG analysis Figure 1 shows a schematic illustration of the data preprocessing pipeline. To determine whether the obtained IC scalp topographies were positive- or negative-dominated, the skewness of the data distribution across scalp electrodes was calculated for each IC using the MATLAB function skewness(). Positive skewness indicates that the obtained IC scalp topography is positive-dominant. The radiality of the fitted equivalent current dipoles was quantified as follows: the radial axes were defined by vectors originating from [0 0 0] of the MNI template brain's coordinate system to the locations of the fitted dipoles, and the angular differences between the dipole moments and these radial axes were calculated. A smaller angular difference indicates an orientation closer to radial. Radial dipole orientation indicates that the estimated current source is localized on the surface of the continua of neocortical gyral crowns (Nunez & Srinivasan, 2006), which supports the physiological validity of the decomposed ICs. Statistics The k-means algorithm (Forgy, 1965; Lloyd, 1982) was used to classify the IC scalp topographies into 12 clusters according to their similarities. The number of clusters (12) was chosen to produce a convenient coarse-grained view in a 3 × 4 grid plot. The clustering was done separately for ICs with positive and negative dominance in their topographies for comparison. No inferential statistics were used to draw conclusions; this is why we chose an EEG database with a relatively large number of datasets (n = 212), to obtain robust observations.
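Both single-IC metrics described above (topography skewness and dipole radiality) reduce to a few lines of linear algebra. A sketch in Python (array shapes and variable names are our assumptions; the study itself used MATLAB):

```python
import numpy as np
from scipy.stats import skew

def topography_dominance(icawinv):
    """Skewness of each IC scalp map (columns of the mixing matrix,
    channels x ICs). Positive skewness -> positive-dominant topography."""
    return skew(icawinv, axis=0)

def radiality_deg(dip_pos, dip_mom):
    """Angle (degrees) between a dipole moment and the radial axis, i.e.,
    the vector from [0, 0, 0] of the template head to the dipole location.
    0 degrees corresponds to a perfectly radial orientation."""
    r = dip_pos / np.linalg.norm(dip_pos)
    m = dip_mom / np.linalg.norm(dip_mom)
    cos_angle = np.clip(np.dot(r, m), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

# Toy example: a moment parallel to the position vector is radial (~0 deg).
print(radiality_deg(np.array([40.0, -20.0, 50.0]),
                    np.array([4.0, -2.0, 5.0])))
```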
RESULTS A total of 13,144 ICs (62 ICs × 212 subjects) and the corresponding IC scalp topographies were generated. The distribution of the skewness is shown in Figure 2. The descriptive statistics revealed that 90.9% of the ICs showed positive dominance and positive skewness, while 9.1% showed negative dominance and negative skewness. About 3.5% of the ICs showed a mismatch between the sign of the skewness and the dominance of the IC scalp topography, which confirms that the strategy of using skewness to determine the dominant polarity was mostly successful. It became clear that more than 90% of ICs have positive dominance in their scalp topographies. This was expected, as the initial conditions for these polarities are hard-coded to be positive. Thus, only 9.1% of ICs flipped their polarities as a result of the full ICA process. In the next step, we compared the rates of the IC classes determined by the ICLabel algorithm (Pion-Tonachini et al., 2019). The results are shown in Figure 3. More than 50% of the positive-dominant ICs were classified as 'Brain', while less than 35% of the negative-dominant ICs were classified as 'Brain'. In contrast, negative-dominant ICs showed generally higher rates for non-brain classes than positive-dominant ICs. The result indicates that negative-dominant ICs are more frequently associated with poor quality in brain signal decomposition. To visually confirm the differences in the scalp topographies between the positive- and negative-dominant ICs, the obtained IC scalp topographies were clustered into 12 clusters using k-means. Figure 4 shows the results. The noticeable difference in this visual comparison is the angle of the dipoles: the positive-dominant ICs seem to have radial dipoles, whereas the negative-dominant ICs seem to have tangential dipoles. To quantify this visual impression, we visualized the probability density on the plane spanned by the dipole angle (defined as the deviation from a radial line) and the residual variance of the IC scalp topographies relative to the theoretical projection from the estimated dipole. The results are shown in Figure 5. As demonstrated in our previous study (Delorme et al., 2012), the residual variance is a measure of the dipolarity of ICs, which has been interpreted as physiological validity. The probability density distribution of the positive-dominant ICs, shown in Figure 5 left, showed a clear unimodal pattern: the peak probability resides below 5% residual variance and within 10-30 degrees of deviation from the radial projection axis. In contrast, the probability density distribution of the negative-dominant ICs was broadly spread over residual variances of up to 30% and deviations of 80-170 degrees from the radially projecting axis. Note also that the region in which the positive-dominant ICs showed peak probability density was nearly empty for the negative-dominant ICs.
DISCUSSION Investigating the IC polarity patterns using a relatively large empirical EEG dataset yielded the following observations: (1) about 91% of ICs showed positive-dominant IC scalp topographies when the initial polarity was set to positive for all ICs; (2) positive-dominant ICs were more associated with brain-originated signals; (3) positive-dominant ICs showed a more radial (peaking at 10-30 degrees of deviation) dipolar projection pattern with less residual variance from fitting the equivalent current dipole. These results support the general view that negative-dominant ICs are a minority with poor signal quality in brain signal decomposition. The final results showed that most ICs retained the initial polarity values. If we set the initial polarities to all negative, we would see 91% of ICs with negative-dominant topographies and polarity-inverted IC activation time series. As the initial polarities determine the final polarities for most ICs, it is meaningless to argue about absolute polarity. Instead, the critical finding is that while the majority of the ICs (91% in the current study) retain the same polarities as the initial values, the remaining ICs do flip their polarities during the process of decomposition, and these polarity flippers are associated with poor signal/decomposition quality. As far as we know, this property of ICA has never been documented. This observation adds a new criterion for evaluating ICs: high-quality signals/decompositions show polarities consistent with the initial values. In the case of the EEGLAB implementation, the initial values are positive-dominant; hence high-quality signals/decompositions are more likely to show radial and dipolar projections with positive dominance. The result also confirms that ICA-based EEG decomposition primarily captures gyral sources. Given the fact that the total area of gyral crowns is about one-third of the entire cortical surface (Standring, 2020), if ICA were equally sensitive to sulcal sources, the result in Figure 5 would have shown another, even higher, peak at around 90°. Our result justifies the view that sensitivity to sulcal sources may not be very important in analyzing human EEG because of (1) cancellation of the electric fields between the two cortices facing each other and (2) the larger distance from the scalp (Nunez & Srinivasan, 2006). The peak density at around 90° with relatively low residual variance in the negative-dominant ICs seems to point to genuine sulcal EEG sources, whose scalp topographies should show both positive and negative peaks. The selection of the reference electrode affects EEG polarity in the scalp-recording case. This reference potential problem may be reasonably addressed by using either an average reference for high-density EEG systems or the REST algorithm (Yao, 2001) for re-referencing (Nunez, 2010). Although ICA results are invariant to the choice of reference electrodes after subtracting mean values topography-wise, verifying IC polarities currently seems to require checking them against known examples, if available. For example, suppose an IC is identified as a significant contributor to the classical P300 in terms of its latency and scalp distribution. In that case, the polarity should be set so that the waveform of the IC ERP also shows a P300, not an N300 (Nakanishi & Miyakoshi, 2023). This empirical workaround may be used as long as well-established examples are available.
In the case of continuous data decomposition, such as resting state, this approach does not work. Although IC polarities do not seem to matter very often for continuous data analyses, it remains a problem for which we have no solution, and we have no principled way to justify a default choice such as starting from all-positive-dominant scalp topographies. It may be worth mentioning that ICA results are invariant to the choice of reference because re-referencing and ICA are both linear operations. One exception is that the mean values across all the electrodes in an IC scalp map can vary depending on the choice of reference. The average reference method forces every IC scalp topography to approach a zero mean; technically, the deviation is controlled to be 1/(number of channels + 1) of the mean value of each IC topography (Kim et al., 2023). For other choices of reference electrode, scalp topographies could be dominated by general positivity or negativity, which appears as "all red" or "all blue" in the conventional color scheme, respectively. Using the average reference is thus one reasonable way to produce IC scalp topographies that are well balanced between positivity and negativity. We speculate on why poor decompositions tend to have negative-dominant scalp topographies as follows. Such poor decompositions do not have unimodal (or, for major tangential sources, bimodal) scalp topographies; in other words, the "residual variances" from fitting the radial (or tangential) equivalent current dipole become high (Delorme et al., 2012), which in turn leads to IC scalp topographies with multiple positive and negative local peaks. In this case, non-positive dominance can be understood as an indicator of poor component quality. Usually, ICs that account for high variance are more likely to reflect brain signals because, in EEG signals, high amplitude generally means high SNR (Nunez & Srinivasan, 2006). ICs that account for low variance usually suffer from poor component quality, and such ICs are always present. However, they account for progressively smaller portions of the data variance, which may be understood as residuals from decomposing the main signals. Perhaps ICA uses those low-variance residuals to work as a complete linear decomposition; we can imagine ICA using them to cancel out residuals and "make ends meet" in the process of linear decomposition. If our speculation is correct, it seems possible to use the relative dominance of ICs, in either amplitude or variance, as an additional evaluation criterion for physiological validity. This viewpoint appears to be missing from conventional studies using ICA. Our study provides partial evidence that ICs with low variance are less reliable in terms of physiological validity, not merely procedural reproducibility, which may be used in future studies to test the validity of ICA and ICASSO (Artoni et al., 2014; Groppe et al., 2009; Himberg et al., 2004; Himberg & Hyvarinen, 2003; Hyvärinen et al., 2001). In conventional ICA applications, there was no explicit consensus that ICs with trivial explained variance also have trivial physiological validity or significance (Delorme et al., 2012; Onton & Makeig, 2006). However, the current study demonstrated that ICs with low variance do not have the same level of physiological validity, at least in terms of the dipole angle analysis.
Because ICA can also be understood as a mode decomposition technique (Friston, 1998), investigating how the quality of decomposition relates to the variance of components seems likely to produce valuable insights in future studies. In conclusion, we clarified that EEGLAB's default ICA sets all the IC polarities to be positive, leading about one in ten ICs to flip its polarity to negative. We found that negative-dominant ICs are associated with poorer data quality: the positive-dominant ICs show highly radial projection patterns with low residual variance from fitting equivalent current dipoles, a pattern that does not fit the negative-dominant ICs. Thus, we determined that EEGLAB biases toward positive polarity in decomposing high-quality brain ICs.
Abstract Independent component analysis (ICA) is widely used today for scalp-recorded EEG analysis. One of the limitations of ICA-based analysis is polarity indeterminacy. It is not easy to find detailed documentation that explains the engineering solutions by which polarity indeterminacy is addressed in a given implementation. We investigated how it is implemented in the case of EEGLAB, as well as the relation between the outcome of the polarity determination and the classification of independent components (ICs) in terms of the estimated nature of the sources (brain, muscle, eye, etc.), using an open database of n = 212 EEG datasets of resting-state recordings. We found that (1) about 91% of ICs showed positive-dominant IC scalp topographies; (2) positive-dominant ICs were more associated with brain-originated signals; (3) positive-dominant ICs showed a more radial (peaking at 10-30 degrees of deviation from the radial axis) dipolar projection pattern with less residual variance from fitting the equivalent current dipole. In conclusion, using EEGLAB's default ICA algorithm, about one out of 10 ICs flips its polarity to negative, which is associated with non-radial dipole orientation and higher residual variance. Thus, we determined that EEGLAB biases toward positive polarity in decomposing high-quality brain ICs. Polarities of independent components (ICs) are indeterminate. We found that about 9% of ICs flip their initial tentative positive polarities to negative polarities when the algorithm converges. The polarity-flipping ICs show less physiological validity. The IC polarity could thus serve as a novel criterion for evaluating ICs. Miyakoshi, M., Kim, H., Nakanishi, M., Palmer, J., & Kanayama, N. (2024). One out of ten independent components shows flipped polarity with poorer data quality: EEG database study. Human Brain Mapping, 45(1), e26540. 10.1002/hbm.26540
CONFLICT OF INTEREST STATEMENT The authors declare no conflicts of interest.
ACKNOWLEDGEMENTS The authors thank Dr. Scott Makeig for suggesting using skewness to evaluate the polarity dominance of IC scalp topographies. MM and HK are supported by NSF BCS2011716 CRCNS US‐Japan Research Proposal: A computational neuroscience approach to skill acquisition and transfer from visuo‐haptic VR to the real‐world and NINDS 5R01NS047293‐16 “EEGLAB: Software for Analysis of Human Brain Dynamics.” MN, MM, and HK are supported by The Swartz Foundation (Old Field, New York). We express our gratitude to Dr. Michael Villanueva for editing English. DATA AVAILABILITY STATEMENT The data that support the findings of this study are available in MPILMBB LEMON Data at https://www.gwdg.de/ . These data were derived from the following resources available in the public domain: Babayan et al. (2019), https://www.nature.com/articles/sdata2018308
CC BY
no
2024-01-16 23:45:36
Hum Brain Mapp. 2023 Dec 9; 45(1):e26540
oa_package/d7/49/PMC10789196.tar.gz
PMC10789208
38087950
INTRODUCTION Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disease associated with progressive upper and lower motor neuron degeneration. ALS involves motor, cognitive and behavioural decline, and death typically occurs as a result of ventilatory failure within 3-5 years of first symptoms (Costello et al., 2021; Evans et al., 2015; Hardiman et al., 2017; Phukan et al., 2012). Up to 50% of people with ALS exhibit evidence of cognitive dysfunction and ~14% reach the threshold for an ALS-frontotemporal dementia (FTD) diagnosis (Phukan et al., 2012). There is no effective treatment for ALS, and there remains an urgent need for cost-effective, reliable biomarkers to quantitatively assess cognitive and motor decline. Whole-brain resting-state electroencephalographic (EEG) studies can provide robust evidence of motor and extra-motor degeneration in ALS. The most recent findings from frequency-domain and source-localisation analyses include increased co-modulation in the fronto-parietal area (θ, γ-band) and decreased synchrony in the fronto-temporal areas (δ, θ-band) (Dukic et al., 2019; Nasseroleslami et al., 2019). Although abnormal functional connectivity has been shown in both sensor and source space, there is limited understanding of the temporal dynamics of brain networks in ALS. Insights into the temporal dynamics of brain networks can be gained by analysing brain 'microstates'. Microstates are defined as transient, quasi-stable electric field configurations that repeat sequentially over time within an EEG recording. Microstate analysis involves identifying recurring topographical patterns of spontaneous neural activity across multiple time points and categorizing the EEG topography at each time point into one of these distinct microstate classes. Microstate transitions were originally attributed to changes in the coordination of synaptic activity (Lehmann et al., 1987). These distinct re-occurring topographies of the scalp electrical potential ('scalp maps') have durations spanning from milliseconds to seconds. Four canonical classes of microstates (labelled A-D) have been repeatedly described and have been associated with well-established resting-state networks (RSNs) in fMRI, based on the estimated brain regions generating each microstate (Michel & Koenig, 2018). Analysing these microstates allows us to investigate changes in the temporal dynamics of brain networks rather than changes in functional connectivity between networks, which is more typically examined in EEG studies (Gschwind et al., 2016). Changes in the properties of microstates have previously been associated with altered states of consciousness (Bai et al., 2021; Bréchet & Michel, 2022; Zanesco et al., 2021) and with neurological or neuropsychiatric conditions (Al Zoubi et al., 2019; Dierks et al., 1997; Faber et al., 2021; Gschwind et al., 2015; Koenig et al., 1999; Michel & Koenig, 2018; Nishida et al., 2013). Alterations in microstate characteristics are thought to represent alterations in the rhythm of neural processes. However, it is the microstates' temporal dependencies that can perhaps give us the greatest insight into how brain function is altered in neurodegenerative diseases like ALS. Neurological conditions seem to alter the brain's functional resting-state transitions, forcing the brain to stay in and/or change to specific functional networks.
By examining the temporal dependencies between microstate sequences, we can investigate how the transitions between functional brain networks are altered in disease. Temporal dependencies are modulated in mood and mental disorders, including FTD (Al Zoubi et al., 2019; Lehmann et al., 2005; Nishida et al., 2013). In Alzheimer disease, in particular, transition patterns appear random, while in healthy controls transitions between specific classes are preferred (Nishida et al., 2013). These findings suggest that EEG microstates have strong potential as a tool for detecting and measuring neural abnormalities in individuals with ALS, particularly as a task-free assessment of cognitive and behavioural function. Microstate computation exploits the activity pertaining to specific brain regions (by clustering EEG topographies), and microstate classes are therefore hypothesised to reflect specific functional networks, as evidenced by studies examining the relationship between resting-state networks and microstates. By quantifying microstate properties, we gain the ability to investigate neural network activity. The purpose of this study was to test whether microstate properties can differentiate ALS and healthy control (HC) groups in standard characteristics (e.g., frequency of occurrence, duration) and temporal dependencies (e.g., transition probabilities and entropy in microstate sequences). This study also examined whether patients exhibit changes in microstate properties over time and whether microstate properties correlate with clinical presentation. To preface our results, resting-state EEG microstate analysis suggests that ALS affects both sensory and 'higher-order' networks, resulting in reduced dynamicity in brain state transitions. Microstate properties may be a useful ALS prognostic marker for cognitive decline and disease outcome.
METHODS Experiment Participants Individuals with ALS and ALS-frontotemporal dementia (ALS-FTD) diagnoses were recruited from the Irish National ALS Clinic in Beaumont Hospital, Dublin, Ireland. ALS diagnoses were based on the revised El Escorial criteria (Ludolph et al., 2015) and the Strong criteria (Strong et al., 2017). Individuals diagnosed with primary lateral sclerosis, progressive muscular atrophy, flail arm/leg syndromes, other medical morbidities, or neurological or neuropsychiatric symptomatology were excluded. Age-matched healthy controls (HC), with no diagnosed neurological or neuropsychiatric conditions, were additionally recruited from an existing volunteer database (Burke et al., 2017). EEG data recorded from 129 individuals with ALS (m: 77%; mean age: 60.89 ± 11.4) and 78 age-matched healthy controls (m: 36%; mean age: 60 ± 12) were analysed. Up to four follow-up sessions were conducted for patients, ~5.4 ± 2.1 months apart; patients attended an average of 2 ± 1.2 recording sessions. Detailed demographic information for the dataset can be found in Note 1 in Data S1. Clinical assessments Individuals with ALS underwent cross-sectional and longitudinal clinical assessments including the revised ALS functional rating scale (ALSFRS-R) (Cedarbaum et al., 1999) (N = 162), King's staging (N = 161, direct assessment; N = 170, with extrapolation from ALSFRS-R scores; Balendra et al., 2014), and ALS-specific behavioural and cognitive measurements (N = 153) (Traynor et al., 2003). Functional clinical evaluation data for the ALS cohort were retrieved from the Irish Motor Neuron Disease Registry (O'Toole et al., 2008; Rooney et al., 2013; Ryan et al., 2018; Traynor et al., 2003). Edinburgh cognitive and behavioural ALS screen (ECAS) (Abrahams et al., 2014) and Beaumont behavioural inventory (BBI) (Elamin et al., 2017) scores were collected as part of parallel ongoing research projects in the Academic Unit of Neurology (Costello et al., 2020, 2021) (please see Note 2 in Data S1). EEG acquisition Resting-state EEG recordings were conducted at the Clinical Research Facility in St James's Hospital, Dublin. The recordings took place in a dedicated recording room, shielded by a Faraday cage against external electric fields. Electrode offsets were kept between ±25 mV. Participants were asked to rest with their eyes open while comfortably seated; a letter X (6 × 8 cm², printed black on white) provided a gaze target. EEG signals were recorded at 512 Hz on a 128-channel BioSemi ActiveTwo system (Amsterdam, Netherlands) (Honsbeek et al., 1998) for three blocks of 2 min. The subjects' wakefulness and well-being were monitored between recordings during a quick visit by the experimenter. EEG pre-processing Pre-processing was performed using MATLAB R2019b software (The MathWorks, 2019). The EyeBallGUI toolbox (Mohr et al., 2017) was used for visual screening and quality inspection of recordings. The Fieldtrip toolbox (version 20190905) was used for the pre-processing steps described below (Oostenveld et al., 2011), and the Microstate EEGlab toolbox (Poulsen et al., 2018) was used to compute the microstates. The pre-processing steps were implemented based on pipelines previously described in publications by our team (Dukic et al., 2019, 2021; Nasseroleslami et al., 2019).
Bad epochs were rejected based on an evaluation of the amplitude, the mean shift, the variance and the band-variance of spectral power against a 3.5 Z-score threshold (Dukic et al., 2017). The EEG signals were downsampled from 512 to 256 Hz. After resampling, a band-pass filter (one-pass zero-phase FIR: 1–97 Hz) and a notch filter (dual-pass third-order Butterworth: 50 Hz, stopband: 1 Hz) were applied. After baseline correction, noisy channels were removed using an algorithm based on both the PREP pipeline (Bigdely-Shamlo et al., 2015) and the work of Kohe (2010). Channels that were removed were interpolated from neighbouring electrodes. Recording sessions with more than 11 channels removed were excluded from the study as they were deemed unreliable. The average number of channels removed was 2.6 ± 6.6 for controls and 3.9 ± 8.6 for patients. A common average reference was applied to the remaining channels. Computation of the EEG microstates To compute microstates, EEG data were low-pass filtered at 30 Hz (zero-phase 'Brickwall' Finite Impulse Response filter, applied in dual-pass form), as commonly recommended in microstate studies (Michel & Koenig, 2018). The computation steps following data pre-processing are represented in Figure 1. The global mean-field power (GFP; representing the spatial standard deviation) was calculated for each participant with a Gaussian weighted moving average as a smoothing method (window of five timepoints, or around 10 ms) (Al Zoubi et al., 2019). Next, EEG topographies were extracted from the signals at 1000 randomly chosen instances of local maxima of the GFP curve (12% ± 2% of the total number of peaks, identified using a peak-finding algorithm). Only 1000, rather than all, GFP peaks were used for each participant to facilitate computation with a relatively large dataset (Poulsen et al., 2018). Topographies at GFP peaks were used because they offer an optimal signal-to-noise ratio; peaks higher than 1.5 SDs above the mean were excluded from the selection, since very high GFP often represents non-neural activity and therefore needs to be rejected. Peaks separated by <10 ms were also excluded (Poulsen et al., 2018), as this minimum peak distance guarantees that all peaks are distinct. The selected EEG topographies were submitted to a modified K-means clustering algorithm, implemented in the Microstate EEGlab toolbox (Poulsen et al., 2018). The algorithm initially defines K microstate prototypes randomly selected from the EEG data. Each EEG sample is assigned to a cluster by minimising the Euclidean distance between the selected EEG maps and the associated prototype. New cluster prototypes are iteratively defined until convergence, or until a maximal number of repetitions (50 in our case) is reached. The algorithm models the signal strength and applies a constraint that only one microstate can be active at a time. It differs from the original K-means algorithm by being polarity invariant (assigning opposite maps to the same cluster). The rationale for this approach is that the scalp potentials measured by EEG are generated by fluctuations in the synchronous firing of neurons; the polarity of the scalp potential field may therefore invert while the same neuronal sources generate oscillations in the brain (Brodbeck et al., 2012; Michel & Koenig, 2018).
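To make the clustering step concrete, the following is a minimal Python sketch of the GFP computation and a polarity-invariant ('modified') K-means iteration. It is a didactic illustration only, not the Microstate EEGlab implementation; the array shapes, function names and the fixed iteration count are our own illustrative assumptions.

import numpy as np

def gfp(eeg):
    # Global field power: spatial standard deviation across channels
    # at each time sample (eeg has shape channels x samples).
    return eeg.std(axis=0)

def normalise_maps(maps):
    # Re-reference each map (column) to its spatial mean and scale it
    # to unit norm, so that only the topography matters.
    maps = maps - maps.mean(axis=0, keepdims=True)
    return maps / np.linalg.norm(maps, axis=0, keepdims=True)

def modified_kmeans(maps, K=4, n_iter=50, seed=0):
    # Polarity-invariant K-means: maps of opposite polarity are assigned
    # to the same cluster by maximising the SQUARED spatial correlation.
    rng = np.random.default_rng(seed)
    maps = normalise_maps(maps)
    proto = maps[:, rng.choice(maps.shape[1], K, replace=False)]
    for _ in range(n_iter):
        corr = proto.T @ maps                  # K x n_maps correlations
        labels = np.argmax(corr ** 2, axis=0)  # squaring ignores polarity
        for k in range(K):
            x = maps[:, labels == k]
            if x.shape[1]:
                # The polarity-invariant 'mean map' of a cluster is the first
                # eigenvector of its channels x channels scatter matrix.
                _, v = np.linalg.eigh(x @ x.T)
                proto[:, k] = v[:, -1]
    return proto, labels

In this sketch, 'maps' would hold the EEG topographies extracted at the selected GFP peaks (one column per peak); the returned prototypes can then be back-fitted to the continuous recordings as described below.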
The K-means algorithm was chosen over the agglomerative hierarchical clustering (AAHC) as it has a shorter computational time, and both algorithms have been shown to result in similar microstates (Murray et al., 2008). The optimal number of clusters (or microstate classes) was selected using a threefold cross-validation approach over candidate numbers of clusters ranging from 3 to 11. Microstate prototypes were identified from two-thirds of the concatenated GFP peaks and backfitted to the remaining data points (the remaining third of the GFP peaks was the test set), allowing for evaluation of the prototypes' performance on the test set using measures of fit such as global explained variance and the cross-validation criterion (Pascual-Marqui et al., 1995). The cross-validation method ensures the stability of the results, i.e., that no microstate cluster merely represents noise. To derive sequences of microstates, the grand-mean prototypes (averaged across groups) were then back-fitted to the original EEG recordings for both the HC and the ALS groups. Each EEG sample was associated with a prototype class using global map dissimilarity. Microstate time courses underwent temporal smoothing (rejection of small segments) to minimise the influence of fast fluctuations, which may be caused by noise. Short microstate segments of <23 ms (6 timepoints at 256 Hz) were reassigned to the next most probable microstate class (Poulsen et al., 2018). While temporal smoothing is beneficial for reducing noise-related artefacts, it is not suitable when investigating temporal dependencies within the microstate sequence. For this aspect of the study, temporal smoothing was intentionally omitted to preserve the inherent temporal structure of the microstate sequence, following the recommendation by von Wegner et al. (2017). EEG microstate analysis After the microstate sequences were computed, two types of analysis were conducted. First, the standard microstate characteristics were extracted, including the global explained variance, occurrence, duration and transition probabilities. Three categories of statistical analyses were conducted on those properties: (1) pairwise comparisons between HC and ALS groups, (2) longitudinal analysis in individuals with ALS over the progression of the disease, and (3) correlation of the cross-sectional and longitudinal characteristics of the microstate sequences with the clinical scores. Second, the temporal dependencies between microstate classes were examined using Shannon entropy and transition probabilities to quantify the predictability and randomness of the microstate sequence (von Wegner et al., 2017). The sequences of microstates were tested for Markovianity of orders 0–2. The time-lagged mutual information between microstates, as well as the stationarity and symmetry of the transition probability matrices, were also assessed. These properties have the advantage of being independent of the method used to compute the microstates (von Wegner et al., 2018). Standard properties of microstates The global explained variance (GEV) measures how well each microstate class can explain the variance in the EEG signal. Basic temporal parameters were determined, including the average duration (ms) of a microstate class, its frequency of occurrence (s⁻¹), and the fraction of time it is active during the recording (i.e., coverage). Transition probabilities were also derived from the sequences of microstates to quantify how often one class precedes another.
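The standard properties listed above can all be derived from a labelled microstate sequence. The following is a minimal sketch of these computations, assuming a sequence of integer labels sampled at 256 Hz; it is our own illustration, not the toolbox code.

import numpy as np

def microstate_stats(labels, fs=256, K=4):
    # Run-length encode the label sequence into segments of constant class.
    labels = np.asarray(labels)
    change = np.flatnonzero(np.diff(labels)) + 1
    starts = np.r_[0, change]
    ends = np.r_[change, labels.size]
    seg_labels = labels[starts]
    seg_len = ends - starts
    stats = {}
    for k in range(K):
        runs = seg_len[seg_labels == k]
        stats[k] = dict(
            occurrence=runs.size / (labels.size / fs),   # per second
            duration_ms=1000 * runs.mean() / fs if runs.size else 0.0,
            coverage=runs.sum() / labels.size,           # fraction of time
        )
    # Transition probabilities between successive (distinct) segments.
    T = np.zeros((K, K))
    for a, b in zip(seg_labels[:-1], seg_labels[1:]):
        T[a, b] += 1
    T /= np.maximum(T.sum(axis=1, keepdims=True), 1)
    return stats, T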
The probabilities were not adjusted for class occurrences or durations, as we chose to report them both independently (Poulsen et al., 2018). Therefore, any observed effects in transition probabilities result from a combination of systematic transition disparities and potential biases explained by occurrences. Statistical analysis Cross-sectional pairwise comparisons. Mann–Whitney U tests were computed for each microstate parameter (coverage, occurrence, duration and transition probability) to compare the HC and ALS cohorts. A 10% adaptive False Discovery Rate (FDR) correction was used to account for the four microstate classes (or 12 transitions between classes), based on the Benjamini and Krieger method (Benjamini et al., 2006) as implemented in the Empirical Bayesian Inference (EBI) toolbox (Nasseroleslami, 2018). The effect sizes were derived from the U-statistics using the rank-biserial correlation coefficient (Cureton, 1956), r = 2U/(n₁n₂) − 1, as well as the area under the receiver operating characteristic curve (Hajian-Tilaki, 2013), AUROC = U/(n₁n₂) (see the illustrative sketch at the end of this subsection). A post-hoc EBI-based estimation of the statistical power was then calculated (Nasseroleslami, 2018). Longitudinal changes. In the ALS group, mixed-effects models were used to examine the changes in microstate parameters and clinical scores (from the ALSFRS-R, ECAS and BBI tests) over time as the disease progressed. Mixed-effects models were implemented with an intercept and a time-related slope, reflecting the rate of change per month (from 5 to 113 months after onset). Mixed-effects models of the microstate parameters included microstate class as a predictor. Subject-specific random effects were included in all models: a random intercept was chosen for the longitudinal model to allow for different baseline values across subjects, and a random slope was chosen to allow for different rates of change over time. Age, gender and site of onset as random effects did not improve the model fit (likelihood ratio test) and were therefore not included in the final models. Education as a random effect was deemed relevant for the ECAS model only: a specific deviation from intercept and slope, representing the level of education, was added (as a random effect) to the model of cognitive performance. The longitudinal model of cognition also contained an additional fixed-effect term to account for the three different versions of the ECAS questionnaire. The mixed-effects parameters were estimated using restricted maximum likelihood. The assumptions of normal distribution, independence and constant variance of the residuals were checked (using the Kolmogorov–Smirnov test [q < 0.05], Ljung-Box Q-test [q < 0.05], Engle's ARCH test [q < 0.05] or diagnostic plots). A rank-based inverse normal transformation was applied in cases where the residuals did not follow a normal distribution (Beasley et al., 2009). To evaluate the linearity of the parameters' progressions over time, quadratic polynomial regression models were estimated per subject (when data from at least three recordings were available). The quadratic coefficients did not significantly differ from zero (q < 0.05), so only first-order models were kept for further analyses. All patients were included in the final models, regardless of the number of recording sessions they attended, as mixed-effects models can accommodate missing data.
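To illustrate the effect-size computations used in the cross-sectional comparisons above, the following sketch derives the rank-biserial correlation and the AUROC directly from the Mann–Whitney U statistic. The group values are simulated placeholders; only the formulas mirror the text.

import numpy as np
from scipy.stats import mannwhitneyu

def rank_effect_sizes(x, y):
    # U counts the pairs in which an observation from x outranks one from y.
    u, p = mannwhitneyu(x, y, alternative="two-sided")
    n1, n2 = len(x), len(y)
    auroc = u / (n1 * n2)   # probability that a random x exceeds a random y
    r = 2 * auroc - 1       # rank-biserial correlation
    return r, auroc, p

# Simulated example: a microstate parameter in 129 patients vs 78 controls.
rng = np.random.default_rng(1)
als = rng.normal(3.2, 0.5, 129)
hc = rng.normal(3.0, 0.5, 78)
print(rank_effect_sizes(als, hc))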
To assess the repeatability of the models, the variances of the linear mixed-effects models were analysed and decomposed to determine the proportion of variance attributed to various sources, including within-person and between-person measures (Rights & Sterba, 2021; Schielzeth & Nakagawa, 2022). Correlations with clinical measures. Spearman rank correlations were computed between the microstate parameters and cross-sectional physical and cognitive clinical scores in the ALS group (survival, ALSFRS-R and ECAS scores at the first timepoint). The correlation between the variables describing the microstate properties and the clinical scores over time was also estimated. We evaluated correlations separately for those with cognitive impairment (ALSci; based on ECAS scores), behavioural impairment (ALSbi; based on BBI scores) and those without cognitive or behavioural impairment, as people with ALS who have extramotor impairments exhibit different changes in functional connectivity (Temp et al., 2021; van der Burgh et al., 2020). To account for the multiple clinical measures, an adaptive FDR correction was applied and the statistical power was estimated using EBI (Nasseroleslami, 2018). Information–theoretical properties to assess temporal dependencies We performed an information–theoretical analysis of the temporal dependencies between microstate classes using Shannon entropy and by interrogating the transition probabilities (extracting their Markov properties, stationarity and symmetry; Figure 2) (von Wegner et al., 2017, 2018). Studying entropy-related properties is a way to determine the predictability of the next microstate class. A sequence in which only one microstate class appears (amongst the four classes labelled A, B, C and D) would represent maximum predictability and therefore minimum entropy (e.g., only B). We then derived the auto-information function (AIF) from the entropy values. The AIF measures the time-lagged mutual information between microstates (it is an analogue of the auto-correlation function for nonmetric data): for a time lag τ, it can be estimated as the difference between the marginal and conditional entropies, I(τ) = H(M(t+τ)) − H(M(t+τ) | M(t)) (see the illustrative sketch below). The less 'uncertainty' there is about the time-lagged microstate M(t+τ) when M(t) is known, the more information is shared between the states and the higher the AIF. The AIF was evaluated for all microstate classes together, as well as the contribution to the AIF of each microstate class (the time-lagged mutual information for each class separately). Then we examined the features of the transition probabilities. We first tested for Markovianity of orders 0–2, to check whether the transition probabilities depend on the current class, the previous class, or the two previous classes of the sequence of microstates, with the null hypothesis of no memory effect beyond the given order. The stationarity of the transition probability matrix was then evaluated based on the homogeneity of non-overlapping blocks of varying lengths. Stationarity means that the frequency of any transition between two classes does not depend on time and would not differ significantly between blocks (von Wegner et al., 2017, 2018). Finally, the symmetry of the transition matrix was assessed to check whether the probability of transitioning from a class Mᵢ to another class Mⱼ was equivalent to the probability of passing from Mⱼ to Mᵢ.
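The entropy-based quantities above can be estimated directly from a label sequence. Below is a minimal sketch of the Shannon entropy and the AIF, computed via the equivalent joint-entropy form of the mutual information; variable names and shapes are our own assumptions.

import numpy as np

def shannon_entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def aif(labels, max_lag, K=4):
    # Mutual information between the microstate label at time t and at
    # time t + tau, estimated from the empirical joint distribution.
    labels = np.asarray(labels)
    out = np.zeros(max_lag + 1)
    for tau in range(max_lag + 1):
        a, b = labels[:labels.size - tau], labels[tau:]
        joint = np.zeros((K, K))
        np.add.at(joint, (a, b), 1)
        joint /= joint.sum()
        pa, pb = joint.sum(axis=1), joint.sum(axis=0)
        # I(tau) = H(M(t)) + H(M(t+tau)) - H(M(t), M(t+tau)),
        # which equals H(M(t+tau)) - H(M(t+tau) | M(t)).
        out[tau] = (shannon_entropy(pa) + shannon_entropy(pb)
                    - shannon_entropy(joint.ravel()))
    return out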
Statistical significance for symmetry, stationarity and Markovianity was estimated using G-tests (i.e., maximum-likelihood ratio tests) with chi-squared null distributions (Al Zoubi et al., 2019; von Wegner et al., 2017, 2018).
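As one concrete instance of these tests, the following sketch implements a G-test of transition-matrix symmetry under the null hypothesis that transitions i→j and j→i are equally frequent. The pairwise expected counts and the degrees of freedom follow the standard symmetry-test construction; this is our reading of the procedure, not the authors' exact code.

import numpy as np
from scipy.stats import chi2

def symmetry_gtest(counts):
    # counts[i, j] holds the number of observed transitions from class i to j.
    K = counts.shape[0]
    g, df = 0.0, K * (K - 1) // 2
    for i in range(K):
        for j in range(i + 1, K):
            expected = (counts[i, j] + counts[j, i]) / 2
            for n in (counts[i, j], counts[j, i]):
                if n > 0:
                    g += 2 * n * np.log(n / expected)
    return g, chi2.sf(g, df)   # G statistic and p-value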
RESULTS Four microstate prototypes identified in HC and ALS cohorts The topographies of the microstate prototypes and the optimal number of clusters identified in both HC and ALS groups (Figure 3) were similar to those conventionally reported in the literature (Michel & Koenig, 2018). The portion of the recordings explained by the four microstate prototypes (i.e., explained variance) was 58% for the HC group and 54% for the ALS group. The four topographies demonstrated spatial correlation between the ALS and the HC groups (Pearson's correlation coefficient: ρ > 0.9), and the distributions of the explained variance did not differ between the two groups (2-sample Kolmogorov–Smirnov test, p = .7). Modulation of microstate properties by ALS Distinct microstate properties between HC and ALS cohorts There were no differences in the GEV distributions of the microstate classes between the ALS (measured at the first timepoint) and control groups after FDR correction (q < 0.1). The occurrences of both microstate classes A and B were higher in the ALS group (Figure 4, occurrence A: p = .03, r = −.2, 1 − β = 0.50, AUC = 0.59; occurrence B: p = .008, r = −.2, 1 − β = 0.65, AUC = 0.60). The coverages of classes A and B were also significantly higher in the ALS group (Figure 4, coverage A: p = .02, r = −.2, 1 − β = 0.53, AUC = 0.59; coverage B: p = .03, r = −.2, 1 − β = 0.48, AUC = 0.59). Microstate class B, in particular, appeared to be the class most affected by ALS. There was an imbalance between microstate classes: the duration of microstate A was significantly higher in the ALS cohort, whereas the duration of class D was significantly lower (Figure 4, duration A: p = .04, r = −.2, 1 − β = 0.41, AUC = 0.58; duration D: p = .02, r = .2, 1 − β = 0.48, AUC = 0.60). The transition probabilities were significantly different between groups for 7 out of 12 transitions (Figure 5). The largest difference between the HC and ALS groups was observed for the transition from microstate C to microstate D (p = .004, r = .3, 1 − β = 0.74, AUC = 0.63). The transition C → D was more frequent in healthy controls. Longitudinal changes of microstate properties in ALS The longitudinal analysis of the microstate properties in the ALS group revealed a significant increase in class B duration (0.05 points per month) and class B GFP (0.02 points per month) over time (Figure 6). The results emphasised the importance of taking into account different baseline values between individuals (using a random intercept) and different rates of change over time (using a random slope). This approach makes it possible to discern the sources of variability effectively. In the longitudinal model of class B duration, the random slope variation accounts for ~1% of the total variance; in addition, roughly 60% of the outcome variance is attributable to person-specific differences at baseline. Similarly, for class B GFP, 4% of the total variance is attributed to random time effects, while 70% is attributed to intercept variation. A summary of the linear mixed-effects models can be found in Table 1. Neither gender, age nor medication had a significant effect on the observed cross-sectional differences in microstate properties between the ALS and HC groups, or on the longitudinal effects in the ALS group (Note 3 in Data S1). Longitudinal changes of clinical measures in ALS The clinical scores were also modelled using a linear mixed-effects model to investigate individual differences in progression (Note 2 in Data S1).
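The random-intercept/random-slope models used in this and the following subsection can be expressed compactly in code. The sketch below fits a longitudinal model of class B duration with the statsmodels library; the long-format table, its column names and the file name are hypothetical, and only the model structure (a fixed time effect plus subject-specific intercepts and slopes, estimated with REML) mirrors the text.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per visit, with columns
# 'subject', 'months' (time since symptom onset) and 'duration_B'.
df = pd.read_csv("microstate_longitudinal.csv")

# Fixed effect of time, plus a random intercept (different baselines) and
# a random slope on time (different rates of change) for each subject.
model = smf.mixedlm("duration_B ~ months", df,
                    groups=df["subject"], re_formula="~months")
fit = model.fit(reml=True)   # restricted maximum likelihood, as in the text
print(fit.summary())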
As expected, significant time effects were observed for each ALSFRS-R subscore (p < .001) (bulbar, lower limbs, upper limbs, respiratory), with a 0.1 to 0.2 point decline per month. The ECAS total scores significantly increased over time (p = .02, 0.2 points per month), while no significant change was observed in the BBI scores (p = .05). Changes in microstate properties are associated with cognitive decline and prognosis We found that microstates are not only affected by the disease; their characteristics are also associated with the level of cognitive decline. People with ALS who had shorter durations of microstate class B tended to have a faster functional decline in the lower limbs. Individuals with a faster decrease in microstate C coverage had a slower decline in gross motor skills (Figure 7). Cognitively and behaviourally impaired participants (ALScbi, n = 69) with lower transition probabilities from microstate A to D and from C to D showed a slower increase in ECAS total scores. ECAS scores generally increase over time due to non-random dropout or practice effects; at the subject level, a weaker practice effect can therefore be interpreted as a sign of cognitive decline. A lower transition probability between microstates C and B was also associated with shorter survival and a faster increase in ECAS total scores (Figure 7). Influence of ALS on temporal dependencies in microstate sequences Memory effects in the sequences of microstates For both HC and ALS groups, there were no long-range memory effects in the microstate sequences, consistent with previous reports (Al Zoubi et al., 2019; von Wegner et al., 2017, 2018). This can be seen in the decay of the periodic peaks of the AIF for time lags larger than 1 s (Note 4 in Data S1). Inspection of the AIF showed that, for lags greater than 1 s, the predictive information carried by previous timepoints is less than 1‰ of that carried by the current timepoint. The Markovianity tests were significant for all orders from zero to two, showing that the microstate sequences of the ALS and HC groups have no low-order Markov (or 'memoryless') property, whereby the past would be unimportant as long as the present is known (order 0: p ≈ 0; order 1: p < 3.6 × 10⁻⁷²; order 2: p < 1.6 × 10⁻²⁶). Information from the current microstate alone (order 0), from the current and previous microstates (order 1), or from the current and two previous microstates (order 2) is not enough to define the transition probability to the next microstate. The rejection of the null hypotheses in the G-tests for a low-order Markov property reveals memory effects stored at least two microstates in the past. Reduced dynamicity of microstate transitions in late-stage ALS In controls and individuals with early-stage ALS, the percentage of people with predominantly non-stationary transition matrices decreased at a similar rate as the block length was increased (where block length is the time window over which the transition probabilities were studied) (Figure 8). Participants in the late stage of ALS (King's stage 4) were more likely to have stationary transition matrices: the frequency of a transition between two classes stays the same in different blocks, thus becoming independent of time.
In ~4 s blocks, significantly more individuals with late-stage ALS (8%) had stationary transition matrices than individuals at earlier stages (1%) (Mann–Whitney U test, p = .0032, FDR at 0.05). Higher stability in the transitions between microstate classes has been interpreted as a reduction in the dynamicity of neuronal connectivity (Al Zoubi et al., 2019; von Wegner et al., 2017). No significant difference was observed between the King's stage <4 groups and the HC group. For 54% of the HC group and 58% of the individuals with ALS, the likelihood of passing from a microstate class Mᵢ to a class Mⱼ was not statistically equivalent to the likelihood of transitioning from Mⱼ to Mᵢ; that is, in most participants the transition matrices were asymmetric. However, only 49% of individuals with late-stage ALS had asymmetric transition matrices.
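For completeness, a minimal sketch of the block-wise stationarity analysis underlying these results is given below: the label sequence is cut into non-overlapping blocks, and a G-test of homogeneity compares each block's transition counts with the pooled transition probabilities. The degrees of freedom follow the classical test for homogeneity of a Markov chain; this is our own reading of the procedure, not the authors' exact code.

import numpy as np
from scipy.stats import chi2

def transition_counts(labels, K=4):
    c = np.zeros((K, K))
    np.add.at(c, (labels[:-1], labels[1:]), 1)
    return c

def stationarity_gtest(labels, block_len, K=4):
    labels = np.asarray(labels)
    n_blocks = labels.size // block_len
    counts = np.array([transition_counts(labels[b * block_len:(b + 1) * block_len], K)
                       for b in range(n_blocks)])
    pooled = counts.sum(axis=0)
    pooled_probs = pooled / np.maximum(pooled.sum(axis=1, keepdims=True), 1)
    g = 0.0
    for c in counts:
        expected = c.sum(axis=1, keepdims=True) * pooled_probs
        mask = (c > 0) & (expected > 0)
        g += 2 * (c[mask] * np.log(c[mask] / expected[mask])).sum()
    df = (n_blocks - 1) * K * (K - 1)
    return g, chi2.sf(g, df)   # H0: transition probabilities constant in time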
DISCUSSION The results of this study demonstrate that the properties of EEG microstates can provide insight into ALS prognosis, particularly the degree of cognitive decline over time. The EEG microstates were examined in a large cohort of people with ALS (n = 129) and healthy controls (n = 78), enabling a cross-sectional analysis. This analysis revealed that the standard properties of microstate classes A, B and D differ between the ALS and control groups (Figure 4), which may indicate dysfunction in the somatosensory and attention networks. There were also significant differences in microstate transitions between the ALS and control groups (Figure 5), suggesting that the normal fluctuations in neural activity are altered in ALS. We also demonstrated that as ALS progresses, the neural dynamics undergo further changes. This is shown by the longitudinal changes we observed in the standard properties of microstates (Table 1, Figure 6) and their temporal dependencies (Figure 8). Participants with late-stage disease showed more symmetry and stationarity in their transition matrices (Figure 8), which could reflect reduced neuronal flexibility (dynamicity in switching between brain microstates). Finally, the correlations between microstate properties and ALS prognosis revealed that a higher duration of class B and a faster increase of class C coverage over time are associated with a slower decline in gross motor skills in ALS. For cognitively and behaviourally impaired patients, lower transition probabilities from A to D, C to B and C to D are specifically associated with cognitive decline. This suggests that the microstate parameters have particular potential for development as prognostic biomarkers for ALS. Changes in microstate properties in ALS We found that four cluster prototypes (Figure 3) explained 54%–58% of the variance and exhibited similar topographies in the healthy control and ALS groups (they were also similar to the maps described in the literature; for a review, see Michel & Koenig, 2018). In studies including more topographies, the four maps initially found in 1999 (Koenig et al., 1999) are usually observed alongside other topographies, independently of age, mental state or neurological condition (Al Zoubi et al., 2019; Custo et al., 2014; Faber et al., 2021; Zanesco et al., 2020). The original A–D labels were kept based on topographical similarity to the initial maps. Consistent with previous studies (Michel & Koenig, 2018), we observed that four microstate prototypes best explained the variance of topographical patterns in independent test data. This cross-validation check of the optimal number of microstates ensured that the microstate prototypes were not representing recording noise (Poulsen et al., 2018). Distinct microstate properties between HC and ALS cohorts The statistically significant increases in microstate class A duration and microstate class B coverage in the ALS group, when compared with healthy controls, are similar to what has been observed in Parkinson disease (Chu et al., 2020) and in multiple sclerosis studies (Gschwind et al., 2016). The increase in class A and B coverage has also been demonstrated in Huntington's disease (Faber et al., 2021), and an increase in class A occurrence has been documented in both schizophrenia (Lehmann et al., 2005) and spastic diplegia (Gao et al., 2017).
The results of previous fMRI-EEG studies suggest that class A originates from the bilateral temporal gyri (Britz et al., 2010), occipital and posterior cingulate areas (Pascual-Marqui et al., 2014) or the sensorimotor cortex (Yuan et al., 2012). Diverse interpretations of microstate class A's functional role have been reported: initially linked with the auditory network (Britz et al., 2010; Custo et al., 2017), it has since been suggested to have a broader involvement including visual processing, given its increased coverage during visualisation-oriented tasks compared with verbalisation tasks (Milz et al., 2017). Both of these interpretations would situate the sources of the class A microstate within the sensory network, which is known to be affected in ALS. A recent review by Tarailis et al. (2023) has further proposed a potential link between microstate class A and varying levels of brain arousal or alertness. Class B is thought to originate in the occipital lobe and is associated with visual function (Britz et al., 2010). Both microstates A and B appear to reflect the activation of sensory networks, as indicated by their modulation in multiple sclerosis (Gschwind et al., 2016) and movement disorders in general (ALS, Huntington's, Parkinson's and spastic disorders). The duration of microstate class D was lower in the ALS cohort than in HC. A high contribution of fronto-parietal areas and anterior/posterior cingulate cortices (Britz et al., 2010; Pascual-Marqui et al., 2014) has been observed during microstate class D, which altogether suggests an association of microstate D with the attention network. Microstate classes C and D have been associated with 'high-order functional networks' (as opposed to somatosensory or motor networks) (Michel & Koenig, 2018). The balance between such microstate classes has been observed to be affected by neuropsychiatric conditions like schizophrenia or FTD (Nishida et al., 2013). While ALS is not primarily classified as a psychiatric disorder, the condition can often present with cognitive and behavioural symptoms. Taken together, the cross-sectional comparisons of microstate properties between the ALS and HC cohorts echo the dual impairment of sensorimotor and cognitive functions in ALS. Longitudinal changes of microstate properties in ALS For individuals with ALS, the duration and the GFP of class B significantly increased, by 0.05 and 0.02 points per month respectively (Table 1). Neither of those properties was significantly different between the HC and ALS groups at the first recording session. Conversely, the microstate properties showing significant cross-sectional differences between the ALS and HC groups did not reveal any longitudinal change. This finding suggests the presence of important neuronal changes early in the disease, leading to distinct microstate properties in the ALS and HC groups, with slower or delayed continuous mechanisms causing changes in other microstate properties. Since early degeneration is usually compensated for by the remaining neuronal networks in neurodegeneration, such slower mechanisms may be compensatory. In ALS, symptoms only become apparent when a resilience threshold is crossed (Benatar et al., 2022; Keon et al., 2021).
Altered microstate dynamics in ALS Previous literature has shown that there are differences in microstate transition probabilities in mood or mental disorders (Al Zoubi et al., 2019; Lehmann et al., 2005) and FTD (Nishida et al., 2013), and we hypothesised that the transition probabilities would also be altered in participants with ALS who exhibited cognitive and behavioural symptoms. As expected, we observed significant differences in microstate dynamics between the ALS and HC groups in 7 out of 12 of the transition probabilities (q < 0.1, FDR correction) (Figure 5). More specifically, we observed that patients switch less frequently from microstate C to microstate D (Figure 5). A previous study on stroke, which reported no significant difference in transition probabilities compared with controls (Hao et al., 2022), suggests that altered temporal dynamics of neural networks are not solely due to structural changes. In this study, we employed the information–theoretical analysis proposed by von Wegner et al. (2017) to further investigate the dynamics of EEG microstates. Our findings align with their results indicating that the microstate sequence does not adhere to a low-order Markov property, suggesting that the next microstate is influenced not only by the current state and the two preceding states, but also by earlier states. Furthermore, our analysis of the auto-information function revealed non-Markovian behaviour for time lags of up to 2 s, consistent with previous research (Al Zoubi et al., 2019; von Wegner et al., 2017), indicating the presence of extended short-range memory effects in the microstate sequences. For the majority of the subjects (HC and ALS cohorts with King's stages <4), the transition matrices were asymmetric. This has previously been interpreted as a sign of 'non-equilibrium' of the neural networks (von Wegner et al., 2017). A lack of symmetry in transition matrices has been interpreted as a positive property, implying the existence of a 'driving force' (if there were no 'driving force' and the neural networks were at equilibrium, the probability of transitioning from one state to a second state would equal the probability of transitioning from the second state back to the first). It is therefore not surprising that the late-stage group (King's stage 4) tended to have more patients with symmetric and stationary transition matrices (Figure 8). The increased number of symmetric and stationary transition matrices observed in late-stage ALS may correspond to the dysfunction of this 'driving force'. The thalamus, in particular, has been described as a key relay of energy and could represent a hypothetical 'driving force' (von Wegner et al., 2017); thalamic involvement has been demonstrated in motor neuron diseases (Chipika, Christidi, et al., 2020; Chipika, Finegan, et al., 2020; Deymeer et al., 1989). The observed change in microstate transitions in late-stage disease could also be explained by the distress individuals with ALS may experience toward the end of their life. A higher ratio of symmetrical and stationary matrices in individuals with mood and anxiety disorders compared with healthy controls has similarly been shown by Al Zoubi et al. (2019), who interpreted it as arising from 'ruminative thoughts'. Increased equilibrium could additionally arise from a reduction in the flexibility of brain dynamics in ALS.
A previous study has shown that the incidence of 'neuronal avalanches', a measure of brain dynamics determined by quantifying aperiodic bursts of neuronal activity diffusing across the brain, was reduced in ALS compared with a healthy control cohort and was associated with disease stage (Polverino et al., 2022). Clinical relevance of EEG microstates The main finding from the analysis of the correlation between microstate parameters and clinical measures was that a lower duration of microstate class B and a slower change in the coverage of class C were significantly associated with faster functional decline in the lower limbs (Figure 7). These measures therefore have potential utility in the prognostic prediction of motor function. We evaluated correlations with clinical scores specifically for subgroups of ALS patients with distinct cognitive profiles, as altered microstate characteristics have been specifically associated with impaired cognition and mental health (Al Zoubi et al., 2019; Dierks et al., 1997; Nishida et al., 2013; Tait et al., 2020). In cognitively and behaviourally impaired patients, lower transition probabilities from A to D and from C to D were additionally associated with cognitive decline. This decline is suggested by the gradual improvement in cognitive performance (measured by ECAS total scores) being slower than the average practice effect. Additionally, a lower transition rate from C to B was associated with shorter survival (Figure 7). The transition probability C → B therefore appears to be a key potential biomarker of ALS prognosis: higher transition probabilities from C to B seem to represent signs of slower decline in ALS. This supports our hypothesis that changes in microstate dynamics could predict the progression of ALS, including cognitive decline. Limitations and future directions The EEG microstate analysis is based on a repeatedly observed phenomenon representing ongoing thought processes. However, there remains a lack of understanding of the neural mechanisms leading to the presence of microstates and their transitions. It remains unclear how microstates actually reflect conscious thoughts, despite new insights on microstates in various states of consciousness (e.g., sleep, anaesthesia, wakefulness) (Bréchet & Michel, 2022) and rough estimations of the brain sources each microstate class originates from (Bréchet et al., 2020; Britz et al., 2010; Custo et al., 2017; Milz et al., 2017; Musso et al., 2010; Pascual-Marqui et al., 2014). The interpretation of microstates' characteristics often relies heavily on estimated brain sources. Previous studies of the brain sources underlying different microstates have reported diverse findings, possibly as a result of differences in methodology and/or a lack of temporal independence (it is difficult to dissociate microstate sources because microstates are a continuous process). This complicates the interpretation of microstate changes (Britz et al., 2010; Mishra et al., 2020; Yuan et al., 2012). Microstates are fundamentally defined based on sensor-space analysis. Therefore, for a precise association with brain sources, other methods can provide more information, such as examining patterns of activation directly in brain networks' functional connectivity. In this study, over-interpretation was carefully avoided by cross-examining microstates' hypothetical generators with paradigm-based studies.
One important consideration is the possible non‐random dropout within the ALS cohort over time, wherein individuals with greater impairments are more likely to be lost to attrition. In the case of longitudinal ECAS scores, the observed increase may not solely be attributed to the practice effect but could also be influenced by artificial inflation of cognitive scores due to the dropout of more impaired participants. However, this potential bias is mitigated when examining correlations between EEG and clinical measures progressions at the subject level, as both are expected to be similarly affected by non‐random dropout. A limitation of the present study is the heterogeneity of onsets and cognitive/behavioural ALS profiles. In future studies, a more continuous collection of data should help to account for a greater number of clinical profiles and we envisage that a comparison of microstates in different ALS subphenotypes will be possible.
CONCLUSION These RS EEG microstate results indicate that ALS impacts both sensory and higher‐order networks. These findings are consistent with the range of motor, respiratory, and cognitive impairments observed in ALS clinical presentations. Temporal dynamics of resting state EEG enable us to further quantify the multidimensional impairments. Importantly, we found reduced dynamicity in brain state transitions, which may occur as a result of declining cognition, repetitive thoughts, anxiety, or neuronal loss. We have shown that changes in microstate properties are associated with cognitive decline and prognosis, making them a promising prognostic marker for ALS.
Abstract Recent electroencephalography (EEG) studies have shown that patterns of brain activity can be used to differentiate amyotrophic lateral sclerosis (ALS) and control groups. These differences can be interrogated by examining EEG microstates, which are distinct, reoccurring topographies of the scalp's electrical potentials. Quantifying the temporal properties of the four canonical microstates can elucidate how the dynamics of functional brain networks are altered in neurological conditions. Here we have analysed the properties of microstates to detect and quantify signal-based abnormality in ALS. High-density resting-state EEG data from 129 people with ALS and 78 healthy controls (HC) were recorded longitudinally over a 24-month period. EEG topographies were extracted at instances of peak global field power to identify four microstate classes (labelled A–D) using K-means clustering. Each EEG topography was retrospectively associated with a microstate class based on global map dissimilarity. Changes in microstate properties over the course of the disease were assessed in people with ALS and compared with changes in clinical scores. The topographies of the microstate classes remained consistent across participants and conditions. Differences were observed in coverage, occurrence, duration and transition probabilities between the ALS and control groups. The duration of microstate class B and the coverage of microstate class C correlated with lower limb functional decline. The transition probabilities A to D, C to B and C to D also correlated with cognitive decline (total ECAS) in those with cognitive and behavioural impairments. Microstate characteristics also significantly changed over the course of the disease. Examining the temporal dependencies in the sequences of microstates revealed that the symmetry and stationarity of transition matrices were increased in people with late-stage ALS. These alterations in the properties of EEG microstates in ALS may reflect abnormalities within the sensory network and higher-order networks. Microstate properties could also prospectively predict symptom progression in those with cognitive impairments. In amyotrophic lateral sclerosis, both static and dynamic properties of resting-state EEG microstates were found to be disrupted, indicating abnormalities within the sensory network as well as higher-order networks. These alterations in microstate properties hold the potential to serve as predictive indicators for symptom progression, particularly in individuals with cognitive impairments. Metzger, M., Dukic, S., McMackin, R., Giglia, E., Mitchell, M., Bista, S., Costello, E., Peelo, C., Tadjine, Y., Sirenko, V., Plaitano, S., Coffey, A., McManus, L., Farnell Sharp, A., Mehra, P., Heverin, M., Bede, P., Muthuraman, M., Pender, N., ... Nasseroleslami, B. (2024). Functional network dynamics revealed by EEG microstates reflect cognitive decline in amyotrophic lateral sclerosis. Human Brain Mapping, 45(1), e26536. 10.1002/hbm.26536
AUTHOR CONTRIBUTIONS Marjorie Metzger, Bahman Nasseroleslami, Orla Hardiman, Niall Pender, Muthuraman Muthuraman, Peter Bede: Conceptualisation. Marjorie Metzger, Stefan Dukic, Roisin McMackin, Eileen Giglia, Matthew Mitchell, Saroj Bista, Emmet Costello, Colm Peelo, Yasmine Tadjine, Vladyslav Sirenko, Serena Plaitano, Amina Coffey, Prabhav Mehra: Investigation (Data acquisition). Marjorie Metzger, Bahman Nasseroleslami: Methodology. Marjorie Metzger: Formal Analysis. Roisin McMackin, Lara McManus, Mark Heverin, Bahman Nasseroleslami, Orla Hardiman: Project Administration. Bahman Nasseroleslami, Orla Hardiman: Resources. Bahman Nasseroleslami, Orla Hardiman, Niall Pender, Muthuraman Muthuraman, Peter Bede: Funding Acquisition. Marjorie Metzger, Stefan Dukic, Bahman Nasseroleslami: Software. Bahman Nasseroleslami, Orla Hardiman: Supervision. Marjorie Metzger: Validation. Marjorie Metzger: Visualisation. Marjorie Metzger: Writing-original draft. Marjorie Metzger, Lara McManus, Bahman Nasseroleslami, Orla Hardiman: Writing-review and editing. FUNDING INFORMATION Funding for this study was provided by the Thierry Latran Foundation (Project award to Orla Hardiman), the Health Research Board of Ireland (HRA-POR-2013-246; MRCG-2018-02), the Irish/UK Motor Neurone Disease Research Foundation (IceBucket Award; MRCG2018-02 and McManus/Apr22/888-791 to Lara McManus and McMackin/Oct20/972-799 to Roisin McMackin), the Irish Research Council (Government of Ireland Postdoctoral Research Fellowship GOIPD/2015/213 to Bahman Nasseroleslami and Government of Ireland Postgraduate Scholarship GOIPG/2017/1014 to Roisin McMackin) and Science Foundation Ireland (16/ERCD/3854 and Royal Society/SFI URF\R1\221917 to Lara McManus). Peter Bede and the Computational Neuroimaging Group are supported by the Health Research Board of Ireland (Emerging Investigator Award HRB-EIA-2017-019), the Irish Institute of Clinical Neuroscience (IICN) – Novartis Ireland research grant, The Iris O'Brien Foundation and The Perrigo clinician–scientist research fellowship. Muthuraman Muthuraman is supported by the German Collaborative Research Centres (DFG-CRC-1193 and CRC-TR-128). CONFLICT OF INTEREST STATEMENT No conflict of interest to disclose.
ACKNOWLEDGEMENTS We would like to thank the Wellcome HRB Clinical Research Facility at St James's Hospital, as well as all the participants, their families and the staff involved in the study. DATA AVAILABILITY STATEMENT The data that support the findings of this study are available from the corresponding author on reasonable request from qualified investigators. Data sharing is subject to the participant's consent and approvals by the Data Protection Officer and the Office of Corporate Partnership and Knowledge Exchange in Trinity College Dublin. The code used to compute the microstates for the analyses described in this article can be found at: https://github.com/atpoulsen/Microstate-EEGlab-toolbox . We additionally adapted the Python code freely available at https://github.com/Frederic-vW/eeg_microstates to MATLAB.
CC BY
no
2024-01-16 23:47:16
Hum Brain Mapp. 2023 Dec 13; 45(1):e26536
oa_package/b4/3f/PMC10789208.tar.gz
PMC10789210
38224539
INTRODUCTION Post-stroke aphasia, a language impairment that affects approximately a third of stroke survivors (Code & Petheram, 2011; Wu et al., 2020), is among the most debilitating cognitive consequences of stroke (Berthier, 2005; Gottesman & Hillis, 2010). While research has shown that multiple neurological, demographic and health-related factors explain some of the variability in language recovery (Johnson et al., 2022; O'Sullivan et al., 2019; Watila & Balarabe, 2015), they do not reliably account for the spectrum of clinical recovery observed. In some individuals with stroke, the quantifiable status of pre-stroke brain tissue integrity may serve as an indicator of susceptibility to more severe impairments, or to diminished recovery potential in the presence of stroke pathology (Appleton et al., 2020; Evans et al., 2022; Umarova, 2017). For example, it has been shown that characteristics of the spared brain tissue, such as an increased total brain age (Kristinsson et al., 2022), a decreased hippocampal volume (Schevenels et al., 2022) or a high number of hyperintense vessels, a marker of abnormal hemodynamic function (Bunker et al., 2022), are negatively associated with language recovery in post-stroke aphasia. These studies corroborate that inter-individual variation in at least some neuroimaging markers that quantify common brain alterations in ageing is associated with functional stroke outcomes and can therefore be included in stroke prediction models. The most prevalent ageing pathology of presumed vascular origin, white matter hyperintensities (WMH), has been identified as a promising proxy measure of cognitive outcomes in ageing and in disease (Prins & Scheltens, 2015). WMH are a radiological marker associated with ageing and cerebrovascular risk factors (Debette et al., 2018; Grosu et al., 2021; Wardlaw et al., 2013) that has been negatively associated with cognitive skills (Camerino et al., 2020; Duering et al., 2014; Hamilton et al., 2021), including skills requiring language, in healthy individuals (Hilal et al., 2021; Jiang et al., 2018) and in individuals with mild cognitive impairment (Jiang et al., 2018). Given that the incidence of WMH increases with age (Longstreth Jr. et al., 1996), they often coincide with ageing-associated cerebral pathologies, for example, stroke (Georgakis et al., 2019; Kim et al., 2020; Schwartz et al., 2018). A recent meta-analysis of over 100 studies showed that the presence and severity of premorbid WMH lesions in individuals with stroke are associated with an increased risk of dementia, functional impairment, stroke recurrence and mortality after stroke (Georgakis et al., 2019). Fewer studies (N = 5) have examined the relationship between WMH burden and aphasia outcomes after stroke. Wright et al. (2018), Johnson et al. (2022) and Vadinova et al. (2023) identified a negative association between the degree of WMH and aphasia outcomes at different stages of recovery. Varkanitsa et al. (2020) found that a greater amount of WMH is negatively associated with response to language therapy, and a study by Basilakos et al. (2019) further showed that a greater amount of WMH predicts a more rapid decline in language skills in chronic aphasia.
Together, these studies suggest that WMH severity, assessed on clinical severity scales [e.g., the Fazekas scale (Fazekas et al., 1987); the Cardiovascular Health Study scale (Manolio et al., 1994)], could represent a clinically meaningful risk factor for poor language outcomes after stroke. While WMH burden is assessed in clinical practice using qualitative severity scales (Fazekas et al., 1987; Manolio et al., 1994; Scheltens et al., 1993), WMH can also be assessed quantitatively by considering their volume (Hairu et al., 2021; Silbert et al., 2008) and anatomical distribution (Clancy et al., 2022; Hawe et al., 2018). Quantitative WMH measures have the potential to produce a more precise and systematic assessment of observed WMH radiological lesions within the entire brain or within specific white matter tracts (Camerino et al., 2020; Duering et al., 2014). As with other cerebral lesions (e.g., stroke, multiple sclerosis), the quantitative assessment of WMH lesions (i.e., volume, anatomical distribution) has revealed associations between cardiovascular pathology and cognition, and has explained variance in cognitive outcomes in stroke cohorts (Bonkhoff et al., 2022; Clancy et al., 2022; Hawe et al., 2018; Röhrig et al., 2022). With respect to the critical role of WMH spatial distribution, converging evidence from lesion-symptom mapping (LSM) studies has shown that some white matter tracts, such as the thalamic radiations and corpus callosum (CC) segments, are most commonly associated with cognitive deficits when affected by WMH (Biesbroek et al., 2016; Biesbroek et al., 2020; Camerino et al., 2020; Duering et al., 2011; Duering et al., 2014; Hilal et al., 2021; Jiang et al., 2018; Lampe et al., 2019; Zhao et al., 2018) (for a review, see Biesbroek et al., 2017). More recent studies specifically investigating WMH lesion loads within the CC provide further evidence that callosal WMH lesions are independently associated with cognitive impairment in individuals with vascular cognitive impairment (VCI) and in healthy ageing individuals (Freeze et al., 2022; Vemuri et al., 2021), suggesting that callosal connections are a strategic location within the white matter network. The CC is a major interhemispheric tract supporting several critical functions, including excitation, inhibition and modulation of neuronal activity (for a review, see Bloom & Hynd, 2005; Innocenti et al., 2022). It is therefore not surprising that damage to this tract can impair any of these critical functions. Callosal connections are not considered to form part of the core left-lateralised language network (Hagoort, 2014; Hickok & Poeppel, 2004; Price, 2012; Saur et al., 2008; Vigneau et al., 2011). However, given that the right hemisphere (RH) has been shown to contribute to language function when the primary network is perturbed (Brownsett et al., 2014; Geranmayeh et al., 2017; Schneider et al., 2022), communication between the hemispheres most likely occurs via interhemispheric white matter tracts such as the CC. While the precise contribution of the RH to language recovery remains unclear and unspecified (for a review, see Gainotti, 2015; Turkeltaub, 2015), neuroimaging evidence suggests that either language homologues and/or RH nodes within domain-general networks play a role in supporting language recovery (Brownsett et al., 2014; Chang & Lambon Ralph, 2020; Geranmayeh et al., 2017; Hope et al., 2017; Stefaniak et al., 2022; Xing et al., 2016).
Therefore, interhemispheric connections, such as the CC, may play a vital role in enabling RH compensation or upregulation in aphasia, and WMH within these structures may reduce the effectiveness of compensatory or upregulatory processes reliant on these connections. Given this potential role of interhemispheric connections in enabling RH engagement during language recovery processes (Brownsett et al., 2014; Geranmayeh et al., 2017; Stefaniak et al., 2022), the contribution of callosal WMH to aphasia recovery warrants further investigation. Present study In this study, we investigated whether quantitative measures of early subacute WMH lesions explained variation in aphasia outcomes after stroke. We then considered whether these measures could serve as a surrogate measure of pathological processes contributing to diminished structural brain integrity, impacting recovery potential in aphasia. WMH burden has been shown to increase slowly after stroke (Clancy et al., 2022): a recent meta-analysis found an average progression of 1.74 ml in WMH volume over 2.7 years (Jochems et al., 2022). In our study, WMH were measured on average 27 days after the stroke, so increases in WMH burden from pre-stroke levels would be negligible. As a result, we propose that early subacute WMH burden, acquired within 6 weeks of the stroke event, reflects premorbid WMH levels. We considered the contribution to spoken comprehension and production outcomes of the total WMH volume and of the WMH lesion load within callosal segments, including the forceps minor (CC-Fmin), forceps major (CC-Fmaj) and body (CC-Body). First, we hypothesized that volumetric assessment would explain a proportion of the variation observed in post-stroke aphasia recovery. Second, we hypothesized that WMH lesion load within callosal segments would be negatively associated with language outcomes in aphasia.
METHODS Participants This study retrospectively analysed data from two post-stroke aphasia studies. Inclusion criteria were (a) a single left-hemisphere stroke (ischaemic or haemorrhagic), confirmed on the radiologist's report, (b) the presence of aphasia, diagnosed using the Western Aphasia Battery (WAB; Kertesz & Raven, 2007), (c) English as primary language, (d) availability for an initial assessment at 2–6 weeks post-stroke onset, and (e) ability to provide informed consent. Exclusion criteria were (a) history of neurological disorder, mental illness, head trauma, alcoholism or cerebral tumour, (b) contraindications to magnetic resonance imaging (MRI), (c) severity of deficits precluding informed consent, (d) severe dysarthria or apraxia of speech (determined by a speech pathologist), and (e) severe hearing impairment. Apraxia of speech was assessed on the Apraxia Battery for Adults (Dabul, 2000). The study received approval from the University of Queensland Medical Research Ethics Committee and the Queensland Health Human Research Ethics Committee. Language assessment Participants underwent language assessment at the early subacute stage (mean 27 days post-onset, range: 17–47 days). For each participant, spoken language comprehension (SpoComp) and spoken language production (SpoProd) performance were measured. The SpoComp score was derived from a combined score of the Auditory Word, Sentence and Paragraph comprehension subtests from the Comprehensive Aphasia Test (CAT) (Swinburn et al., 2004). The SpoProd score was derived by combining the Fluency and Naming (nouns and verbs) CAT subtests (Swinburn et al., 2004) and a picture description task (Kertesz & Raven, 2007) (see Supplementary material for details of the picture description task and analysis). Three individuals with aphasia were excluded from the SpoProd analysis as their picture description task was not administered at the early subacute stage. CAT subtests (i.e., comprehension, fluency, naming) (Swinburn et al., 2004) were double scored by experienced speech pathologists blinded to neurological and demographic data. Interrater reliability, available for speech production scores (fluency, naming) for one of the included studies, was 70% (calculated as the percentage of identical t scores). Neuroimaging Neuroimaging protocol Early subacute neuroimaging data were acquired between 2 and 6 weeks post-stroke. Data from the first study (N = 13) (Roxbury et al., 2019) were collected using a Siemens 3 Tesla Trio scanner (Siemens, Erlangen) with a 12-channel head coil. During the same scanning session, a high-resolution 3D T1-weighted anatomical image [MP-RAGE; TR 1900 ms; TE 2.4 ms; TI 900 ms; (0.9 mm)³ resolution] and a 2D T2-weighted FLAIR image (TE 87 ms, TR 9000 ms, TI 2500 ms, 36 slices of 3 mm, 0.9 × 0.9 mm in-plane resolution) were acquired for each subject. Data from the second study (N = 24) were collected using a Siemens 3 Tesla MAGNETOM Prisma scanner (Siemens, Erlangen) with a 20-channel head coil. A high-resolution 3D T1-weighted anatomical image [MP2RAGE; Marques et al., 2010; TR 4000 ms; TE 2.91 ms; TI1 700 ms; TI2 2220 ms; FA1 6°; FA2 7°; (1 mm)³ resolution] and a 3D T2-weighted FLAIR image [TE 386 ms, TR 5000 ms, TI 1800 ms, (1 mm)³ resolution] were acquired for each subject. Neither inspection of the neuroimaging data nor the radiologist's report documented cases of midline shift.
Corrected stroke lesion volume and lesion load to cortical language ROIs Stroke lesion masks were manually delineated on high-resolution T1-weighted sequences in patient space using MRIcron (https://www.nitrc.org/projects/mricron) by two authors (K.G. and V.V.) and verified by two senior authors (K.M. and S.B.), blinded to behavioural and demographic data. The stroke lesion volume was calculated in native space for subsequent analyses. T2-weighted FLAIR images were used to verify lesion location, particularly for haemorrhagic stroke. Stroke lesion load was defined as the ratio of lesion volume to intracranial volume, assessed for each patient before normalization. For region of interest (ROI) analyses, T1-weighted sequences and lesion masks were normalized to MNI space using enantiomorphic normalization (Nachev et al., 2008) in Clinical Toolbox (2012, https://www.nitrc.org/projects/clinicaltbx/) within SPM (version 12, https://www.fil.ion.ucl.ac.uk/spm/) in Matlab (version 2017, https://www.mathworks.com/). Normalized lesion masks were manually revised (K.G. and V.V.) where necessary and once again verified by two authors (K.M. and S.B.). Disagreements during the manual drawing process were resolved through group discussion (K.G., V.V., S.B. and K.M.). Cortical language ROI masks in MNI space were created in DSI Studio software (https://dsi-studio.labsolver.org), using the automated anatomical labelling atlas 3 (Rolls et al., 2020). Given the limited number of participants, the number of cortical ROIs was restricted to the four cortical regions most frequently associated with aphasia: Broca's area (pars triangularis + pars opercularis), insula, superior temporal gyrus (STG) and a combined region of both angular gyrus (AG) and supramarginal gyrus (SMG). Finally, the lesion 'load', or proportion of each cortical ROI damaged, was calculated by inclusively masking each cortical ROI with the normalized stroke lesion and dividing by the total volume of the ROI. WMH volume and proportion of WMH load to callosal ROIs For WMH volume and WMH lesion load to critical callosal ROIs, T2-weighted FLAIR sequences were resliced and co-registered to T1-weighted sequences and normalized by applying the T1-weighted transformation matrix using fourth-degree B-spline interpolation (see above for T1-weighted normalization). Upon evaluation of the accuracy of normalization in the ventricular region, we determined that seven participants' imaging data were not accurately normalized in this region. In three patients with asymmetrical ventricles, re-normalization that excluded the entire left hemisphere resulted in improved normalization. In four patients, a 'younger' brain-age profile was noted compared with the rest of the cohort, and so their data were re-normalized using a standard MNI template derived from a younger cohort, from Clinical Toolbox (2012, https://www.nitrc.org/projects/clinicaltbx/). This adjustment resulted in a modest improvement in registration in two cases. Re-normalization to this younger standard template was not sufficient to improve normalization in the other two cases. All statistical analyses were therefore repeated after excluding these two participants, with no changes in the overall results. We have included these sub-analyses in the Supplementary material.
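The proportional 'lesion load' computation described above reduces to an overlap ratio between two binary images in the same space. A minimal sketch using the nibabel library is given below; the file names are hypothetical, and both images are assumed to be already normalised to MNI space on the same voxel grid.

import numpy as np
import nibabel as nib

def lesion_load(lesion_path, roi_path):
    # Proportion of the ROI volume overlapped by the binarised lesion mask.
    lesion = nib.load(lesion_path).get_fdata() > 0
    roi = nib.load(roi_path).get_fdata() > 0
    return np.logical_and(lesion, roi).sum() / roi.sum()

# Hypothetical file names: a WMH mask and a right-hemisphere CC-Fmin ROI.
print(lesion_load("wmh_mask_mni.nii.gz", "cc_fmin_rh_mni.nii.gz"))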
WMH volume and proportion of WMH load to callosal ROIs

For WMH volume and WMH lesion load to the callosal ROIs, T2-weighted FLAIR sequences were resliced and co-registered to the T1-weighted sequences and normalized by applying the T1-weighted transformation matrix with fourth-degree B-spline interpolation (see above for T1-weighted normalization). Upon evaluation of the accuracy of normalization in the ventricular region, we determined that seven participants' imaging data were not accurately normalized in this region. In three patients with asymmetrical ventricles, re-normalization that excluded the entire left hemisphere resulted in improved normalization. In four patients, a 'younger brain-age profile' was noted compared to the rest of the cohort, and their data were therefore re-normalized using a standard MNI template derived from a younger cohort, available in Clinical Toolbox (2012, https://www.nitrc.org/projects/clinicaltbx/). This adjustment resulted in a modest improvement in registration in two cases. Re-normalization to this younger standard template was not sufficient to improve normalization in the other two cases. All statistical analyses were therefore repeated after excluding these two participants, with no changes in the overall results. These sub-analyses are included in the Supplementary material.

WMH lesions were manually delineated on the normalized T2-weighted FLAIR sequences in MRIcron (https://www.nitrc.org/projects/mricron) by three authors (V.V., F.W., S.B.) and verified and amended as required by a radiologist (L.Z.), blinded to demographic and behavioural information. WMH lesions were traced only on the RH, given the challenges of accurately tracing in the left hemisphere due to the extension of the stroke lesion and associated pathological processes into the ventricular area. WMH volume (i.e., the volume of the total WMH lesion mask) was calculated for subsequent analyses. See the Supplementary material for a detailed description of the WMH lesion delineation. Callosal ROI analyses employed masks created in DSI Studio software (https://dsi-studio.labsolver.org), based on the HCP1065 atlas (Yeh et al., 2018). Three callosal ROIs were selected and revised to include only their RH portions: CC-Fmin, CC-Fmaj, and CC-Body. Finally, the proportion of each callosal ROI affected by WMH was calculated by finding the volume common to both the WMH lesion mask and the callosal ROI (inclusive masking) and dividing it by the total volume of the callosal ROI.

Statistical analyses

Relationships between the demographic and imaging characteristics were explored using Spearman correlations. Given that two cohorts were included, the Wilcoxon signed-rank test was used to test for any differences between the datasets. First, in Step 1, we used two stepwise linear regressions with forward selection to test which neuroimaging variables explained performance on the two dependent variables (SpoComp and SpoProd) (i.e., one regression analysis per dependent variable). The forward selection process starts with an empty model and iteratively adds (and removes) the variables that contribute most to improving the model fit, until no additional variable significantly enhances the fit. The neuroimaging variables, including the stroke lesion variables (i.e., corrected stroke lesion volume, lesion load insula, lesion load Broca's, lesion load AG + SMG, lesion load STG) and the WMH lesion variables (i.e., WMH volume, lesion load Fmin, lesion load Fmaj, lesion load Body), served as independent variables in both regression analyses. Due to the skewed nature of both the stroke and WMH neuroimaging variables, these were square-root transformed. The neuroimaging variables that demonstrated significant explanatory power in Step 1 were retained for inclusion in Step 2. In Step 2, we performed a standard multiple linear regression to test the relative importance of the significant neuroimaging variables (Step 1) and stroke-related demographic variables (i.e., age, sex) to identify the best combination of predictors for the two language outcomes (SpoComp and SpoProd). Our analyses included two a priori steps (Steps 1 and 2) for two outcome measures (SpoComp and SpoProd); the alpha level of each model was therefore set to p = .0125 (0.05/4, Bonferroni correction). The assumptions of linearity, normality of residuals, independence of residuals, and homoscedasticity were all met (see Supplementary material for diagnostic tests and plots). All analyses were conducted in IBM SPSS Statistics for Windows, Version 22.0.
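To make the Step 1 procedure concrete, here is a minimal forward-selection sketch in Python with statsmodels. It is illustrative only: the study used SPSS, whose stepwise procedure also re-tests and can remove already-entered predictors, a refinement omitted here; the variable names and data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_selection(X: pd.DataFrame, y: pd.Series, alpha_enter: float = 0.05):
    """Start from an empty model; at each step, fit one candidate model per
    remaining predictor and enter the predictor with the smallest p-value,
    stopping when no remaining predictor reaches alpha_enter."""
    selected, remaining = [], list(X.columns)
    while remaining:
        pvals = {}
        for cand in remaining:
            model = sm.OLS(y, sm.add_constant(X[selected + [cand]])).fit()
            pvals[cand] = model.pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha_enter:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical data: square-root-transformed imaging predictors for N = 37.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(37, 3)),
                 columns=["sqrt_lesion_vol", "sqrt_wmh_ccfmin", "sqrt_wmh_vol"])
y = 60 - 8 * X["sqrt_lesion_vol"] - 5 * X["sqrt_wmh_ccfmin"] + rng.normal(size=37)
print(forward_selection(X, y))  # expected to enter the two true predictors
```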
RESULTS

All demographic, stroke lesion, and WMH variables and outcomes can be found in Table 1. Overlay maps of all stroke lesions can be found in Figure 1, and of all WMH lesions in Figure 2. Corrected stroke lesion volume did not correlate with WMH volume (p = .762) or with WMH lesion load in any of the callosal ROIs (CC-Fmin: p = .482, CC-Fmaj: p = .971, CC-Body: p = .997). WMH lesion load within the callosal ROIs strongly correlated with total WMH volume (CC-Fmin: r = .88, CC-Fmaj: r = .82, CC-Body: r = .93). Figure 3 illustrates the distribution of transformed stroke lesion loads and transformed WMH lesion loads in the relevant ROIs. Approximately 20% of patients experienced a haemorrhagic stroke. Univariate linear regression analysis indicated a difference in SpoComp scores between the ischaemic and haemorrhagic stroke subgroups (p = .01), but not in SpoProd scores (p = .08). As such, stroke type was included as a covariate in the Step 2 analyses, together with age and sex. Given that the two cohorts had different imaging parameters, we investigated any significant differences between their behavioural or neuroimaging measures. The non-parametric Wilcoxon signed-rank test was used to assess between-cohort differences, given the skewed distributions. Significant between-dataset differences were observed for education, WMH lesion volume, and WMH lesion load CC-Fmaj (see Table 1).

SpoComp score

First, we conducted a stepwise multiple linear regression with forward selection to determine which neuroimaging variables accounted for the variance in SpoComp scores. Corrected stroke lesion volume and WMH lesion load within CC-Fmin were significant predictors of the SpoComp score, together accounting for 40% of the variance in SpoComp scores (see Table 2). No other neuroimaging variables explained additional variance (see Supplementary material for detailed results of Step 1). Next (Step 2), we conducted a standard regression analysis that included the significant neuroimaging variables (i.e., corrected stroke lesion volume, WMH lesion load CC-Fmin) as well as the demographic variables (i.e., age, sex, stroke type). This approach was used to test the relative importance of the individual variables of interest associated with SpoComp scores (see Table 3 for statistics). Here, corrected stroke lesion volume, stroke type, and WMH lesion load within CC-Fmin significantly predicted SpoComp scores (model statistics: R² = .66, F(5, 31) = 11.98, p < .001). Patients with an ischaemic stroke had, on average, 9.78-point lower early subacute SpoComp scores than patients with a haemorrhagic stroke (t = −3.64, p < .001). The model further indicated that a higher WMH lesion load in CC-Fmin was associated with a decrease in SpoComp scores (β = −5.37, t = −2.59, p = .01). Finally, a higher corrected stroke lesion volume was associated with a decrease in SpoComp scores (β = −8.30, t = −4.50, p < .001). Age and sex did not independently explain variance in outcomes.

SpoProd score

To determine which neuroimaging variables accounted for the variance in SpoProd scores, we conducted a stepwise linear regression with forward selection with all neuroimaging variables. Only corrected stroke lesion volume emerged as a significant predictor of the SpoProd score, accounting for only 22% of the variance in SpoProd scores (see Table 2 for statistics). No other neuroimaging variables explained additional variance (see Supplementary material for detailed results of Step 1).
Next, we conducted a standard regression analysis that included the significant neuroimaging variable (i.e., corrected stroke lesion volume) as well as the demographic variables (i.e., age, sex, stroke type) to test their relative importance in relation to SpoProd scores. Here, only corrected stroke lesion volume significantly predicted SpoProd scores (model statistics: R² = .38, F(4, 29) = 4.60, p = .005), accounting for a considerable proportion of the variability in SpoProd scores. A higher corrected stroke lesion volume was associated with a decrease in SpoProd scores (β = −16.57, t = −3.42, p = .001). Age, sex, and stroke type did not explain variance in outcomes.
DISCUSSION

In this study, we investigated the impact of early subacute WMH metrics, a surrogate of premorbid WMH volume and distribution, on the inter-individual variability observed in post-stroke aphasia outcomes. We probed the contribution of the total volume of WMH within the contralesional RH and the WMH lesion load within an empirically motivated tract, the corpus callosum. We illustrate, for the first time, that premorbid WMH distribution negatively impacts early subacute aphasia outcomes after stroke. A key finding of this research is the association between premorbid WMH load within the CC-Fmin and early subacute comprehension impairments, when considered together with stroke lesion and demographic variables. This negative impact of callosal WMH on language in aphasia is consistent with converging evidence from ageing (Camerino et al., 2020; Freeze et al., 2022; Vemuri et al., 2021) and other stroke populations (Zhang et al., 2014) suggesting that WMH disrupt the neural networks that underpin a range of cognitive functions, with behavioural consequences. We found no relationship between the total volume of WMH and aphasia outcomes. Our results indicate that, rather than total WMH volume, the localization of the WMH burden, that is, the extent of the damage within the CC-Fmin, may be a more sensitive biomarker of structural brain health. Furthermore, we provide novel evidence that WMH anatomical distribution affects language skills differently, with the WMH lesion load within CC-Fmin negatively impacting language comprehension, but not language production. This increased vulnerability of spoken comprehension to WMH and other brain-ageing markers highlights that the different domains of language need to be examined separately in order to derive clinically meaningful and sensitive neurobiological biomarkers of language recovery after stroke (Wilson et al., 2023). Finally, our results indicated that individuals who had an ischaemic stroke had lower subacute SpoComp scores than those who had a haemorrhagic stroke. Although somewhat underpowered, this observation likely reflects the nature of the stroke and differences in lesion neuroanatomy between the two groups. Furthermore, we found no evidence of an association between stroke type and subacute SpoProd scores.

Impact of WMH within CC-Fmin

In research on cognitive ageing, the distribution of WMH within several segments of the CC, particularly the CC-Fmin and the CC-Fmaj, has been shown to independently explain the cognitive sequelae of WMH pathology, suggesting that callosal WMH may act as a surrogate marker of cognitive ageing processes (Freeze et al., 2022; Petersen et al., 2022; Vemuri et al., 2021). Our findings indicate that this may also be the case in pathological populations, such as post-stroke aphasia. The CC-Fmin traverses and connects several regions that lie within the bilateral prefrontal cortices (PFCs), including the orbitofrontal, cingulate, and superior frontal cortices. The contribution of the frontal white matter to healthy and pathological ageing has been a topic of extensive research (Brickman et al., 2012; Fjell et al., 2017; Schneider et al., 2022). Recent analyses of microstructural indices in large cohort samples confirmed that frontal white matter is singularly vulnerable to microstructural changes as we age and that these changes are predictive of worse cognitive performance and further cognitive decline (Poulakis et al., 2021; Saboo et al., 2022; Vemuri et al., 2021).
From a network perspective, segments of the frontal white matter may constitute 'key hubs' (Stam, 2014) or 'bottlenecks' (Griffis et al., 2017), that is, brain structures that are preferentially afflicted across disorders, with damage to them being disproportionately associated with psycho-neurological disturbance (van den Heuvel & Sporns, 2019). It is worth noting that the median percentage WMH lesion load within CC-Fmin was 0.61% (range 0.01–8.41%) (see Table 1), indicating that the distribution of WMH lesions within the frontal callosal connections is generally modest, with only a small subset of patients exhibiting more pronounced lesions within the tract. Coupled with the absence of an identified relationship between WMH lesion load in the other CC segments and outcomes, this finding suggests that frontal callosal connections may play a strategic role in cognitive networks, with minor disruptions resulting in behavioural consequences. In the case of post-stroke aphasia, the exact neurobiological mechanisms by which callosal WMH burden predisposes individuals with aphasia to suboptimal recovery remain unclear. The functional roles of the CC segments in language recovery remain to be determined. The CC is seldom affected by ischaemia, given its rich blood supply from various arteries (Chrysikopoulos et al., 1997), and aphasia research has almost exclusively focused on delineating the intrahemispheric white matter of the language network, identifying tracts such as the superior longitudinal fasciculus and the inferior frontal longitudinal fasciculus as vital to successful language recovery after stroke (Ivanova et al., 2016; Zhang et al., 2021). Conversely, more recent studies using whole-brain analyses have identified associations between CC microstructure, not directly affected by the stroke injury, and language outcomes after stroke (Dresang et al., 2021; Hula et al., 2020; Pani et al., 2016), further highlighting the need to consider how the structural integrity of the CC contributes to aphasia recovery. Regions within the PFC have been robustly implicated in many large-scale neural networks, such as the fronto-parietal network, the salience network, the cingulo-opercular network, and the default mode network (for a review, see Menon & D'Esposito, 2022). Several of these neural networks have, in turn, been associated with language abilities in aphasia (Brownsett et al., 2014; Geranmayeh et al., 2016; Geranmayeh et al., 2017). Our findings suggest that callosal WMH may influence compensatory processes, or the upregulation of neural networks supporting the recovery of language. If language abilities in aphasia are, in part, contingent on effective domain-general compensatory or upregulatory processes reliant on spared interhemispheric connections, the disruption that may occur with the presence of WMH lesions within these connections may contribute to suboptimal recovery of language.

Lack of impact of total WMH volume

In clinical practice, WMH were first discerned as regions of diffuse hyperintense signal on T2-weighted FLAIR images and were therefore assessed using qualitative clinical severity scales (Fazekas et al., 1987; Scheltens et al., 1993).
Despite the robust evidence that qualitative WMH scales capture clinically pertinent differences in WMH burden (for reviews and meta-analyses, see Georgakis et al., 2019; Hamilton et al., 2021), recent research highlights the advantage of precise and systematic volumetric analyses in quantifying WMH burden and its relationship with cognition in ageing (Hawe et al., 2018; Kaskikallio et al., 2019). The role of WMH volume in populations with other primary neurological disorders, such as dementia or post-stroke aphasia, is less well characterised. In line with a handful of studies investigating other domains of cognition after stroke (Ferris et al., 2022; Röhrig et al., 2022), we failed to observe an association between premorbid WMH volume and stroke outcomes. Conversely, several larger studies reported significant associations between WMH volume and cognition after stroke (Clancy et al., 2022; Hawe et al., 2018). These inconsistent findings may partially reflect the well-accepted challenge of extreme heterogeneity in clinical stroke cohorts, which can conceal significant predictors of impairment and recovery, especially when complex and partially multicollinear relationships exist between predictors (Boyd et al., 2017). Furthermore, a recent large study by Bonkhoff et al. (2022) (n > 1100) showed that severe WMH predisposed individuals with stroke to worse acute functional outcomes only if the stroke lesions were located in a subset of specific brain territories, which included frontal language network regions, suggesting a complex interaction of stroke and WMH variables. Combined with our results, these findings challenge the assumption that the stroke lesion volume and the WMH lesion volume contribute to the observed behavioural impairment in a simple, linear, and additive fashion for all individuals with aphasia across all timepoints. Future studies must endeavour to minimize variance in patient cohorts by considering not only the volume of the stroke lesion but also its location, and thus increase the power to detect sensitive biomarkers of recovery (Boyd et al., 2017).

Increased vulnerability of spoken comprehension to WMH

The unique effect that WMH may have on the impairment and recovery of different language skills is an exciting avenue for future research. In line with previous studies (Basilakos et al., 2019; Varkanitsa et al., 2020), this study found no effect of WMH on language production skills. Conversely, Wright et al. (2018) identified an association between naming and fluency and WMH burden. These discordant results are not restricted to aphasia research, with similar discrepancies within other cognitive domains, such as memory or attention (Liang et al., 2019; Nakamori et al., 2020). These inconsistencies could partially reflect the range of challenges faced by the field, including insufficient statistical power to detect the effect of WMH burden and large methodological variation across studies (e.g., differences in stroke lesion distribution and severity, different behavioural language measures, different covariates), but could also be partially explained by emerging evidence indicating that some cognitive domains may be more vulnerable to WMH burden than others (Hamilton et al., 2021). When compared to production skills, longitudinal change in spoken comprehension was identified as more vulnerable to the effects of WMH pathology and other brain-ageing markers in a previous study (Kristinsson et al., 2022).
The contribution of executive processing to sentence-level comprehension tasks has been shown in behavioural studies with healthy participants (Caplan et al., 2013; Gajardo-Vidal et al., 2018; Key-DeLyria & Altmann, 2016; Yoon et al., 2015). From an anatomical perspective, some regions within the bilateral PFC have been implicated in both sentence-level comprehension and executive processing tasks in healthy participants (Gajardo-Vidal et al., 2018; Key-DeLyria & Altmann, 2016; Seeley et al., 2007; Walenski et al., 2019) and in people with aphasia (Brownsett et al., 2014; Stefaniak et al., 2021; van Oers et al., 2010). Given the evidence demonstrating that WMH lesions compromise connections that project to the PFC, any functional relationship between language comprehension and executive processing is likely to be more susceptible to the cumulative effects of premorbid WMH burden.

Limitations

While we present a well-controlled investigation of the relationship between WMH volume and distribution and post-stroke aphasia, our findings are limited by the sample size and the merging of two different datasets. This may have impacted the identification of additional associations between volumetric WMH variables and language measures. Post-stroke aphasia represents a network disorder arising from injury within multiple cortical, white matter, and subcortical structures (Thiel & Zumbansen, 2016). Conducting an extensive analysis involving numerous language ROIs falls beyond the scope and statistical feasibility of the present investigation. Specific lesion topography within the language network has been shown to serve as a predictor of different language impairments in aphasia (Crinion et al., 2013; Fridriksson et al., 2018; Wilson et al., 2023). Consequently, larger-scale investigations are required to establish whether a more nuanced assessment of the primary stroke lesion (i.e., assessment of a higher number of cortical and subcortical language network ROIs) can refine our understanding of the impact of premorbid WMH. Second, WMH are a radiological manifestation of global white matter disease that simultaneously affects multiple white matter connections (ter Telgte et al., 2018). WMH damage restricted to a single tract is rarely observed, and as such it is challenging to assign functional roles to specific WMH-affected tracts. However, despite the diffuse nature of WMH, WMH specifically within the CC-Fmin have been frequently associated with cognitive decline in pathological ageing (Biesbroek et al., 2016; Biesbroek et al., 2020; Camerino et al., 2020; Duering et al., 2011; Duering et al., 2014; Freeze et al., 2022; Hilal et al., 2021; Jiang et al., 2018; Lampe et al., 2019; Vemuri et al., 2021; Zhao et al., 2018), making frontal callosal connections an ideal target for future investigations. This study included two datasets with distinct neuroimaging parameters, which undeniably introduces noise into the derived metrics and subsequent analyses. However, it is important to consider this limitation within the wider context of the substantial benefits that can be gained from combining datasets to create adequately large groups for the identification of potential imaging biomarkers. Individual research groups face tremendous challenges in acquiring sufficiently large datasets within heterogeneous phenotype groups.
Most aphasia cohorts, particularly those with early (acute, subacute) behavioural and neuroimaging timepoints, have had limited cohort sizes, seldom surpassing 25–30 participants (Stefaniak et al., 2022; Stockert et al., 2020). The concept of combining datasets has emerged as a potential research direction to reduce research waste and harness the availability of smaller existing datasets (Hayward et al., 2022). Future single- or multi-centre studies with identical neuroimaging parameters are required to confirm and likely refine our findings. Finally, our study comprised a patient cohort with a notable chronological age range (42–86 years), which exhibited distinct 'brain-age profiles'. The ventricular region is susceptible to substantial inter-individual variation in ageing, which can lead to challenges in accurate normalization. As a result, with the application of standard template ROIs, such as the callosal ROIs used in this study, some overestimation and underestimation of the true WMH burden is unavoidable. There is no consensus on the inclusion of the midline section when considering WMH (Duering et al., 2011; Röhrig et al., 2022). We decided to include the midline section in our analysis because of our specific interest in WMH lesion distribution, particularly within the corpus callosum.

Future research

Research into the role of white matter health in aphasia outcomes is in its infancy, and further research is needed to (a) improve the sensitivity and reliability of WMH as biomarkers of stroke outcomes and (b) gain insights into the mechanisms by which recovery processes may be disrupted by the presence and severity of WMH lesions. From a biomarker perspective, the ubiquity of WMH in older age suggests that WMH are not exclusively pathological (Raja et al., 2019), so it is essential to identify quantitative criteria that can more reliably differentiate healthy and pathological WMH burden and distribution in the context of disrupted function in combination with stroke injury. The disproportionate involvement of the CC-Fmin across white matter health research (Biesbroek et al., 2016; Biesbroek et al., 2020; Camerino et al., 2020; Duering et al., 2011; Duering et al., 2014; Hilal et al., 2021; Jiang et al., 2018; Lampe et al., 2019; Zhao et al., 2018) suggests that WMH within frontal callosal connections may constitute a reliable cross-diagnostic proxy of pathological WMH profiles. Our study did not consider the long association fibres of the bilateral, left-asymmetric language network (Forkel & Catani, 2019), because these tracts are less frequently affected by radiographic WMH lesions (Biesbroek et al., 2017). Our study, along with previous research in pathological ageing (Camerino et al., 2020), revealed significant associations between language comprehension and verbal-executive skills and callosal WMH lesions. We suggest that this demonstrates that pathology outside the core language network can contribute to language dysfunction in ageing and in stroke. However, we cannot exclude possible contributions to outcome of WMH lesions within the language network itself, and future research should specifically investigate the distribution of WMH lesion loads within both callosal connections and strategic language-network association fibres.
Given that WMH were identified as localized structural lesions, the exact neurobiological mechanisms by which WMH burden predisposes individuals with aphasia and other stroke survivors (Georgakis et al., 2019) to less favourable recovery remain unknown. Conceptually, WMH load may be inferred to reflect reduced structural brain health, which likely weakens optimal recovery processes (Kristinsson et al., 2022; Umarova, 2017). However, this has not been empirically corroborated, and we are not aware of any study that has complemented structural WMH data with functional network engagement in stroke recovery research to target this hypothesis more explicitly. It has been proposed that bilateral domain-general compensation may underpin language recovery (Brownsett et al., 2014; Geranmayeh et al., 2017; Schneider et al., 2022), and future research can feasibly investigate whether the supportive role of domain-general neural networks is modulated by total or tract-specific WMH burden. Finally, a crucial aspect that warrants additional investigation is the potentially unique relationship between spoken comprehension and ageing neuroimaging biomarkers, such as WMH burden. In a seminal lesion-symptom study, encompassing the largest cohort of individuals with aphasia to date (n > 200), age was identified as contributing to comprehension outcomes in aphasia more than to any other language skill, albeit with a modest effect (Wilson et al., 2023). Given that age is consistently linked to neuroimaging markers of ageing, including WMH burden (Prins & Scheltens, 2015), these findings further suggest that language comprehension may be particularly susceptible to the effects of ageing (Wilson et al., 2023).
CONCLUSION

This study builds on previous findings reliant on qualitative assessments of WMH burden by presenting the first investigation of the relationship between quantitative measures of early subacute WMH, a surrogate of premorbid levels, and comprehensive measures of language in post-stroke aphasia. We extend the robustly replicated finding that callosal WMH play a critical role in pathologically ageing groups (Biesbroek et al., 2016; Biesbroek et al., 2020; Camerino et al., 2020; Duering et al., 2011; Duering et al., 2014; Hilal et al., 2021; Jiang et al., 2018; Lampe et al., 2019; Zhao et al., 2018) and confirm that measures of frontal callosal WMH volume reliably improve the explanation of outcome variability in another pathological group, namely post-stroke aphasia. While WMH topography is rarely considered in stroke (Röhrig et al., 2022) and the assessment of the entire extent of WMH pathology remains the most prevalent WMH measure (Basilakos et al., 2019; Varkanitsa et al., 2020; Wright et al., 2018), our findings argue in favour of tract-specific WMH lesion load indices for explaining variance in outcomes in post-stroke aphasia. From a clinical perspective, frontal callosal WMH may constitute a vital cross-diagnostic imaging biomarker of reduced structural brain health and therefore index suboptimal recovery of language after stroke. The inclusion of callosal WMH, along with additional neuroimaging biomarkers that impact aphasia recovery, may contribute to more reliable predictions and therefore the provision of more meaningful prognoses. Future large-scale studies are required to confirm the predictive role of frontal callosal WMH in the recovery of language comprehension, and the differential susceptibility of some language skills over others.
Abstract

White matter hyperintensities (WMH) are a radiological manifestation of progressive white matter integrity loss. The total volume and distribution of WMH within the corpus callosum have been associated with pathological cognitive ageing processes but have not been considered in relation to post-stroke aphasia outcomes. We investigated the contribution of both the total volume of WMH and the extent of WMH lesion load in the corpus callosum to the recovery of language after first-ever stroke. Behavioural and neuroimaging data from individuals (N = 37) with a left-hemisphere stroke were included at the early subacute stage of recovery. Spoken language comprehension and production abilities were assessed using word- and sentence-level tasks. Neuroimaging data were used to derive stroke lesion variables (volume and lesion load to language-critical regions) and WMH variables (WMH volume and lesion load to three callosal segments). WMH volume did not predict variance in language measures when considered together with stroke lesion and demographic variables. However, WMH lesion load in the forceps minor segment of the corpus callosum explained variance in early subacute comprehension abilities (t = −2.59, p = .01) together with corrected stroke lesion volume and socio-demographic variables. Premorbid WMH lesions in the forceps minor were negatively associated with early subacute language comprehension after aphasic stroke. This negative impact of callosal WMH on language is consistent with converging evidence from pathological ageing suggesting that callosal WMH disrupt the neural networks supporting a range of cognitive functions.

In summary, we investigated the contribution of both the total volume of white matter hyperintensities (WMH) and the extent of WMH lesion load in the corpus callosum to the recovery of language comprehension and production in aphasia after first-ever stroke. WMH lesion load in the forceps minor, but not the total WMH volume, was negatively associated with language comprehension after stroke. Frontal callosal WMH lesions may constitute a valuable cross-diagnostic biomarker of poor cognitive outcomes after stroke.

Citation: Vadinova, V., Sihvonen, A. J., Wee, F., Garden, K. L., Ziraldo, L., Roxbury, T., O'Brien, K., Copland, D. A., McMahon, K. L., & Brownsett, S. L. E. (2024). The volume and the distribution of premorbid white matter hyperintensities: Impact on post-stroke aphasia. Human Brain Mapping, 45(1), e26568. https://doi.org/10.1002/hbm.26568
FUNDING INFORMATION

Financial support for the work was provided by the National Health and Medical Research Council (#1104194), the NHMRC-funded Centre of Research Excellence in Aphasia Recovery and Rehabilitation (#1153236), the Finnish Cultural Foundation (#191230), the Maire Taponen Foundation, the Orion Research Foundation sr, and the Signe and Ane Gyllenberg Foundation.

CONFLICT OF INTEREST STATEMENT

The authors declare that there is no conflict of interest.
ACKNOWLEDGEMENTS

The authors express their gratitude to the study participants and the staff of the participating hospitals in the southeast Queensland region. Open access publishing was facilitated by The University of Queensland, as part of the Wiley–The University of Queensland agreement via the Council of Australian University Librarians.

DATA AVAILABILITY STATEMENT

The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
INTRODUCTION

Skill learning is a critical part of our existence and is even instrumental to survival. It refers to an internal process that benefits from experience or practice and leads to relatively permanent changes in the capability of skilled movement production (Schmidt et al., 2019). To study motor learning effectively, it is important to distinguish between the terms "motor performance," which is an observable behavior, and "motor learning," which is not immediately observable but can be inferred indirectly from the observation of performance under certain circumstances (Magill & Anderson, 2017). It is important to note that the performance observed during practice may overestimate or underestimate the actual amount of learning achieved (Magill & Anderson, 2017). For example, the transient changes in behavior during training with various types of augmented feedback (FB; such as verbal or visual information about performance) may alter or vanish in the absence of such FB and thus overestimate learning. Alternatively, mental and physical fatigue may temporarily suppress performance levels during practice and can thus underestimate the amount of learning. Such temporary changes in behavior do not reflect motor learning, because they are not sufficiently permanent (Salmoni et al., 1984). A possible solution is to set up test conditions in which performance is assessed after these temporary effects have vanished. Although there is no gold standard to characterize the different stages of motor learning, a division between initial and later learning is often made (Coynel et al., 2010; Gentile, 1972, 1987, 2000). During initial learning, the performer explores the most effective strategies and builds neuromuscular patterns. The later stage is characterized by subtle adjustments while the movements become more efficient and consistent, leading to automaticity. The learning rate is typically high during the initial stage and decreases during the later stage, when performance reaches a plateau. In spite of this, learning unfolds as a continuous transition from early to later stages. Various types of FB have been studied to facilitate or optimize motor skill acquisition (Hantzsch et al., 2022; St Germain et al., 2022; Swinnen, 1996). It is commonly agreed that different features of FB, such as its frequency and timing, can be used to guide performance and learning (Salmoni et al., 1984). Specifically, performance is typically better in the presence of concurrent FB (delivered during the ongoing movement), as it provides direct online guidance to adjust performance and minimize errors. Conversely, instant performance is relatively poorer when FB is provided after the completion of the task (i.e., terminal FB), as no direct guidance is provided during task execution itself. But this reflects performance and not necessarily learning effects. Moreover, when the augmented FB is withdrawn, performance may deteriorate. This has been coined the "guidance hypothesis of information FB," suggesting that FB may have a dual function: its presence can boost performance instantaneously, but it can also hamper learning, as assessed under FB withdrawal (FBW) conditions (Salmoni et al., 1984; Schmidt, 1991). While enormous investments have been made during the past decades to understand and optimize motor learning through FB and variations of the training context, recent research has focused on the role of neurometabolites in relation to (motor) learning.
This is inspired by evidence showing the practice-induced formation of new neural connections and/or the strengthening of existing ones through a process known as synaptic plasticity (Carcea & Froemke, 2013). Synaptic plasticity relies strongly on the balance between excitation and inhibition in the brain (Carcea & Froemke, 2013; Dorrn et al., 2010). As the primary inhibitory and excitatory neurotransmitters, gamma-aminobutyric acid (GABA) and glutamate (Glu) play critical roles in synaptic plasticity and learning. The advent of magnetic resonance spectroscopy (MRS) has allowed accurate, in vivo quantification of the concentrations of brain neurometabolites such as GABA and Glu (Puts & Edden, 2012). Because the MRS-measured GABA levels contain a significant contribution from macromolecules (GABA + macromolecules), we refer to the resulting measure as GABA+ levels. Additionally, because it is often not possible to distinguish Glu from glutamine on 3 T MRI systems, we refer to the resulting measure targeted at Glu as Glx (Glu + glutamine). So far, numerous studies have investigated whether MRS-measured baseline (resting state) levels of GABA+ and Glx are linked to various behavioral "performance" metrics pertaining to cognitive, perceptual, and motor tasks (Li et al., 2022; Pasanta et al., 2023). Depending on the type of task, neurochemical levels can be positively (Mikkelsen et al., 2018; Puts et al., 2015) or negatively (Marsman et al., 2017; Takei et al., 2016) related to behavioral performance. However, studies addressing the relationship between baseline GABA+ levels and "learning" are still very scarce. Studies on the associations between MRS-assessed baseline neurometabolite levels and motor learning gain have reported mixed findings. One study that used a serial finger tapping task revealed that lower baseline levels of GABA+ in the primary motor cortex (M1) were associated with greater subsequent motor learning (Kolasinski et al., 2019). However, other motor learning studies making use of a finger sequencing task (Stagg et al., 2011) and a bimanual tracking task (BTT; Chalavi et al., 2018) demonstrated lower baseline M1 GABA+ levels to be related to better initial performance, even though no significant correlations were observed between baseline levels of M1 GABA+ or Glx and learning measures. Our goal was to determine whether baseline levels of neurometabolites are associated with learning gain during different stages of motor learning. This prompted questions about which brain regions to select. Motor skill relies on a distributed network of cortical and subcortical regions, and practice leads to functional changes in these areas, including increases, decreases, or no consistent changes in brain activity (Debaere et al., 2004; Doyon et al., 1998; Puttemans et al., 2005; Rémy et al., 2008). Moreover, the learning-related brain changes also depend on the type of FB that is made available during task performance (Beets et al., 2015; Debaere et al., 2003; Ronsse et al., 2011). This complicates the choice of brain regions for the neurochemical investigation of motor learning. In relation to bimanual skill learning, some brain regions are more activated during the initial stage of learning while others become more active during the later stages (Debaere et al., 2004; Rémy et al., 2008). For example, the primary motor (M1) and secondary motor areas are involved in movement planning and production and remain active throughout learning.
In interaction with M1, the somatosensory cortex (S1) processes task-specific information about self-movement and body position (also known as proprioception). Other regions play a more temporary role, such as the dorsolateral prefrontal cortex (DLPFC), which is typically activated during the initial learning stage while showing a reduction of activity at later stages (Debaere et al., 2004; Rémy et al., 2008). Conversely, the striatum and cerebellum contribute to the automatization process and show enhanced subregional activity during the later learning stages. Furthermore, when augmented visual FB (VFB) is provided during task performance and/or learning, occipital areas are additionally recruited, such as the primary visual cortex (V1) and associated regions, but also the parietal and middle temporal cortex, including MT/V5, a region specialized in motion processing (Debaere et al., 2004; Ronsse et al., 2011). Thus far, MRS studies on motor function have primarily focused on the sensorimotor cortex, while other task-related areas have yet to be investigated. Here, we used a bimanual task consisting of several subtasks. Our working hypothesis was that neurometabolite levels may predict skill learning capability. Our principal aims were to (1) investigate the relationship between baseline (resting state) levels of neurometabolites in a selection of task-related brain areas and motor learning gain during different stages of motor learning, and (2) assess whether the learning gain obtained under different types of augmented VFB is related to baseline levels of neurometabolites in the FB-processing brain areas. Specifically, one group was provided with concurrent augmented VFB (CA-VFB group), implying that participants could see the real-time visual FB of their movement together with the template of the correct movement on a PC screen during task execution (externally- or visually-generated movement). The second group was provided with terminal augmented VFB (TA-VFB group). Hence, these participants could only see their movement trajectory on a PC screen, overlaid on the ideal trajectory, after trial completion; that is, they relied on the emerging proprioceptive information from actual task performance (internally- or proprioceptively-generated movement). Irrespective of the training group, all participants were subjected to the same tests before the start of training (pre-test) and after the end of training (post-test) on each day. In order to eliminate the temporary effects of augmented VFB, these tests were always performed in the absence of any augmented VFB (either during or after task performance) in both groups, to assess true learning gains across days under comparable conditions. First, from a behavioral perspective, we hypothesized that the CA-VFB group would improve rapidly and show better task performance during training than the TA-VFB group, because the former could continuously adjust performance based on the real-time FB during task execution. However, from a learning perspective, we anticipated that the CA-VFB group would perform worse during the no-FB test conditions than the TA-VFB group, because participants of the former group would have become dependent on the visual FB, such that their performance became vulnerable when weaned from this FB. Second, from a neurometabolite-behavioral perspective, we hypothesized that both groups would benefit from higher GABA+ levels in M1 to lay down the distinct subtask representations in motor memory across learning.
Similarly, we predicted that higher baseline DLPFC GABA+ levels would support the build-up of these representations, particularly during initial learning. In view of the differential FB manipulation per group, we hypothesized that higher GABA+ levels in MT/V5 would support performance in the CA-VFB group because of the dominant role of concurrent visual FB during training (vision-based learning). Alternatively, we hypothesized that the TA-VFB group would show higher learning gains with higher GABA+ levels in S1, because the absence of concurrent visual FB would leave them with somatosensory input from S1 as the only source of reliable information for the generation of the movement representations (proprioception-based learning). In general, we anticipated that the role of these neurometabolite levels would be more critical during the early than the late stage of learning, because of the critical role of sensory information during early learning. Finally, we also measured baseline Glx levels in the same brain regions to determine their role in motor learning in comparison with GABA.
MATERIALS AND METHODS

Participants

A total of 57 young adults (28 females; aged 18–34 years; mean ± SD = 25.53 ± 4.04) initially participated in this study. Participants had normal or corrected-to-normal vision and reported no history of neurological disease or psychiatric disorders. Participants were randomly assigned to one of two groups: the CA-VFB group or the TA-VFB group. The demographic information of both groups, including age, gender, and handedness, is presented in Table 1. The groups did not significantly differ with respect to age (independent t-test: t = −0.15, df = 55, p = .88), gender (χ² = 0.15, df = 1, p = .90), or handedness (Wilcoxon rank-sum test: w = 378, p = .64) (Oldfield, 1971). Informed consent was obtained from all participants before they entered the study. The study protocol was in accordance with the Declaration of Helsinki (1964) and was approved by the Ethics Committee Research of UZ/KU Leuven (study number S58333 and its amendment).

Overview of experiment sessions

This study consisted of one screening session and five behavioral training sessions, which were spread across a time window of 7.9 ± 0.5 days (mean ± SD) (Figure 1a). During the screening session (Day 0), participants' handedness was assessed and contraindications to MRI were determined. Then, they were familiarized with the behavioral task as well as the MR scanner environment. Day 0 was followed by 5 days of training on a bimanual task (i.e., Day 1 to Day 5). Each training day started with a pre-test, followed by a training part, and ended with a post-test. The pre-test and post-test were performed in the absence of any type of augmented VFB (i.e., the total FBW condition), while the training part was performed in the presence of visual FB (i.e., the FB condition). The FBW condition was designated as the critical test of learning and was the same for both groups. As such, these test conditions constituted a possible advantage for participants of the TA-VFB group, who were more familiar with such a context during training, as compared to the CA-VFB group, who were deprived of the real-time augmented VFB during the tests. Figure 1b illustrates the stimuli that were presented to the participants during the FB (training) and FBW (pre-/post-test) conditions. On the first (Day 1) and last (Day 5) days of training, the trials were performed inside the actual MRI scanner, while during the remaining training days, the trials were performed inside a mock scanner, in order to mimic the MR scanner environment. We categorized the timing of the experiment into two segments: morning (experiment initiated before 12:00 p.m.) or afternoon (experiment initiated at or after 12:00 p.m.). There was no significant difference in experimental time across the five training sessions between the groups (Table 1).

Bimanual tracking task

BTT description

Participants lay in a supine position inside the actual or mock MR scanner. The task device, which consisted of two dials (diameter of 5 cm) for movement recording, was positioned over the participants' laps and fixated to the lateral ramps of the MRI table (Sisti et al., 2011). Visual stimuli, which consisted of a white dot moving along a blue straight target template, were projected onto either a double mirror placed in front of the participants' eyes (inside the actual MR scanner) or a screen in front of the participants' eyes (inside the mock MR scanner). Participants were instructed to closely track the white dot on the screen by rotating the two dials simultaneously.
The left dial controlled the displacement along the y-axis (clockwise: upward; counter-clockwise: downward) and the right dial along the x-axis (clockwise: right; counter-clockwise: left). Each trial consisted of a preparation phase (2 s), an execution phase (8 s), and an inter-trial interval (ITI) of 2 s (Figure 1b). During the preparation phase, the target template was visualized but no movement was required, and participants were instructed to plan their movement. The start of the execution phase was marked by the appearance of a white target dot that started moving along the target template. During the execution phase, participants were instructed to track the white target dot as accurately as possible, both spatially and temporally.

BTT schedule and FB conditions

On Day 0, the target template consisted of a straight line with an angle of 45°, requiring a rotation of both dials at equal speed. Participants completed a familiarization block of eight trials, including two in each movement orientation (upward-right, upward-left, downward-right, and downward-left). From Day 1 to Day 5, participants in both groups practiced five movements with different frequency ratios (Figure 1c), that is, 3:1, 2:1, 1:1, 1:2, and 1:3 (left hand:right hand), in only one orientation (upward-right, both hands clockwise) during the training and the pre- and post-tests. The frequency ratio of 1:1 refers to the same rotation speed of both hands (less challenging), while the remaining frequency ratios refer to different cooperative speeds of the two hands (more challenging). Accordingly, learning this skill required participants to generate five distinct subtask representations. During the training part, which was performed in the presence of augmented FB (FB condition), participants received a specific type of visual FB according to their group assignment. Specifically, in the CA-VFB group, augmented VFB was provided by displaying the ongoing trajectory of participants' movement in real time on a PC screen (red dot), such that they could adjust their trajectory based on the visual FB. In the TA-VFB group, in contrast, no concurrent visual FB was provided during actual task performance, but participants observed their full movement trajectory after completion of each trial (i.e., during the ITI) to support task acquisition. During the training part, participants completed 120 trials on Day 1 and Day 5, and 240 trials on Day 2 to Day 4, with equal numbers of trials for each ratio. The number of trials was limited on the actual scanning days (Day 1 and Day 5) because of time constraints. On each training day, a pre-test and a post-test (10 trials, including 2 trials/ratio) were administered before and after training. During the pre- and post-tests, no augmented FB was provided during or after the trial to either of the groups (FBW condition). This served as the ultimate test condition for the assessment of learning, because the temporary effects of FB provision were eliminated and both groups were tested under exactly the same conditions.

BTT analysis

Behavioral data were recorded and analyzed with LabVIEW. The x- and y-coordinates of the target dot and of the participants' cursor positions were sampled at 100 Hz. Offline data analysis was carried out using MATLAB 2021. The performance accuracy of each trial was assessed by calculating the tracking deviation (TD): for each trial, the distance between the target dot and the participant's cursor position was measured at each sampled time point and subsequently averaged across the trial.
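The TD measure thus reduces to a mean Euclidean distance over the samples of a trial. A minimal NumPy sketch is given below; the actual analysis was implemented in LabVIEW/MATLAB and may differ in detail, and the template and noise here are synthetic illustrations.

```python
import numpy as np

def tracking_deviation(target_xy: np.ndarray, cursor_xy: np.ndarray) -> float:
    """Mean Euclidean distance between target and cursor positions,
    computed at each sampled time point of one trial and then averaged.
    Both arrays have shape (n_samples, 2), holding (x, y) at 100 Hz."""
    return float(np.mean(np.linalg.norm(target_xy - cursor_xy, axis=1)))

# Synthetic example: an 8-s execution phase at 100 Hz (800 samples)
# along a straight-line template, with noisy cursor tracking.
t = np.linspace(0.0, 1.0, 800)
target = np.column_stack([t, t])  # illustrative 1:1-ratio template
cursor = target + np.random.default_rng(1).normal(0.0, 0.05, (800, 2))
print(tracking_deviation(target, cursor))
```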
The behavioral data from six participants were excluded because either they were not able to complete the whole behavioral training program for personal reasons (n = 4) or their behavioral performance was identified as an outlier (n = 2). Accordingly, for the behavioral analysis, we proceeded with the complete datasets obtained from 51 participants (CA-VFB group [n = 25]; TA-VFB group [n = 26]). Among these 51 participants, there was no difference between the CA-VFB and TA-VFB groups with respect to age (independent t-test: t = 0.04, p = .96), gender (χ² = 0.02, p = .89), or handedness (Wilcoxon rank-sum test: w = 290, p = .49).

Magnetic resonance spectroscopy

MRI data acquisition

MRI data were acquired using a 3 Tesla Philips Achieva scanner with a 32-channel receiver head coil (University Hospital Leuven, Gasthuisberg). At the beginning of the MR session, a high-resolution T1-weighted anatomical image was acquired using a three-dimensional turbo field echo (3DTFE) sequence (TE = 4.6 ms, TR = 9.7 ms, 1 mm³ voxel size, field of view = 256 × 242 × 182 mm³, 182 sagittal slices, scan duration ≈ 6 min). MRS data were acquired using the MEGA-PRESS sequence (Edden & Barker, 2007; Mescher et al., 1998) (TE = 68 ms, TR = 2 s, 2 kHz spectral width). In light of the pivotal role of the left hemisphere in controlling bilateral movements (Merrick et al., 2022), all MRS voxels were placed in the left hemisphere for all participants. Considering the shape and dimensions of each region of interest and based on previous studies, voxel dimensions were set to 30 × 30 × 30 mm³ for the M1 volume of interest (VOI) (Maes et al., 2018), whereas the dimensions of the DLPFC and MT/V5 voxels were set to 40 × 25 × 25 mm³ (Greenhouse et al., 2016) and the dimensions of the S1 voxel were set to 25 × 40 × 25 mm³. For the DLPFC, S1, and MT/V5 voxels, 160 averages were acquired (scan time = 11 min 12 s). However, since the number of averages can be reduced without affecting data quality for the M1 voxel (Mikkelsen et al., 2018), 112 averages were acquired for the left M1 VOI (scan time = 8 min). ON and OFF spectra were acquired in an interleaved fashion, corresponding to an editing pulse at 1.9 or 7.46 ppm, respectively. Prior to each MRS acquisition, an automatic shimming procedure was performed. For each MRS VOI, 16 unsuppressed water averages were acquired within the same VOI using identical acquisition parameters. MRS VOIs were positioned on a subject-by-subject basis using anatomical landmarks (Figure 2). The M1 VOI was placed over the hand knob of the motor cortex and in line with the cortical surface in the sagittal plane (Yousry et al., 1997). Similarly, the S1 VOI was first placed over the hand knob of the motor cortex and then moved in a posterior direction until it covered the postcentral gyrus. This voxel was also aligned with the cortical surface in the coronal plane. For the DLPFC voxel, the center of the voxel was first positioned in the axial slice above the superior margin of the lateral ventricles. In this slice, the DLPFC voxel was placed at one third of the anterior-to-posterior distance of the brain, centered between the lateral and medial wall of the hemisphere (Maes et al., 2021; O'Gorman et al., 2011).
Afterward, it was also visually inspected whether the DLPFC voxel properly covered the middle frontal gyrus. For placement of the MT/V5 VOI, the anatomical slices were first screened from lateral to medial in the sagittal view, and the center of the voxel was then placed at the end of the middle temporal gyrus (the junction between the temporal and occipital cortex), while ensuring that the lateral sides were in line with the cortical surface in the sagittal and axial planes. Figure S1 shows the heatmap of the locations of the MRS VOIs and the MRS spectra obtained from these VOIs in the two groups. Of note, MRS requires the use of relatively large voxels to ensure an acceptable signal-to-noise ratio (SNR) (Mullins et al., 2014). Therefore, since the M1 and S1 VOIs are localized in the vicinity of each other, some overlap was expected. Importantly, the center of each VOI was placed independently, ensuring that the overlap between VOIs was reduced. Following the completion of data collection, we quantitatively inspected the extent of the S1–M1 overlap in each participant by calculating the volume of the S1–M1 overlap and dividing it once by the volume of the M1 voxel and once by the volume of the S1 voxel. Results of this analysis revealed that the mean overlap rate with the M1 VOI was 46% in the TA-VFB group and 44.5% in the CA-VFB group, and the mean overlap rate with the S1 VOI was 49.9% in the TA-VFB group and 48.2% in the CA-VFB group. Importantly, the overlap rates were not statistically different between groups (Table 2 and Figure S2). On Day 1 and Day 5, MRS data from S1 and MT/V5 were collected at three timepoints: before (resting state), during (task-related), and after (resting state) the training. Additionally, on Day 1, before the start of the training, additional MRS data were acquired from M1 and the DLPFC during the resting state. Please note that the MRS data obtained during and after the behavioral task on Day 1, as well as the MRS data obtained on Day 5, will not be discussed in the current manuscript.

MRS data processing

MRS data were analyzed using the Gannet toolkit (version 3.2.1) (Edden et al., 2014). In the first step, data were frequency-and-phase-corrected by applying spectral registration (Mikkelsen et al., 2020). The OFF spectra were subtracted from the ON spectra, and the resulting difference spectrum was fitted between 4.2 and 2.8 ppm using a three-Gaussian function. The water signal was fitted using a Lorentz-Gaussian model and was used as the reference. Subsequently, the MRS voxels were co-registered to the individual anatomical image, and Statistical Parametric Mapping (version 12) was used to segment the brain tissue inside the VOIs into different tissue fractions (gray matter, white matter, and cerebrospinal fluid). These tissue fractions were used to correct the obtained GABA+ levels for partial volume effects, under the assumption that GABA is absent in cerebrospinal fluid and has a concentration that is twice as high in gray as in white matter (Harris et al., 2015, equation 5). Finally, as there was no reason to assume any differences in brain parameters between the two groups of young adults, GABA+ levels were normalized to the average voxel composition of both groups combined (Harris et al., 2015, equation 6). Data quality was assessed by visual inspection of the spectra for lipid contamination and poor water suppression, and by examining the fit error and SNR (Mullins et al., 2014).
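In simplified form, the tissue-fraction corrections described above scale the measured level by the voxel's effective GABA-containing tissue fraction and then rescale to the sample's average voxel composition. The Python sketch below deliberately omits the compartment-specific water visibility and relaxation terms of the full Harris et al. (2015) equations, and the numbers are hypothetical.

```python
import numpy as np

ALPHA = 0.5  # assumed WM/GM GABA concentration ratio (Harris et al., 2015)

def alpha_correct(gaba_meas, f_gm, f_wm):
    """Partial-volume (alpha) correction, simplified: GABA is assumed
    absent in CSF and half as concentrated in WM as in GM. The full
    Harris et al. (2015) Equation 5 also includes per-compartment
    water visibility and relaxation terms, omitted here."""
    return gaba_meas / (f_gm + ALPHA * f_wm)

def normalize_to_group(gaba_corr, f_gm, f_wm):
    """Rescale alpha-corrected levels to the average voxel composition
    of the whole sample (simplified analogue of Harris et al., 2015,
    Equation 6), making values comparable across participants."""
    return gaba_corr * (np.mean(f_gm) + ALPHA * np.mean(f_wm))

# Hypothetical M1 values for three participants (institutional units)
gaba = np.array([2.9, 3.1, 2.7])
f_gm = np.array([0.42, 0.47, 0.40])  # gray-matter fraction in the voxel
f_wm = np.array([0.45, 0.41, 0.48])  # white-matter fraction in the voxel

print(normalize_to_group(alpha_correct(gaba, f_gm, f_wm), f_gm, f_wm))
```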
Ultimately, GABA+ levels were obtained from all four brain VOIs of 57 participants. However, for the Glx levels, the measurement from the left M1 voxel of one participant was excluded due to insufficient data quality. An overview of the MRS data quality measures is provided in the supplementary Table S1. Statistical analysis Statistical analyses were carried out using R (4.1.2). First, we assessed whether the assumptions of parametric statistical tests, such as normality and homogeneity, were met. If so, parametric statistical tests were used. If not, nonparametric alternatives were used. BTT data analysis BTT performance First, we used a t‐test to investigate whether initial performance (Day 1 pre‐test) differed between groups. Then we assessed whether behavioral performance improved with training and whether the two groups improved differently. To do so, the BTT performance data during the FB‐supported condition (training) were analyzed using a nonparametric alternative of a two‐way mixed 2 (Group) × 5 (Day) ANOVA model. Furthermore, to test learning gains, the data during the FBW condition (pre‐ and post‐tests) were analyzed using a nonparametric alternative of a three‐way mixed 2 (Group) × 5 (Day) × 2 (Pre–Post) ANOVA model using “nparLD” in R (http://www.R-project.org) (Noguchi et al., 2012, 2020). In addition, to investigate whether final performance in the short‐term or long‐term (Day 1 post‐test or Day 5 post‐test) differed between groups, additional t‐tests or a nonparametric alternative of the t‐test were performed. BTT learning We established different measures of learning, referring to performance during FBW conditions only. First, initial learning gain was calculated as the performance difference between the pre‐test and post‐test on the first day. Second, later learning gain was calculated as the performance difference between the post‐test on the first day and the post‐test on the last day. Finally, long‐term (or total) learning gain was calculated as the performance difference between the pre‐test on the first day and the post‐test on the last day. To investigate whether learning gains at each learning stage differed between groups, additional t‐tests or a nonparametric alternative of the t‐test were performed. To assess the associations among initial, later and long‐term learning gains, Pearson or Spearman correlation analyses were carried out between the learning measures in each group. Bonferroni correction was used to correct for the six comparisons (p corr = p‐value × 6, alpha level set at p corr = .05). MRS data analysis To investigate whether baseline GABA+ levels differed between groups and across brain regions, we used a 4 × 2 (VOI × Group) two‐way mixed ANOVA in which VOI served as a within‐subject factor and Group served as a between‐subject factor. Since the Glx levels were not normally distributed, we used the nonparametric alternative of the two‐way mixed ANOVA. MRS‐BTT regression analysis We used multiple linear regression analyses to investigate whether the baseline (resting state) levels of the neurometabolites (obtained on Day 1 prior to training) could predict the behavioral progress under FBW conditions in the combined groups as well as in each group separately.
It has been recommended that, if an interaction between a continuous variable and another variable (continuous or categorical) is being tested in a regression analysis, the continuous variable(s) should be centered to avoid multicollinearity issues, which could result in inflated standard errors. Therefore, we used the mean‐centered neurometabolite levels in the multiple regression analyses in the combined groups.
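To make the centering step concrete, the sketch below shows how such a model could be set up. The original analyses were run in R; this is only a minimal Python analogue (using statsmodels) under assumed variable names, with the outcome computed as in the learning measures described above, and the input file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per participant, with a group label,
# pre-/post-test scores, and baseline GABA+ levels per VOI.
df = pd.read_csv("mrs_behavior.csv")  # assumed file
df["initial_gain"] = df["day1_post"] - df["day1_pre"]  # cf. learning measures

# Mean-center each continuous predictor before forming interaction terms,
# so that main effects and interactions are less collinear.
for voi in ["gaba_m1", "gaba_s1", "gaba_dlpfc", "gaba_mt"]:
    df[voi + "_c"] = df[voi] - df[voi].mean()

# Group main effect, centered GABA+ main effects, and Group x GABA+ interactions
model = smf.ols(
    "initial_gain ~ C(group)"
    " + gaba_m1_c + gaba_s1_c + gaba_dlpfc_c + gaba_mt_c"
    " + C(group):(gaba_m1_c + gaba_s1_c + gaba_dlpfc_c + gaba_mt_c)",
    data=df,
).fit()
print(model.summary())
```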
RESULTS Behavioral findings Behavioral data Initial performance (i.e., Day 1 pre‐test) was not significantly different between the groups (t = 0.023, p = .98) (Figure 3c, Table 3), indicating that the groups started at the same level before training. Table 4 lists the mean and standard deviation of the behavioral performance of each group on different days. Table 5 lists the results of the statistical analyses on the behavioral data obtained under the FB and FBW conditions. FB‐supported performance condition During the FB condition (with FB manipulation), the nonparametric alternative of the 2 × 5 (Group × Day) mixed ANOVA revealed a main effect of Day (p < .001), suggesting an overall improvement of performance over the course of training. Additionally, the significant main effect of Group (p < .001) indicated that the overall performance was better in the CA‐VFB group as compared with the TA‐VFB group (Figure 3a). This was anticipated because the CA‐VFB group could rely on the concurrent real‐time FB to steer performance online. Furthermore, the significant Group × Day (p < .001) interaction reflected that the performance of the CA‐VFB group was better than that of the TA‐VFB group at the beginning of training, but this difference became less pronounced toward the end of the training because the TA‐VFB group showed further improvement over the course of training (Figure 3a). Performance under FBW condition as a test of learning In the FBW condition (absence of augmented FB), the nonparametric alternative of the 2 × 2 × 5 (Group × Pre‐Post × Day) mixed ANOVA revealed a main effect of Day (p < .001), indicating an overall improvement of performance over the course of training (Figure 3b), and a main effect of Pre‐Post (p < .001), indicating that overall performance was better at post‐test as compared with pre‐test (Figure 3d). The interaction effect of Day × Pre‐Post (p < .001) was also significant, indicating that the performance improvement decreased across training days (Figure 3d). Additionally, a main effect of Group (p < .001) was observed, indicating that the overall performance of the TA‐VFB group was better than that of the CA‐VFB group during the FBW condition (Figure 3b). This is not surprising because the TA‐VFB group members were more familiar with such performance conditions during training, which better resembled the test conditions under FBW. Furthermore, we found a significant Group × Day interaction (p < .001), suggesting that the rate of performance improvement across training days differed between groups; that is, the TA‐VFB group progressed rapidly at the beginning, whereas the between‐group difference narrowed in the later phase as the CA‐VFB group made relatively greater progress (Figure 3b,d). A Group × Pre‐Post interaction (p < .001) was also observed, indicating a greater performance improvement in the TA‐VFB group as compared to the CA‐VFB group after training. We also observed a significant interaction effect of Pre‐Post × Day × Group (p < .03) (Figure 3d), reflecting that the daily progress (i.e., the difference between pre‐test and post‐test) across training days differed between groups. More specifically, as compared with the CA‐VFB group, the TA‐VFB group showed greater daily progress during the first three training days but did not further improve performance during the last two training days.
However, the CA‐VFB group showed slower but more continuous progress over all five training days. Results of the post‐hoc analyses comparing the behavioral performance of each group across the five training days are reported in the supplementary Table S2. Performance and learning measures during different stages of learning The statistics comparing behavioral performance at pre‐ and post‐tests and learning gains between groups are reported in Table 3. The initial learning outcome (i.e., Day 1 post‐test) was not significantly different between the groups (t = −0.879, p = .384) (Figure 3c), indicating that the groups ended at a similar level after initial training. The final learning outcome (i.e., Day 5 post‐test) was significantly different between the groups (w = 90, p < .001) (Figure 3c), indicating that the TA‐VFB group outperformed the CA‐VFB group after long‐term training. Additionally, the initial learning gain (difference between Day 1 pre‐test and post‐test) was not significantly different between groups (w = 386, p = .257). The later learning gain (t = 2.45, p = .018) and long‐term learning gain (w = 439, p = .03) were significantly larger in the TA‐VFB group as compared to the CA‐VFB group. No significant correlations were found between the initial and later learning gains in either group (CA‐VFB group: r = −.27, p corr > .05; TA‐VFB group: r = −.15, p corr > .05). However, significant correlations were found between the initial and long‐term learning gains in both groups (CA‐VFB group: r = .83, p corr < .001; TA‐VFB group: r = .82, p corr < .001). These results imply that the participants who made the most behavioral progress on the first training day also improved the most over long‐term training. Additionally, no significant correlations were found between the later and long‐term learning gains in either group (CA‐VFB group: r = .18, p corr > .05; TA‐VFB group: r = .39, p corr > .05). MRS results Differences in regional baseline neurometabolite levels Table 6 lists the mean and standard deviation of the baseline GABA+ and Glx levels in the four investigated MRS VOIs and also reports the results of the 4 × 2 (VOI × Group) two‐way mixed ANOVA comparing the neurometabolite levels between the two groups and VOIs. GABA+ levels Results of the 4 × 2 (VOI × Group) two‐way mixed ANOVA revealed a significant main effect of VOI. Post‐hoc analyses indicated lower GABA+ levels in the DLPFC as compared to the M1 and S1, and lower GABA+ levels in the MT/V5 as compared to the other voxels of interest. However, there was no significant difference between GABA+ levels in the M1 and S1 (Figure 4a and supplementary Table S3). Furthermore, the main effect of Group was not significant, indicating that baseline GABA+ levels were not significantly different between groups. Additionally, the interaction effect of Voxel × Group was also not significant, indicating that the difference in GABA+ levels between VOIs did not differ significantly between groups (Table 6). Glx levels Results of the 4 × 2 (VOI × Group) nonparametric alternative of the two‐way ANOVA revealed a significant main effect of VOI. Post‐hoc analyses indicated lower Glx levels in the S1 as compared to the DLPFC and MT/V5, and lower Glx levels in the M1 as compared to the other voxels of interest. However, no significant difference was found between Glx levels in the DLPFC and MT/V5 (Figure 4b and supplementary Table S3).
Furthermore, the main effect of Group was not significant, indicating that baseline Glx levels were not significantly different between groups. Additionally, the interaction effect of Voxel × Group was also not significant, indicating that the difference in the Glx levels between VOIs was not significantly different between groups (Table 6 ). Predicting learning gains based on the baseline GABA+ levels To investigate whether resting‐state GABA+ levels in the four VOIs could predict subsequent learning gains and whether the predictive value of baseline GABA+ levels on learning is dependent on the FB type, a series of multiple regression analyses were built with the following factors: (1) Group (TA‐VFB group, CA‐VFB group); (2) GABA+ levels in four VOIs (i.e., DLPFC, M1, S1, MT/V5); and (3) the interactive effects between GABA+ levels and Group. Initial learning gain Table 7 summarizes the results of the multiple regression analysis predicting initial learning gain under the FBW condition. We observed that GABA+ levels in the M1 ( β = .50, t = 2.99, p = .005) and the DLPFC ( β = −.51, t = − 2.30, p = .027) independently contributed to predicting the initial learning gain in the TA‐VFB group. Furthermore, as shown by the interaction effects, there was a significant group difference with respect to the effect of GABA+ levels in the M1 ( p = .038), DLPFC ( p = .011), and S1 ( p = .009) on predicting the initial learning gain. Figure 5 shows the significant interaction effects between neurometabolite levels and Group, obtained from the multiple regression analyses. Therefore, we further investigated the predictive value of baseline GABA+ levels in each group separately (Table 7 ). Results of the multiple regression analysis in the TA‐VFB group revealed that baseline GABA+ levels in the M1 positively predicted initial progress ( β = .68, t = 3.64, p = .002), whereas GABA+ levels in the DLPFC negatively predicted initial progress ( β = −.53, t = −2.80, p = .011). This suggested that higher GABA+ levels in the M1 were associated with higher initial learning gains while higher GABA+ levels in the DLPFC were associated with lower initial learning gains in the TA‐VFB group. Results of the multiple regression analysis in the CA‐VFB group showed that GABA+ levels in the S1 could positively predict initial learning gain in this group, suggesting that higher GABA+ levels in the S1 were associated with higher learning gains ( β = .44, t = 2.20, p = .04). The associations between the GABA+ levels in different brain areas and initial learning gain are visualized in Figure 6 (left side). Later learning gain The multiple regression analysis predicting later learning gain (under the FBW condition) is summarized in Supplementary Table S4 . These results indicated that later learning gain was not significantly predicted by the proposed factors ( F (9,41) = 1.128, p > .05). Long‐term learning gain Table 8 summarizes the results of the multiple regression analysis predicting long‐term learning gain. Moreover, this model suggested that GABA+ levels in the M1 and DLPFC voxels contributed independently to long‐term learning gain ( p = .007 and p = .017, respectively) in the TA‐VFB group. Furthermore, as shown by the interaction effects, there was a significant group difference with respect to the effect of GABA+ levels in the M1 ( p = .01), DLPFC ( p = .012), and S1 ( p = .015) on long‐term learning gain. 
Figure 5 shows the significant interaction effects between neurometabolite levels and Group, obtained from the multiple regression analyses. Therefore, we further investigated the predictive value of baseline GABA+ levels in these voxels in each group separately (Table 8 ). Results of the multiple regression analysis in the TA‐VFB group revealed that baseline GABA+ levels in M1 positively predicted long‐term learning ( β = .62, t = 3.21, p = .0042), whereas GABA+ levels in DLPFC negatively predicted long‐term learning ( β = −.54, t = − 2.78, p = .011). Results of the multiple regression analysis in the CA‐VFB group showed that GABA+ levels in the S1 positively predicted long‐term learning ( β = .48, t = 2.26, p = .036). The associations between the GABA+ levels in different brain areas and long‐term learning gain are visualized in Figure 6 (right side). The findings obtained for initial and overall long‐term learning gain show a converging pattern. Predicting learning gains based on baseline Glx levels To investigate whether baseline Glx levels in the four VOIs predicted learning gains and whether this depended on the FB type, a series of multiple regression analyses were built including the following factors: (1) Group (TA‐VFB group, CA‐VFB group); (2) Glx levels in four VOIs (i.e., DLPFC, M1, S1, MT/V5); and (3) the interactive effects between Glx levels and Group. Initial learning gain The multiple regression analysis indicated that initial learning gain could not be significantly predicted by the proposed factors ( F (9,40) = 1.518, p > .05) (Table 9 ). Later learning gain The multiple regression analysis revealed that later learning gain could not be significantly predicted by the proposed factors ( F (9,40) = 1.524, p > .05) (supplementary Table S5 ). Long‐term learning gain The multiple regression analysis revealed that overall long‐term learning gain could not be significantly predicted by the proposed factors ( F (9,40) = 1.268, p > .05) (Table 10 ).
DISCUSSION MRS measures of neurometabolite levels were obtained to investigate the relationship between baseline levels of GABA+ and Glx in four motor learning‐related brain areas and the behavioral progress made at different stages of motor learning. Over the course of 5 days, participants were trained on a bimanual task and received augmented VFB either during the execution of the task (CA‐VFB group) or after the completion of the trial (TA‐VFB group). At the behavioral level, participants who were trained with after‐trial visual FB (i.e., the TA‐VFB group) outperformed those who were trained with online visual FB (i.e., the CA‐VFB group) when learning was assessed (FBW). At the neurochemical‐behavioral level, initial and long‐term motor learning progress was positively predicted by GABA+ levels in the M1 but negatively predicted by GABA+ levels in the DLPFC in the TA‐VFB group. In the CA‐VFB group, however, learning was positively predicted by GABA+ levels in the S1. Glx levels did not significantly predict the behavioral progress at any stage. FB and motor learning The CA‐VFB group, receiving online visual FB, outperformed the TA‐VFB group when augmented FB was available during training (the FB condition). In contrast, the TA‐VFB group, which only received visual FB after the end of each trial during training, outperformed the CA‐VFB group when weaned from augmented FB (FBW condition, during pre‐ and post‐tests). The latter condition served to assess learning progress. Consequently, whereas receiving concurrent/online FB boosted performance during training, it hampered learning progress due to overreliance on concurrent FB. On the contrary, although participants of the TA‐VFB group faced a greater challenge during training because no concurrent FB was made available to them, they ultimately learned the skill better than the CA‐VFB group. The TA‐VFB group was better prepared for FB‐deprived conditions because they likely developed a more advanced internal error evaluation and correction strategy based on somatosensory input (Schmidt & Wulf, 1997; Vander Linden et al., 1993; Winstein et al., 1996). These findings support the “guidance hypothesis of information FB,” which suggests a supportive and guiding role of FB on performance as long as it is present, but a detrimental role that becomes apparent under FBW conditions (Salmoni et al., 1984; Schmidt, 1991). Numerous studies have manipulated different properties of FB (such as timing and frequency) to assess their effect on motor learning (for reviews, see Newell, 1976 and Swinnen, 1996). Studies have also reported that concurrent, as compared with terminal, FB can impair learning (Ranganathan & Newell, 2009; Schmidt & Wulf, 1997; Swinnen et al., 1990). Concurrent FB makes performers increasingly dependent on the strong guidance provided by the FB, thus hampering internal error evaluation based on proprioceptive input. This dependence is reduced when terminal FB is provided (for a review, see Schmidt, 1991). This should be considered when designing training protocols to maximize the learning outcome. Regional specificity of MRS‐measured levels of GABA+ and Glx Our results demonstrated that concentrations of GABA+ and Glx varied across the different VOIs. For GABA+, we observed the highest levels in the M1 and S1, followed by the DLPFC and MT/V5. For Glx, the highest levels were measured in the DLPFC and MT/V5, followed by the S1 and then the M1.
This is consistent with converging evidence that GABA+ and Glx concentrations are not homogeneously distributed across brain regions (Grachev & Apkarian, 2000a, 2000b; Maes et al., 2021; Rodríguez‐Nieto et al., 2023). That GABA+ levels in the DLPFC are lower than in the S1 and M1 area is consistent with a previous study (Maes et al., 2021). Additionally, previous MRS studies have provided support for an anterior–posterior gradient in GABA+ levels, with greater GABA+ levels in the posterior regions (Chalavi et al., 2018; Hermans et al., 2018; Maes et al., 2018; Mikkelsen et al., 2018; Porges et al., 2017; Takei et al., 2016), even though inconsistent results have also been reported. For example, some studies reported no significant differences in GABA levels between the frontal and occipital cortex (Hermans et al., 2018; Marsman et al., 2017) or even higher GABA+ levels in the frontal as compared to the parietal cortex (Gao et al., 2013). Moreover, we investigated GABA+ levels in the MT/V5 (at the conjunction of the temporal and occipital cortex) for the first time and showed that these were lower than those in the S1/M1 area and DLPFC. With respect to Glx, we showed higher levels in the DLPFC as compared to the S1 and M1 area, consistent with previous research (Grachev & Apkarian, 2000a). Moreover, a study showed no significant difference in Glx levels between the frontal and the posterior midline voxels (Gao et al., 2013). Our results could not establish a significant difference in Glx levels between the DLPFC and MT/V5. Baseline levels of GABA+, not Glx, predict behavioral progress The brain–behavior analyses did not reveal any associations between Glx levels and learning. However, we did observe that initial and long‐term motor learning progress, as indexed by performance under FBW conditions, were positively predicted by GABA+ levels in the M1 and negatively predicted by GABA+ levels in the DLPFC in the TA‐VFB group using terminal visual FB during training. Conversely, motor learning progress was positively predicted by GABA+ levels in the S1 area in participants of the CA‐VFB group who received concurrent visual FB during training. These findings appear only partially consistent with the proposed hypotheses, as discussed next. Positive associations between baseline M1 GABA+ levels and performance have been demonstrated in previous studies for various types of sensorimotor tasks (Cassady et al., 2019; Chalavi et al., 2018; Stagg et al., 2011). Less evidence is available for positive associations between baseline M1 GABA+ levels and learning. Here, we observed that participants with higher M1 GABA+ levels were more successful in acquiring the bimanual skill during the initial short‐term learning stage as well as during the longer term. We tentatively suggest that the higher M1 GABA+ levels may be linked to better construction of distinct memory representations in the M1 for the different subtasks. However, while we expected this association to be present in both groups, it was only observed in the TA‐VFB group. While the TA‐VFB group experienced more difficulties improving motor performance during training as compared to the CA‐VFB group (using concurrent visual FB), they performed better than the CA‐VFB group when weaned from augmented FB during tests of learning (FBW conditions). This may have promoted the building of more advanced motor memory representations in M1 in the TA‐VFB group.
Positive associations between baseline GABA+ levels and learning have also been reported using other paradigms. Heba et al. measured perceptual improvements by comparing the tactile sensitivity of the index finger before and after repetitive somatosensory stimulation of the right hand (pre‐ and post‐test measurements). They showed that both the tactile sensitivity learning gains and the final learning outcomes were positively associated with baseline SM1 GABA+ levels (Heba et al., 2016). It has been proposed that the role of higher baseline GABA+ levels in better sensorimotor performance and enhanced retrieval of memorized information may be mediated by suppression of the interference induced by irrelevant stimuli (Li et al., 2022). Altogether, we speculate that in the TA‐VFB group, higher baseline M1 GABA+ levels promoted/facilitated the building of better memory representations in the motor cortex, possibly mediated by suppressing the interference induced by irrelevant stimuli (Heba et al., 2016; Li et al., 2022). Conversely, with respect to neurochemical dynamics, lowering GABA+ levels may promote plasticity through a release from inhibition and a facilitation of neural interactions, although this typically refers to experimentally induced modulation of GABA+ levels in the SM1 to facilitate learning (Stagg et al., 2011). Other studies looking into the dynamics of SM1 GABA+ changes during task learning have observed a decrease in the MRS‐assessed GABA+ levels during short‐term (Chalavi et al., 2018; Floyer‐Lea et al., 2006; Kolasinski et al., 2019; Maes et al., 2021; Nettekoven et al., 2022) and long‐term motor learning (Sampaio‐Baptista et al., 2015), which may be consistent with a release from inhibition to promote motor learning. Furthermore, a study in older participants showed that participants with higher baseline GABA+ levels were more likely to exhibit a greater decrease in GABA+ levels during motor sequence training, which was linked to a greater motor learning magnitude (King et al., 2020). From this perspective, higher resting‐state GABA+ levels in task‐related brain areas (as reported here) may provide a larger window for training‐induced GABA reduction, enabling a release from inhibition that induces plasticity and learning. The observed negative association between DLPFC GABA+ levels and motor learning progress was not consistent with our preliminary hypothesis. Interestingly, this was only observed in participants of the TA‐VFB group. While we anticipated higher DLPFC GABA+ levels to support the building of distinct memory representations for the sub‐movements in M1, these were associated with lower progress during short‐ and long‐term learning. Positive associations between DLPFC GABA+ levels and performance/learning have been shown for various types of tasks in previous studies, but negative associations have been reported less prominently. As part of the prefrontal cortex, the DLPFC plays a converging role between the inputs from the sensory processing areas and the outputs to the motor areas. It has also been implicated in numerous higher cognitive functions, such as task switching, planning, and attention control (Sakagami et al., 2006; Yoon et al., 2016). However, it is important to consider that the association between GABA+ levels and behavior might be contingent upon the particular function that is performed by the DLPFC in the different tasks under investigation.
Thus far, to the best of our knowledge, no study has investigated the association between neurometabolite levels in the DLPFC and motor learning. However, Scholl et al. (2017) investigated the relationship between baseline GABA+ levels in the dorsal anterior cingulate cortex (dACC) and reward‐guided learning. Results revealed that lower baseline GABA+ levels in the dACC were associated with better reward‐guided learning (Scholl et al., 2017). It is important to note that skill learning may be partially distinct from reward learning in terms of the underlying brain regions. Nevertheless, it is conceivable that lower baseline DLPFC GABA+ levels may be associated with more flexible exploration of the task space that is composed of different subtasks. In the present study, practice in the absence of concurrent FB may have created more optimal conditions for active exploration of the task space. As such, lower GABA+ levels may promote flexible task space exploration while acquiring different task variants (Li et al., 2022). Finally, we hypothesized an association between baseline GABA+ levels in sensory‐processing regions and motor learning as a function of the sensory source that was prominently available to the participants during task training. Specifically, we anticipated that GABA+ levels in the MT/V5 would be associated with the amount of learning in the CA‐VFB group because participants of this group had access to real‐time augmented VFB during task training (vision‐based practice, externally‐guided). Conversely, a positive association between GABA+ levels in S1 and learning was expected in the TA‐VFB group because the absence of online visual FB forced participants to process the somatosensory consequences associated with movement production (proprioception‐based practice, internally‐guided). Please note that somatosensory input was equally available in the CA‐VFB group, but its processing may have been suppressed as a result of overreliance on abundant online visual FB. Surprisingly, GABA+ levels in the S1 brain region were associated with learning gains in the CA‐VFB group, but not the TA‐VFB group. This result may be accounted for by the shift from performance in the presence of concurrent FB to FB removal, when somatosensory input becomes more critical. Thus, a viable explanation is that those participants with higher baseline S1 GABA+ levels were better able to cope with this shift to non‐visually‐supported performance, which requires extra processing of somatosensory information. Nevertheless, future research is required to confirm this hypothesis. Furthermore, it is clear that other brain areas can be considered in comparing internally‐guided versus externally‐guided movement performance conditions because the networks involved in these two types of control are clearly different (Debaere et al., 2003; Swinnen & Wenderoth, 2004). Previous work has also shown that learners who shift from visually‐supported to nonvisual performance conditions show temporary preservation of brain activity in MT/V5 even though the visual input is no longer available (Ronsse et al., 2011). This underscores the dominance of visual information processing, even in the absence of actual visual input, when weaning from visual input is required to shift to proprioception‐based performance (during the FBW condition). This was challenging, as can be inferred from the CA‐VFB group's lower performance levels under the FBW conditions.
One question that might arise here is why GABA+ levels from different brain regions predicted learning gains in the two experimental groups, that is, the S1 GABA+ levels in the CA‐VFB group and the M1 GABA+ levels in the TA‐VFB group. Considering that the S1‐M1 overlap rate was not significantly different between the groups and given that the overlap rate between the S1 and M1 voxels was less than 50% in both groups, we hypothesize that GABA+ levels in the non‐overlapping parts (i.e., the more frontal, motor‐related cortex for the M1 voxel and the more parietal, somatosensory processing‐related cortex for the S1 voxel) are possibly responsible for the differential associations observed between the learning gains and the GABA+ levels obtained from the M1 and S1 regions in each group. Taken together, whether higher or lower baseline GABA+ levels benefit performance or learning might depend on the function of the considered brain area in the execution of a specific task and the environmental training context. Moreover, in accounting for neurometabolite‐behavioral associations, GABA appears to be a more promising candidate than Glx, which did not reveal any associations with learning capability. Limitations We observed positive associations between higher baseline GABA+ levels and initial/long‐term learning gain (the S1 area in the CA‐VFB group and the M1 area in the TA‐VFB group) as well as a negative association between baseline GABA+ levels in the DLPFC and initial/long‐term learning gain in the TA‐VFB group, suggesting an important role of baseline GABA+ levels in motor learning. Despite the new information provided by this study, some limitations need to be considered. First, besides GABA, Glu has been reported to be associated with human learning. However, based on the results obtained at 3 T, we did not observe any significant correlation between Glx levels and motor learning. Given that Glx concentrations measured with a 3 T MR scanner contain a large glutamine contribution, disentangling the contribution of Glu to learning is challenging. Therefore, further studies investigating associations between neurometabolites and human learning are warranted. Second, establishing associations between GABA+ levels and motor learning is only a first step in exploring whether baseline neurometabolite levels predict learning. A better mechanistic understanding of the underlying processes is required to move from association toward causality.
CONCLUSION Levels of neurometabolites obtained during rest predict future progress with learning a motor skill across the short and longer term. The conditions under which motor tasks are trained partially determine which brain regions are relevant candidates for predicting learning. Specifically, GABA+ levels in the primary motor cortex (M1) showed a positive and GABA+ levels in the DLPFC showed a negative association with learning capacity under internally‐guided practice regimes in which proprioceptive information was prominently used. Under externally‐guided training regimes with real‐time augmented VFB provision, GABA+ levels in the primary somatosensory cortex (S1) were a dominant predictor of learning gains. These findings highlight the potential role of baseline GABA+ levels obtained from different task‐related brain areas in predicting initial and long‐term motor learning gains. As such, baseline GABA+ constitutes a potential biomarker for motor learning capacity in young adults.
Abstract Synaptic plasticity relies on the balance between excitation and inhibition in the brain. As the primary inhibitory and excitatory neurotransmitters, gamma‐aminobutyric acid (GABA) and glutamate (Glu) play critical roles in synaptic plasticity and learning. However, the role of these neurometabolites in motor learning is still unclear. Furthermore, it remains to be investigated which neurometabolite levels from the regions composing the sensorimotor network predict future learning outcome. Here, we studied the role of baseline neurometabolite levels in four task‐related brain areas during different stages of motor skill learning under two different feedback (FB) conditions. Fifty‐one healthy participants were trained on a bimanual motor task over 5 days while receiving either concurrent augmented visual FB (CA‐VFB group, N = 25) or terminal augmented visual FB (TA‐VFB group, N = 26) of their performance. Additionally, baseline GABA+ (GABA + macromolecules) and Glx (Glu + glutamine) levels were measured with MRS in the primary motor cortex (M1), primary somatosensory cortex (S1), dorsolateral prefrontal cortex (DLPFC), and medial temporal cortex (MT/V5). Behaviorally, our results revealed that the CA‐VFB group outperformed the TA‐VFB group during task performance in the presence of augmented VFB, while the TA‐VFB group outperformed the CA‐VFB group in the absence of augmented FB. Moreover, baseline M1 GABA+ levels positively predicted and DLPFC GABA+ levels negatively predicted both initial and long‐term motor learning progress in the TA‐VFB group. In contrast, baseline S1 GABA+ levels positively predicted initial and long‐term motor learning progress in the CA‐VFB group. Glx levels did not predict learning progress. Together, these findings suggest that baseline GABA+ levels predict motor learning capability, albeit depending on the FB training conditions afforded to the participants. Neurometabolites, such as gamma‐aminobutyric acid (GABA) and glutamate (Glu), play a critical role in synaptic plasticity and learning. In this study, we examined the role of baseline neurometabolite levels in four task‐related brain areas during different stages of motor skill learning under two different feedback (FB) conditions. Our findings suggest that baseline GABA+ constitutes a potential biomarker for motor learning capacity in young adults and that this depends on the FB training conditions afforded to the participants. Li, H., Chalavi, S., Rasooli, A., Rodríguez‐Nieto, G., Seer, C., Mikkelsen, M., Edden, R. A. E., Sunaert, S., Peeters, R., Mantini, D., & Swinnen, S. P. (2024). Baseline GABA+ levels in areas associated with sensorimotor control predict initial and long‐term motor learning progress. Human Brain Mapping, 45(1), e26537. 10.1002/hbm.26537
AUTHOR CONTRIBUTIONS Hong Li: Designed research, conducted research, analyzed data, wrote and revised manuscript. Sima Chalavi: Designed research, wrote and revised manuscript. Amirhossein Rasooli: Conducted research and revised manuscript. Geraldine Rodríguez Nieto: Revised manuscript. Caroline Seer: Revised manuscript. Mark Mikkelsen: Supported data analysis and revised manuscript. Richard A. E. Edden: Revised manuscript and supported data analysis. Dante Mantini: Revised manuscript. Stefan Sunaert: Secured operation of the MRI research equipment. Ron Peeters: Secured operation of the MRI data collection. Stephan P. Swinnen: Secured funding, designed research, wrote and revised manuscript. CONFLICT OF INTEREST STATEMENT The authors declare no competing financial interests. Supporting information
ACKNOWLEDGMENTS This work is supported by the Research Foundation Flanders (FWO) (G089818N and G039821N); the Excellence of Science grant (EOS, 30446199, MEMODYN); and the KU Leuven Research Fund (C16/15/070), awarded to SPS, DM, and coworkers. HL is supported by a doctoral fellowship from the China Scholarship Council (201906170063). SC is supported by a postdoctoral fellowship from FWO (K174216N). MM received salary support from NIH grant K99 EB028828. This project applies tools developed with support from NIH grants R01 EB016089, R01 EB023963, and P41 EB031771; RAEE also received salary support from these grants. The authors would like to thank René Clerckx for programming the tasks and for technical assistance. DATA AVAILABILITY STATEMENT The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
CC BY
no
2024-01-16 23:47:16
Hum Brain Mapp. 2023 Dec 22; 45(1):e26537
oa_package/4b/e2/PMC10789216.tar.gz
PMC10789218
38226136
Introduction Shoulder joint arthroplasty has been evolving for the last two to three decades. Anatomical total shoulder arthroplasty (TSA), resurfacing arthroplasty, and hemiarthroplasty dominated shoulder arthroplasty in the early decades. Reverse polarity total shoulder arthroplasty (RSA) has become more popular over the last decade [ 1 ]. By virtue of its design and biomechanics, a TSA requires a functional rotator cuff. One of the most common complications associated with TSA is humeral head subluxation, which is directly related to cuff tears [ 2 , 3 ]. Cuff tears are very common in the elderly population, with an incidence of more than 50% at the age of 80 [ 4 ]. Therefore, TSA becomes a less viable option in the elderly population, and complication rates increase as patients get older. Reverse polarity total shoulder arthroplasty was introduced as a replacement option for cuff-deficient shoulders. It moves the centre of rotation of the shoulder medially and inferiorly. This creates an increased lever arm for the deltoid muscle, allowing it to act as a prime shoulder abductor. Glenoid base plate positioning is important in RSA. Poor placement leads to glenoid base plate loosening, which is one of the most common complications of RSA [ 5 , 6 ]. Improper positioning of the central peg and screws contributes to early loosening. Furthermore, screws can penetrate the cortex and increase the risk of neurovascular injury. Good glenoid base plate placement and fixation require good bone stock and intact anatomical landmarks of the glenoid. In the presence of advanced osteoarthritis or cuff tear arthropathy (CTA), both the bone stock and the glenoid landmarks are distorted. Glenoids become retroverted with significant bone loss. Sometimes bone cysts significantly compromise the bone. In these situations, conventional RSA becomes a challenging operation, and the outcome is compromised due to improper glenoid base plate positioning and fixation. Guided personalized surgery (GPS)-navigated RSA helps achieve better glenoid base plate positioning and fixation in these conditions [ 7 , 8 ]. The process involves a three-dimensional (3D) CT of the shoulder and a preoperative planning application to create GPS navigation based on the CT images. During the operation, trackers are used to guide the position of the base plate and screws. There is evidence in the literature that suggests GPS-navigated RSA helps achieve a good position of the glenoid base plate and secure peg and screw fixation [ 7 , 8 ]. We conducted a retrospective case series study to analyse the demographic data, implants, and functional results of GPS-navigated RSA at our institution.
Materials and methods A retrospective descriptive case series study was carried out at the Bedfordshire Hospitals NHS Trust, Bedford, UK. The study was registered with the Clinical Quality, Audit, and Effectiveness Department of the Bedfordshire Hospitals NHS Trust. The main objectives were to assess functional improvement after GPS-navigated RSA, establish the most commonly used glenoid base plates and screws, and define the accuracy of glenoid base plate and screw placement. Our hospital is a low-volume district general hospital in England, where navigation was used only for challenging cases. In 2018, five years ago, our institute began using GPS-navigated RSA. All the patients who were planned for GPS-navigated RSA during the last five years were included. Patients who did not complete GPS-navigated RSA for any reason were excluded from the postoperative functional assessment. Preoperative modified anteroposterior (AP) and axillary lateral radiographs were analyzed for proximal migration and arthritic changes. Further, axial views of CT scans were used to measure the glenoid version and assess bone stock. Intraoperative glenoid base plate placement under navigation was checked against the preoperative plan to assess the accuracy of placement. Postoperative AP and axillary lateral views were analysed to assess the position of the peg and screws. The length of the screws used to fix the glenoid base plate was also studied. The Oxford Shoulder Score (OSS) was calculated from the patient-reported outcome measure form before the operation and at six months, one year, and then yearly at postoperative follow-up. Surgical technique The deltopectoral approach was used in all patients, who were positioned in the beach-chair position. A GPS screen was fixed on the opposite side of the table, free of obstructions, so that the trackers could be easily sensed. A static tracker was positioned on the coracoid, and the glenoid anatomy was acquired using a tracked pointer. After the completion of the glenoid mapping, glenoid reaming was done under navigation guidance with a tracker affixed to the reamer handle, giving real-time feedback on the angle and depth of reaming. The central screw and glenoid base plate were positioned using tracked instruments. All the base plates were fixed with four screws inserted under navigation for maximum purchase and optimum positioning. The rest of the procedure was performed as in a standard RSA. IBM SPSS software version 26 (IBM Corp., Armonk, NY) was used for statistical analysis. Descriptive statistics were mainly used, and a paired t-test was used to compare the OSS before and six months after the operation.
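As the analysis hinges on a single paired t-test, a minimal sketch may be useful. The study's statistics were computed in SPSS; the following Python snippet is only an illustrative analogue with hypothetical paired scores (the real patient data are not reproduced here).

```python
import numpy as np
from scipy import stats

# Hypothetical paired Oxford Shoulder Scores (0-48) for 14 patients,
# before surgery and at six months; values are illustrative only.
oss_pre = np.array([14, 20, 18, 10, 22, 15, 19, 17, 12, 21, 16, 23, 13, 18])
oss_6mo = np.array([36, 41, 39, 33, 44, 37, 40, 38, 35, 42, 39, 45, 34, 41])

t_stat, p_value = stats.ttest_rel(oss_pre, oss_6mo)
diff = oss_6mo - oss_pre
print(f"mean improvement = {diff.mean():.2f} +/- {diff.std(ddof=1):.2f}")
print(f"paired t = {t_stat:.3f}, p = {p_value:.5f}")

# Equivalent closed form: t = mean(diff) / (sd(diff) / sqrt(n)). The summary
# values reported in the Results below (21.64, 7.175, n = 14) give
# 21.64 / (7.175 / 14 ** 0.5) ~= 11.29, matching the reported |t| of 11.287.
```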
Results Fifteen patients were planned for GPS-navigated RSA at our institution over the last five years. There was one case of attempted navigated RSA that was converted to a standard RSA due to a coracoid fracture sustained while the coracoid tracker was being fixed. All the surgeries were performed by a single surgeon. Equinoxe Exactech (Exactech, Inc., Redditch, UK) implants were used in all the patients. Except for the complication analysis, the patient who did not complete GPS-navigated RSA was excluded from the study. Ten female patients and four male patients were included in the study. The average age of the patients was 71.3 years, with a range of 57 years to 88 years. Two patients had CTA with proximal humerus migration and complete loss of the subacromial space. The other 12 patients had primary osteoarthritis without radiological evidence of a cuff tear. A clinical decision was made to proceed with GPS-navigated RSA because of significant glenoid bone loss or retroversion. No single cut-off value for retroversion was used during decision-making; the senior author, who was the primary surgeon, made the decision considering all the factors. All the patients had retroverted glenoids, with retroversion ranging from two degrees to 35 degrees. The mean version was 13.6 degrees. Four patients had a retroversion of more than 20 degrees with severe posterior bone loss. One patient had severe central bone loss. Four patients had large bone cysts in the glenoid, compromising more than 30% of the bone stock. Six patients had eight-degree posteriorly augmented glenoid base plates, three patients had 10-degree superiorly augmented glenoid base plates, and one patient had an extended cage peg (Figure 1 ). The most commonly used glenosphere size was 38 mm, implanted in 10 of the 14 patients; the remaining four patients had 42-mm glenospheres. The glenoid version was accurately reproduced intraoperatively according to the preoperative plan in all patients. The range of screw lengths was 24 mm to 37 mm. The mean screw length was 28 mm. All patients had four screws to fix the glenoid base plate. The follow-up period was from six months to five years, depending on the date of the surgery. None of the patients dropped out of follow-up. The OSSs of the patients are depicted in Table 1 . A paired sample t-test was used to assess the improvement in OSS over the first six months compared to the preoperative OSS recorded on the day of surgery (Table 2 ). There was no glenoid loosening and there were no dislocations during the follow-up period. One patient developed a periprosthetic humerus fracture at the tip of the stem, which was managed with osteosynthesis. One patient developed a coracoid fracture one month postoperatively, which was managed non-operatively. One stress fracture of the acromion was reported, which was managed non-operatively as well.
Discussion Shoulder arthroplasty has evolved over the last four decades. Reverse shoulder arthroplasty is becoming more popular among shoulder surgeons worldwide. Indications have expanded over time. A definite indication is arthropathy of the shoulder in the absence of a functioning rotator cuff. Reverse shoulder arthroplasty is also preferred in inflammatory arthritis, non-reconstructable proximal humerus fractures in the elderly, primary osteoarthritis in the elderly population, and revision for failed hemiarthroplasty [ 9 - 13 ]. Reverse shoulder arthroplasty is preferred over TSA in the elderly population due to the high incidence of rotator cuff failure and proximal humeral migration. Reverse shoulder arthroplasty is not immune to complications. Glenoid base plate loosening is one of the most common complications [ 9 , 14 ]. It is crucial to optimize glenoid base plate positioning and fixation to improve the longevity of RSA [ 15 ]. Most of the time, careful exposure and adherence to the anatomical landmarks will lead to good glenoid base plate positioning. The usual technique is a guidewire inserted at the centre of the glenoid, perpendicular to the glenoid surface. Reaming is done with cannulated reamers, which pass over the wire. When the normal anatomy is significantly distorted, achieving correct glenoid base plate positioning becomes difficult. The situation is further complicated by perforation of the cortex by the screws, leading to poor fixation or neurovascular injury. In addition to the distorted anatomy, poor glenoid bone stock also compromises glenoid fixation. This can be due to large glenoid cysts or glenoid bone erosion. Navigated RSA is useful in these situations to optimise glenoid base plate positioning and fixation. A preoperative CT scan is done, including 3D reconstruction. This information is used to prepare a 3D model of the scapula and a preoperative plan, which are then used to guide implant positioning. During the surgery, with the guidance of trackers and pointers, implant positioning is established as planned by the software. In our institution, GPS-navigated RSA was introduced in 2018. We use the Equinoxe Exactech shoulder system. Patients who were not suitable for conventional RSA were selected for GPS-navigated RSA. This helped to improve our service, and we were able to perform the procedure on patients who were not suitable for a conventional RSA. Fifteen selected patients were planned for GPS-navigated RSA out of a total of 156 RSAs. Fourteen patients underwent GPS-navigated RSA, and one patient was converted to conventional RSA due to a coracoid fracture. GPS-navigated surgery was not offered to everyone, as it was not considered cost-effective for routine cases. All of the patients who underwent GPS-navigated RSA had distorted, retroverted glenoids. During the operation, 10 out of 14 patients required an augmented glenoid base plate. All of our patients had four screws, and the average length of the screws was 28 mm. In comparison to the cadaveric study on GPS-navigated RSA, our average screw length was shorter [ 8 ]. This could be due to distorted anatomy in our patient group compared to normal cadaveric shoulders. The longest follow-up for our patient population was five years. The OSS increased to 38.86±4.22 at six months, an improvement of 21.64±7.175 over the preoperative value of 17.21±5.90. This improvement was statistically significant, with a t-value of -11.287 and a p-value <0.05.
All of the cases showed gradual improvement in OSS during the follow-up, except for the patient who had a periprosthetic humerus fracture at four years postoperative. Eng et al. did a systematic review of GPS-navigated TSA vs. conventional TSA [ 7 ]. We did not do a comparison study, as our patients who underwent GPS-navigated RSA were more complex cases and hence not comparable to the non-GPS cohort. All of them had glenoid retroversion and distorted glenoid morphology, which would hinder or complicate conventional RSA. The systematic review by Eng et al. mostly describes the accuracy of the restoration of the glenoid version using navigation. Our patients did not have postoperative CT scans, and it was difficult to assess their version accurately based on radiographs. We accurately reproduced the version intraoperatively according to the preoperative plan for all 14 patients. Our outcome is comparable to the published literature for conventional RSA from the perspective of patient-reported outcome measures and the incidence of complications [ 9 , 10 ]. The most common complication in our study population was fractures. This is comparable to reports in the literature [ 9 , 10 ]. These included one postoperative and one intraoperative coracoid fracture, one acromion stress fracture, and a periprosthetic humeral shaft fracture following a fall. All the fractures were in females. A small and osteoporotic coracoid may be a contributing factor. As the coracoid tracker is essential for the operation, it is crucial to be careful when fixing it. We had to abandon GPS navigation for one patient due to a coracoid fracture. Acromion fractures are stress fractures due to increased activity of the deltoid after RSA. Gradual rehabilitation and muscle strengthening are important for these patients. There were no other significant complications in our group of patients. In particular, there were no infections or dislocations, which is contrary to the published incidence [ 9 , 10 ]. The smaller number of patients in our study may explain the differences in the complication profile. There are a number of limitations to this study. Because ours is a low-volume centre, the study population is small. Our study is a non-randomised descriptive study. A randomised study comparing the outcome of navigated RSA to conventional RSA is recommended to further prove the effectiveness of navigated RSA. Our patients did not have postoperative CT scans; hence, the postoperative glenoid base plate version was not measured. The maximum follow-up duration was limited to five years. Longer follow-up is needed to further establish the long-term outcome.
Conclusions Guided personalized surgery-navigated RSA shows good short-term results in patients with distorted glenoid anatomy in whom conventional RSA would be challenging. It gives a good functional outcome within six months and gradual improvement thereafter. While the long-term results still need to be investigated more thoroughly, our findings support the use of GPS-navigated RSA in selected patients with distorted glenoid anatomy. Further research, including a comprehensive and representative sample, is recommended.
Introduction Reverse polarity shoulder arthroplasty (RSA) is an evolving surgery, and its indications have expanded over time. Apart from cuff tear arthropathy (CTA), it is recommended for complex proximal humerus fractures in the elderly, inflammatory arthritis, primary osteoarthritis in the elderly, and revision for failed hemiarthroplasty. Glenoid base plate placement and fixation are important to prevent complications, especially glenoid base plate loosening, dislocation, and scapular notching, and to improve longevity. Guided personalized surgery (GPS)-navigated RSA was devised to optimize the glenoid base plate position and fixation. Methodology A retrospective study was carried out in a low-volume district general hospital in England. All the patients who underwent GPS-navigated RSA were included. Their preoperative glenoid version, bone stock, glenoid base plate, and glenoid screw lengths were analysed. Preoperative and post-surgery patient-reported outcomes were gathered using the Oxford Shoulder Score (OSS) at six months and annually thereafter. Results Fourteen patients have undergone GPS-navigated RSA in our institute since 2018. Ten patients were female. All of them had a retroverted glenoid with a mean value of 13.6 degrees. Ten out of 14 patients had an augmented glenoid base plate. This included six eight-degree posterior augmentations, three 10-degree superior augmentations, and one extended cage peg. The follow-up period was six months to five years, depending on the date of surgery, and none of the patients dropped out of follow-up. The OSS revealed statistically significant improvement from preoperative values to six months postoperative, an improvement of 21.64±7.175. It also showed progressive improvement over time during postoperative follow-up, and the three-year mean was 47. The commonest complication was fractures, which happened in four cases. There were no infections or dislocations. Discussion Guided personalized surgery-navigated RSA was performed on selected patients at our institution when they were not suitable for conventional RSA due to distorted glenoid anatomy. Glenoid base plate positioning and fixation are important to optimize the outcome of RSA. Guided personalized surgery navigation is helpful in achieving optimum glenoid base placement, especially when the normal glenoid anatomy is distorted. There were no dislocations, glenoid base plate loosening, or scapular notching in the study group. There were four reported fractures, which was comparable with the published literature.
CC BY
no
2024-01-16 23:47:16
Cureus.; 15(12):e50622
oa_package/5e/df/PMC10789218.tar.gz
PMC10789223
38226316
Introduction Organ transplantation is the only curative intervention available for patients suffering from end-stage organ failure. Although it is of great benefit to the recipient, the act of donation remains charitable and generally confers no direct benefit on the donor [ 1 ]. Hence, the regulation of organ donation and transplantation has been a hot topic from an ethical, medical, and legislative standpoint for the past few decades [ 2 ]. Organ trafficking, as well as the organ black market, has been criminalized and fought against [ 3 ]. Nowadays, each society is responsible for securing optimal medical care for those in need of organs while ensuring the ethical conduct of organ donation. Hence, organ donation awareness is now of utmost importance, particularly in societies that suffer a large gap between the supply of and demand for transplantable organs [ 4 , 5 ]. Although Saudi Arabia ranks high amongst the adjacent nations in terms of organ donation and transplantation, recognizable issues continue to affect public perception, awareness, and acceptance of organ donation, particularly when it comes to donation after death [ 6 - 9 ]. In a recent publication, misconceptions surrounding brain death were noted as a major barrier preventing consent in Arab and Saudi donors [ 10 ]. Researchers have tried several promotional strategies to improve the rates of organ donation acceptance in their communities. One common approach has been to appeal to the emotions of the viewer through stories of patients and donors [ 11 ]. On the other hand, enhancing knowledge about a specific topic is a cornerstone for many public health campaigns, such as campaigns about diabetes, smoking cessation, and depression or suicide [ 12 - 14 ]. Targeting the knowledge gaps in organ donation and transplantation campaigns may be difficult due to the inherent complexity of its concepts. Hence, it is not commonly adopted. Several studies have looked at the association between knowledge about donation and the willingness to donate or register as an organ donor. Repeatedly, willingness and acceptance of the idea of organ donation have been observed to be associated with better knowledge about the topic [ 9 , 15 - 18 ]. This raises the question of whether a health promotion strategy that focuses on enhancing knowledge about organ donation and transplantation in educational campaigns is effective. In this study, we aimed to assess the acceptability and utility of an organ donation campaign whose main cornerstone was tackling the knowledge gaps and widespread misconceptions about the topic. These gaps and misconceptions were identified through a pre-campaign literature search by the authors. Subsequently, a public organ donation and transplantation awareness campaign took place in a public shopping mall in Riyadh, Saudi Arabia, over two days to achieve this goal.
Materials and methods In January 2020, a two-day organ donation and transplantation awareness campaign was conducted in a large shopping mall in Riyadh, Saudi Arabia. The Institutional Review Board of King Abdullah International Medical Research Center, located in Riyadh, Kingdom of Saudi Arabia, approved the study (approval number: RYD-20-419812-109731). The campaign featured four sequential stations, each designed to provide information on a specific aspect of organ donation or address a common misconception. Public participation was voluntary, and data were collected through a self-administered paper questionnaire provided to participants at the beginning of their tour and collected upon completion of the activity (Figure 1 ). At the first station, participants watched a three-minute video containing interviews with individuals who shared their experiences with living organ donation and transplantation (Figure 1A ). This was followed by a factual presentation on the prevalence of organ failure in the community and the persistent gap between supply and demand. Next, participants moved to the second station, where a medical trainee explained the organ donation process for both deceased and living donors using low-cost models representing the body and various organs (Figure 1B ). Here, participants learned about the liver's regenerative ability after donation and the sufficiency of a single kidney to meet the body's needs. In the third station, participants were presented with common misconceptions about organ donation and transplantation, which were then debunked and clarified (Figure 1C ). These misconceptions included the reversibility of brain death, donor body mutilation, long-term harm to living donors, and Islamic religious views on organ donation and transplantation. Lastly, at the fourth station (Figure 1D ), participants had the opportunity to ask a transplant physician and/or surgeon any further questions or raise concerns. The total time to complete all stations ranged from 10 to 15 minutes. The self-administered questionnaire had three parts. The first part, completed before participating in the stations, consisted of nine questions that aimed to gather basic demographic information and assess participants' knowledge and perception of organ donation. The second part, completed after the tour, contained eight questions designed to identify the most significant motivators for participants to accept organ donation based on their campaign experience and any barriers that might affect their willingness to donate. The third part of the questionnaire included six questions aiming to capture participants' opinions on the utility and acceptability of the campaign format and whether it provided them with new information about organ donation and transplantation. The last two parts of the questionnaire used a five-point Likert scale for scoring. The questionnaire was administered in Arabic; a translated English version is available in the supplementary materials (See Appendices). Data were analyzed using IBM SPSS Statistics for Windows, Version 27.0 (Released 2020; IBM Corp., Armonk, New York, United States). After normality testing, continuous data were presented as means ± standard deviation, while categorical data were reported as absolute numbers and percentages.
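As a side note for readers who wish to reproduce this style of descriptive analysis outside SPSS, the workflow described above (normality testing, means ± standard deviation for continuous variables, counts and percentages for categorical ones, and five-point Likert summaries) can be sketched in a few lines of Python. The file and column names below are hypothetical placeholders, and the snippet is an illustrative outline rather than the authors' actual SPSS procedure.

import pandas as pd
from scipy import stats

# Hypothetical file and column names standing in for the real questionnaire data.
df = pd.read_csv("campaign_responses.csv")

# Continuous variable: normality check, then mean +/- standard deviation.
age = df["age"].dropna()
print("Shapiro-Wilk p =", stats.shapiro(age).pvalue)
print(f"Age: {age.mean():.1f} +/- {age.std():.1f} years")

# Categorical variable: absolute counts and percentage proportions.
summary = pd.DataFrame({
    "n": df["initial_perception"].value_counts(),
    "pct": (df["initial_perception"].value_counts(normalize=True) * 100).round(1),
})
print(summary)

# A five-point Likert item: share of respondents choosing each scale point.
likert = df["motivator_beneficence"].value_counts(normalize=True).sort_index()
print((likert * 100).round(1))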
Results A total of 201 individuals aged 15 years and above participated in the campaign, with 167 of them completing all activities and submitting a filled questionnaire (83% response rate). The mean age of the participants was 27.2 ± 8.2 years, ranging from 15 to 63 years. Males represented 61% of the participants. Additional sociodemographic variables are presented in Table 1 . Notably, even though 35% of the participants reported knowing a friend or a family member who suffered end-stage organ failure requiring transplantation, 55% of participants reported having no or minimal understanding of organ donation and transplantation processes. While most participants (70%) initially held a positive perspective on organ donation, 5-8% expressed a negative one (Table 1 ). The primary sources of information on the topic of organ donation and transplantation were reported to be the Internet (54%), healthcare personnel through personal interaction or campaigns (38%), and friends or family members (26%). After receiving information from the four stations, participants rated the most persuasive reasons for engaging in organ donation as well as the reasons they perceived to constitute significant barriers. The most common reasons for willingness to donate were a sense of population beneficence (89.1%), witnessing patient suffering (75.9%), the lack of alternative treatment options for patients with end-stage organ failure (75.3%), and the widespread prevalence of organ failure (72.8%) (Figure 3A ). Conversely, the most common barriers affecting donation willingness were fear of organ failure after living donation (66.2%), concern about body image distortion after deceased donation (42.5%), the ambiguity surrounding the concept of brain death (35%), and conflicting religious opinions and perspectives (31.2%) (Figure 3B ). Upon completing the campaign, the vast majority of participants (92.9%) reported learning new information about organ donation. All of those reported that the newly acquired knowledge further improved their perspective toward organ donation. Furthermore, almost all participants (93.5%) felt that the campaign answered all their questions about organ donation, which encouraged 90.9% of them to support donation efforts by deciding to register as organ donors in the national registry. Interestingly, five of the eight participants who initially held a negative perception of organ donation reported considering registering as an organ donor by the end of the campaign.
Discussion The strategy of publicly addressing factual knowledge and correcting misconceptions to encourage organ donation is uncommon, particularly in the Middle East, where the complex interplay of cultural and religious factors surrounding the topic can complicate the conversation. Although organ donation is endorsed as a religiously charitable act by the major religious bodies in the region, misconceptions regarding Islamic views continue to affect the public’s willingness to donate [ 19 ]. Several studies examining attitudes and perceptions towards organ donation have identified a lack of sufficient knowledge on the subject as a barrier and have found a positive correlation between the level of knowledge about organ donation and willingness to donate [ 20 - 23 ]. Moreover, studies on related subjects indicate the potential advantages of integrating knowledge-enhancing strategies into health education campaigns focused on organ donation. The few available studies employing public knowledge-promoting strategies in organ donation have demonstrated positive outcomes when implemented. For instance, a Greek study employed an interactive online questionnaire to assess perceptions and attitudes toward cornea donation while educating participants on the process and value of cornea donation. The study revealed that improved knowledge significantly influenced a favorable change in attitudes toward cornea donation and increased willingness to become a cornea donor among the Greek population [ 24 ]. Another study, conducted in London, Ontario, Canada, involved a pilot educational campaign to inform attendees of junior hockey league games about deceased organ donation. During the campaign, a modest increase in donor registrations was observed [ 25 ]. To our knowledge, there are no published data examining the utility of similar educational campaigns in the field of organ transplantation in our region. This study primarily highlights the acceptability of the approach among the population when executed appropriately. Despite the campaign's reliance on factual and scientific narratives, accompanied by straightforward models to aid explanation and visualization for non-medical individuals, participants' responses were predominantly positive. It is important to note that this outcome was achieved within a culture that tends to be reserved and unaccustomed to such an approach, especially concerning a sensitive topic like organ donation. The questionnaire asked the participants to declare their opinions before and after participation in the campaign, revealing an improvement in perceptions. Eight participants engaged in the campaign with initial negative perceptions about organ transplantation. Notably, five of them changed their views by the end of the campaign, to the extent that they reported their intention to register as organ donors. This, we believe, reflects the potential impact of the strategies followed in the educational campaign. Although the results are encouraging, our study remains limited by the short timeframe, the number of participants, and the fact that it took place in a single location in Riyadh. As the questionnaire was researcher-developed, all the data provided are subjective; for example, there is no objective count of how many participants actually registered as organ donors. Additionally, the voluntary nature of participation may have selectively attracted those who already had a positive attitude toward, or knowledge about, the topic.
Conclusions Enhancing the public’s knowledge about organ donation and transplantation through focused education combined with clear messaging improves public perception and represents a successful strategy to promote awareness of organ donation and transplantation. However, the study is limited by its short timeframe, single location, and subjective data. Future research should explore the impact of such campaigns on donor registrations and evaluate their effectiveness in different cultural contexts.
Introduction Organ transplantation is a critical intervention for patients with end-stage organ failure, but misconceptions and knowledge gaps often hinder organ donation. This study evaluates the acceptability and effectiveness of an organ donation campaign focused on addressing knowledge gaps and misconceptions in Riyadh, Saudi Arabia. Methods A two-day awareness campaign was conducted in a shopping mall, featuring four stations providing information on various aspects of organ donation. Participants completed a self-administered, researcher-developed questionnaire before and after the tour. Results Of the 201 participants, 167 completed the questionnaire (83% response rate). The majority (92.9%) reported learning new information and indicated that the knowledge improved their perspective on organ donation. A high percentage (93.5%) felt the campaign answered their questions, with 90.9% deciding to register as organ donors. Conclusion A knowledge-enhancing campaign can effectively improve public perception and promote awareness of organ donation and transplantation. However, the study is limited by its short timeframe, single location, and subjective data. Future research should explore the impact of such campaigns on donor registrations and evaluate their effectiveness in different cultural contexts.
Appendices
CC BY
no
2024-01-16 23:47:16
Cureus.; 16(1):e52303
oa_package/2b/8f/PMC10789223.tar.gz
PMC10789228
37291855
In Canadian law, children are the only group of citizens who can be legally subjected to corporal punishment (CP). The United Nations Convention on the Rights of the Child insists on the necessity of protecting the child from all forms of physical and mental violence while in the care of the parents or legal guardians, including CP. Numerous organizations have condemned the use of CP, and over 600 organizations have endorsed a Joint Statement on Physical Punishment of Children and Youth. 1 CP has not been demonstrated to yield positive outcomes and is associated with an increased risk of depression, anxiety, oppositional behaviors, alcohol and drug use, lower school engagement, slower cognitive development, peer isolation and suicide attempts. 2 , 3 To this day, one of the most contentious issues regarding CP is the question of whether there is in fact a causal relationship between CP and problematic behaviors in children. A meta-analysis conducted in 2002 found an association between parental spanking and negative outcomes. 4 However, many criticized the methodology of the studies used for this meta-analysis. First, it was argued that since most studies were cross-sectional or retrospective, a causal relationship could not be inferred. Critics mentioned the need for more longitudinal prospective studies to avoid overreliance on retrospective recall. Second, confounding factors and selection bias represent a fundamental problem when analysing nonrandomized studies in meta-analyses. Ethical concerns have made it impossible to evaluate the impact of CP using an experimental randomized design. However, even among the experimental studies conducted in the 1980s and 1990s, none reported CP to be more effective than other parenting strategies. 5 Third, it was suggested that better replications, robustness checks and falsification tests would give more statistical power to the studies. Next, it was suggested that CP should be more clearly defined and distinguished from physical abuse. Finally, it was mentioned that the negative outcomes of CP could not be generalized to all contexts, such as in combination with other punitive strategies or in other settings. Some scholars have reanalysed the results from the 2002 meta-analysis and have suggested that the effect size for the impact of CP on negative child outcomes is significantly smaller than the one for harsh physical punishment, although both are significant and positive. 6 Following these critiques, subsequent studies aimed not only to determine if there was a mere association between CP and pediatric behavioral problems, but to identify if there was a causal link between the two. Measuring CP before the onset of adverse outcomes was therefore essential in establishing causality. In order to confirm this causal relationship without experimental studies, CP must be correlated with poor outcomes, parental use of CP must precede poor outcomes, and the relationship should not be accounted for by other confounding factors. An effort was made to increase the number of longitudinal prospective studies, and specific designs were used to increase the strength of the causal relationship between CP and behavioral problems. New methodological approaches and rating scales assessing parenting behaviors have been designed over the years in an attempt to address some of the limitations of previous measures in recording the incidence of CP. A more specific definition of spanking was also used to clearly distinguish it from physical abuse.
When controlling for the direction of the associations between spanking and negative outcomes, CP was associated with lower long-term obedience, more aggressive behavior, more mental health issues, lower cognitive performance, increased risk of abuse and a poorer relationship between the parent and their child. 5 A 2016 meta-analysis reviewed 50 years of research on CP, finding that CP consistently predicted deleterious outcomes for children. 6 This meta-analysis only selected studies that used a more restrictive definition of “spanking” and more sophisticated analytical techniques, such as longitudinal designs and rigorous control of variables. It found that CP was associated with increased aggressive and antisocial behavior even when studies relying on potentially abusive parenting methods were removed. Also, while 70% of the studies were cross-sectional or retrospective, their effect sizes did not differ from those of the longitudinal studies. More recently, a 2021 meta-analysis evaluating 69 prospective longitudinal studies confirmed previous findings that CP was a predictor of increased behavioral problems, further supporting evidence of a causal relationship. 7 There is a debate between legislation specifying limitations on CP and legislation imposing a complete prohibition. In Canada, the Supreme Court narrowed the application of Section 43 of the Criminal Code to “the use of minor force that is reasonable under the circumstances” and offered guidelines regarding the use of CP on children. CP should only be used by parents or caregivers, be transitory and trifling in nature, and be administered only to children between the ages of 2 and 12 years. Furthermore, CP should not be given in retaliation for something a child did, cannot result in harm, and cannot be used on a child who is incapable of learning from the situation. If physical punishment is given, the force must be minor, and objects, such as belts or rulers, cannot be used. A recent study demonstrated that most cases of substantiated physical abuse in Canada would still fall within the guidelines provided by the Court. 8 Most cases of maltreatment involve the parents, with most children being between 2 and 12 years old. These cases also usually do not result in physical injury and do not involve the use of an object. Sweden was the first country to ban CP in 1979. Since then, 62 states have banned all forms of CP on children. Countries prohibiting the use of CP have lower rates and faster reductions in the use of CP, as well as a shift in parental attitudes towards CP. 9 In several jurisdictions, the prohibition of CP serves an educational rather than a punitive role, as the goal is to enhance awareness and offer assistance in preventing CP. According to many, any legal prohibition should not be punitive, but geared towards providing additional resources for parents and families in need, especially when CP is part of a traditional cultural disciplinary method. Being culturally sensitive and bridging cross-cultural differences are essential in creating legislation that takes into consideration the complex and specific needs of the community without bias, while keeping children's best interests in mind. Beyond the debate regarding the harm caused by CP, we need to move the discussion from harm to the ethical reasons for opposing CP. A false dichotomy seems to exist between CP and abuse that legitimizes physical aggression against youth.
In fact, there is some irony in disciplinary attempts that try to reduce externalized behavior, such as aggression, by using spanking. In sum, there is no medical or scientific reason to support CP as a child-rearing tool. In Canada, children are the only group of citizens who can still be legally subjected to CP under Section 43 of the Criminal Code (C-46). Therefore, we see no reason why this section of the Code should remain.
CC BY
no
2024-01-16 23:47:16
Can J Psychiatry. 2024 Feb 8; 69(2):77-78
oa_package/64/87/PMC10789228.tar.gz
PMC10789231
37563976
Introduction Mental health disorders and substance use are public health concerns in military and veteran populations worldwide. In 2002, the Canadian Community Health Survey Cycle 1.2, Canadian Forces Supplement (CCHS-CFS) was conducted and provided the first nationally representative mental health snapshot of the Canadian Armed Forces (CAF). 1 , 2 Data indicated that 14.9% of CAF personnel met the criteria for a past 12-month mental disorder and 34.4% reported heavy alcohol use. 1 In 2013, the Canadian Forces Mental Health Survey provided an update regarding the mental health of the CAF and found that the prevalence of past 12-month mental disorders and alcohol use disorder was 16.5% and 4.5%, respectively. 3 , 4 The prevalence of self-reported past 12-month mental disorders among Canadian veterans ranges from 6.2% to 9.2%. 5 Importantly, cannabis use was not assessed in these previous Canadian studies. Cannabis use among veterans remains an understudied public health priority requiring more research to better clarify risks and benefits in this population. 6 A recent review of the literature on cannabis use among veterans indicated that although causal relationships cannot be determined with the current evidence, potential harms related to cannabis may include an increased likelihood of using other substances, mental disorders, and self-harm or suicidality. 7 Currently, there is limited knowledge specifically on cannabis use among veterans in Canada. Data from a nationally representative United States (US) sample from 2015 to 2017 indicate that past 12-month cannabis use ranged from 6% to 34% among veterans depending on age group and sex, and was lower than past 12-month alcohol use (64% to 91%). 8 Data from another nationally representative US veteran sample indicated that the lifetime and past 12-month prevalence of cannabis use among US veterans was 32.5% and 7.3%, respectively. 9 A recent review on cannabis use among veterans found that 93% of the available research involves US veterans. 7 The absence of similar Canadian data may be inhibiting effective Canadian health-care provision. Notably, Veterans Affairs Canada (VAC) reimbursements for medically authorized cannabis costs for 2019–2020 were $85 million covering 10 million grams of cannabis, an amount that has increased substantially since 2011. 10 Preliminary Canadian veteran data indicate that reimbursements from VAC for cannabis use for medical purposes are related to mental and physical health conditions, chronic pain, poor functioning, distress, suicidal ideation, problems with finances, and problems with family. 11 Importantly, US Department of Veterans Affairs health-care providers do not recommend the use of cannabis, nor will Veterans Affairs pay for medical cannabis prescriptions from any source, 12 which further highlights the need for more Canadian research. Understanding how trauma may be related to cannabis use among veterans is another important knowledge gap. From a general population perspective, we know that child maltreatment is related to poor mental health and substance use. 13 – 17 Military personnel can be exposed to deployment-related traumatic events (DRTEs) during service, which may have an impact on their mental health and substance use. 1 , 18 More specifically, exposure to combat and/or peacekeeping on deployment has been associated with mental disorders among both serving men and women. 2
Nearly half (47%) of CAF Regular Force personnel had a history of child abuse, compared to 33% of the general population, and a child abuse history and DRTEs were both individually and cumulatively associated with increased odds of suicidal behaviours 19 and mental disorders. 20 Importantly, child abuse history has also been associated with alcohol use disorder among serving CAF personnel. 4 From a theoretical perspective, cannabis might be used to self-medicate as a means of coping with child maltreatment histories and DRTEs. Further examination of these relationships may inform prevention efforts and provide clinical insights to increase knowledge about, and reduce, the potentially harmful use of cannabis among veterans. It should be noted, however, that veterans may use cannabis for reasons other than coping with child maltreatment or DRTEs. More work is also needed to better understand possible sex differences related to cannabis use among veterans. More specifically, knowledge gaps related to sex and cannabis use are problematic because substance use may not be the same for male and female veterans. 8 Older US data suggest that female veterans may have more substance misuse compared to male veterans, 21 but newer data specific to cannabis indicate that female veterans are less likely to use cannabis than male veterans. 9 , 22 As well, recent data among female US veterans indicate that 11% regularly used cannabis, and cannabis use was related to alcohol use, posttraumatic stress disorder (PTSD) symptoms, childhood trauma, and sexual trauma. 23 Mental disorders and chronic pain may be important to consider when examining trauma and cannabis use. Past 12-month cannabis use was associated with increased odds of alcohol use disorder, opioid use disorder, drug use disorder, tobacco use disorder, any mood disorder, and any anxiety disorder among a representative sample of US veterans. 9 , 24 – 26 Similarly, other research using a representative US veteran sample found a statistically significant relationship between cannabis use and mental disorders among veterans with subthreshold/full PTSD. 25 Medical use of cannabis is also common among military and veteran populations for the management of PTSD symptoms and chronic pain. 27 More frequent cannabis use was found to be more likely among US veterans with chronic pain, and cannabis use disorder was more likely among those reporting recent pain. 28 The current study objectives were as follows. First, to compute the prevalence of lifetime and past 12-month cannabis use. Second, to examine whether veterans with (a) a history of child maltreatment and (b) DRTEs are more likely to use cannabis in the past 12 months after adjusting for sociodemographic variables, military variables, mental disorders, and chronic pain conditions, and whether these relationships differ by sex. Third, to determine whether a cumulative or interaction effect of child maltreatment history and DRTEs exists in relation to past 12-month cannabis use.
Method Data and Sample Statistics Canada conducted the CCHS-CFS in 2002, which included a representative sample of 5,155 active duty Regular Force CAF personnel. 29 In 2018, the Canadian Armed Forces Members and Veterans Mental Health Follow-up Survey (CAFVMHS) data were collected as a follow-up to the 2002 CCHS-CFS. Statistics Canada reinterviewed 68.7% ( n = 2,941, including n = 949 actively serving Regular Force CAF personnel and n = 1,992 veterans) of those eligible. 30 The current study included only veterans because cannabis use was assessed only among veterans. More details related to the CAFVMHS have been published elsewhere. 30 , 31 Cannabis Use. Respondents were asked if they had ever tried marijuana or hashish and if they had used marijuana or hashish in the past 12 months. Respondents who indicated using marijuana or hashish only 1 time were coded into the ‘No’ group for both lifetime and past 12-month use, since one-time users would differ from those using marijuana or hashish more often. The frequency of past 12-month cannabis use was also assessed but was only examined descriptively due to limited statistical power for this variable. Child Maltreatment. Child maltreatment occurring before 16 years of age included physical abuse, sexual abuse, emotional abuse, neglect, and exposure to intimate partner violence (IPV). Physical abuse was assessed with 3 items and coded as present if the respondent reported being: (a) slapped on the face, head, or ears, or hit or spanked with something hard (3 or more times); (b) pushed, grabbed, or shoved, or having something thrown at the respondent to hurt them (3 or more times); and/or (c) kicked, bitten, punched, choked, burned, or physically attacked (1 time or more). 32 Sexual abuse was measured using 2 items and coded as present if the respondent reported: (a) attempted or forced unwanted sexual activity by being threatened, held down, or hurt in some way (1 time or more) and/or (b) unwanted sexual touching, meaning sexual touching or grabbing, kissing, or fondling (1 time or more). Emotional abuse was assessed using 1 item and coded as present if the respondent reported that a parent or other adult in the home said mean or hurtful things that made the respondent upset or feel really bad about themselves (6 or more times). 32 Two items were used to assess neglect: (a) having to go without things the respondent needed, like food, clothes, shoes, or school supplies (1 time or more) and/or (b) having been left alone or unsupervised before 10 years of age (1 time or more). One item was used to assess exposure to IPV and was coded as present if the respondent ever saw or heard parents, step-parents, or guardians hitting each other or another adult in the home (3 or more times). 32 A child maltreatment variable (yes or no) was computed to measure experiencing any of the 5 child maltreatment types assessed. Deployment-Related Traumatic Events (DRTEs). Exposure to DRTEs during a CAF deployment was assessed using 10 dichotomous items (yes or no) from the deployment experiences scale. 31 If a respondent had never been deployed, they were coded as ‘No’ on all of the items. Two items were combined to assess military sexual trauma (i.e., ever sexually assaulted while on a CAF deployment and/or ever experienced any unwanted sexual touching while on a CAF deployment) and 8 items assessed exposure to other types of DRTEs.
A dichotomous variable indicating any DRTE (yes or no) was computed based on whether the respondent reported exposure to 1 or more of the 10 DRTE items assessed. In addition, a 3-level deployment and DRTE variable was also computed to distinguish no deployment, deployment without DRTEs, and deployment with DRTEs. Mental Disorders. The World Health Organization version of the Composite International Diagnostic Interview (CIDI), based on DSM-IV diagnostic criteria, was used to assess common mental disorders. 33 – 36 Past 12-month disorders included generalized anxiety disorder, panic disorder, social phobia, major depressive episode, and alcohol use disorder. An algorithm using several variables was computed to create a past 12-month PTSD diagnosis, since a past-year diagnosis was not directly assessed. The past 12-month PTSD diagnosis was based on 3 criteria: (a) the presence of a CIDI-based PTSD diagnosis at the 16-year follow-up; (b) responding ‘yes’ to a single question that assessed whether the individual had PTSD-related reactions in the past 12 months; and (c) at least 3 of the 7 PTSD symptoms that were assessed in a past-year time frame. Chronic Pain Conditions. Any chronic pain condition was assessed based on whether the respondent self-reported having a physician or health-care provider diagnose them with any of the following conditions: arthritis, back problems, migraine headaches, and/or any gastrointestinal conditions (i.e., irritable bowel syndrome, inflammatory bowel disease, Crohn's disease, ulcerative colitis, or intestinal or stomach ulcers). Sociodemographic and Military Covariates. Sociodemographic and military covariates included sex (male, female), age (continuous), race/ethnicity (White, non-White), total past-year household income (<$50,000, $50,000 to $99,999, $100,000 to $149,999, $150,000 or more), highest level of education (less than high school, high school diploma or equivalent, some postsecondary [less than a bachelor's degree], bachelor's degree or higher), and last military environment (air, land, and sea). Statistical Analyses Data were weighted in all analyses to ensure the estimates were representative of the original 2002 CAF study sample. Bootstrapping was the variance estimation technique employed to account for the complex survey design. First, descriptive statistics were computed to examine the prevalence of cannabis use and the distribution of sociodemographic and military covariates, child maltreatment history, and DRTEs by past 12-month cannabis use. Second, logistic regression models were computed to determine if child maltreatment histories and DRTEs were associated with an increased likelihood of past 12-month cannabis use after adjusting, first, for sociodemographic and military variables (adjusted odds ratio-1; AOR-1) and, additionally, for mental disorders and chronic pain conditions (AOR-2). Sex differences in these relationships were also examined with interaction effects. Third, logistic regression models were computed to determine the interactive and cumulative effects of a child maltreatment history and DRTEs on past 12-month cannabis use. STATA software was used to conduct the statistical analyses.
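To make the modelling strategy concrete, the sketch below outlines a survey-weighted logistic regression with bootstrap variance estimation in Python. The authors used STATA with Statistics Canada's survey and replicate weights, so everything here (file and variable names, the reduced covariate set, the number of replicates, and the statsmodels implementation) is an assumed, simplified stand-in for that workflow rather than a reproduction of it.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical file: one row per veteran; 'w0' is the main survey weight
# and 'w1'..'w500' are bootstrap replicate weights.
df = pd.read_csv("cafvmhs_veterans.csv")

y = df["cannabis_12m"]
X = sm.add_constant(df[["any_child_maltreatment", "any_drte", "age", "female"]])

# Point estimates under the main survey weight.
beta = sm.GLM(y, X, family=sm.families.Binomial(),
              freq_weights=df["w0"]).fit().params

# Bootstrap variance: refit under each replicate weight and take the
# spread of the coefficients across replicates.
B = 500
reps = np.empty((B, len(beta)))
for b in range(B):
    reps[b] = sm.GLM(y, X, family=sm.families.Binomial(),
                     freq_weights=df[f"w{b + 1}"]).fit().params
se = reps.std(axis=0)

# Adjusted odds ratios with 95% confidence intervals.
print(pd.DataFrame({"AOR": np.exp(beta),
                    "CI_low": np.exp(beta - 1.96 * se),
                    "CI_high": np.exp(beta + 1.96 * se)}, index=beta.index))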
Results Table 1 provides lifetime and past 12-month cannabis use prevalence, including sex differences. The prevalence of lifetime and past 12-month cannabis use in the overall sample was 49.4% and 16.7%, respectively. Sex differences were noted for lifetime cannabis use, with females being less likely to report use compared to males (odds ratio [OR] = 0.71; 95% confidence interval [CI], 0.59 to 0.86). However, no statistically significant sex differences were found for past 12-month cannabis use. Among those who used cannabis in the past 12 months, 43.6% reported using it as often as every day. Table 2 provides the sociodemographic characteristics, military variables, and child maltreatment and DRTE histories stratified by past 12-month cannabis use. Older age, household income of $150,000 or more (compared to <$50,000), and having a bachelor's degree or higher (compared to less than a high school diploma) were all associated with a decreased likelihood of past 12-month cannabis use. Being separated/divorced/widowed (compared to married/common-law), experiencing any child maltreatment, and experiencing any DRTEs were associated with an increased likelihood of past 12-month cannabis use. Table 3 provides findings on the relationships of child maltreatment types and DRTEs with past 12-month cannabis use. Physical abuse, sexual abuse, and neglect were associated with increased odds of past 12-month cannabis use after adjusting for sociodemographic and military variables (AOR-1 ranging from 1.53 to 2.41). Emotional abuse and exposure to IPV were not statistically significant, which could reflect Type II error due to underpowered models. When further adjusting for mental disorders and chronic pain conditions, physical abuse and sexual abuse remained statistically significant (AOR-2 = 1.64 and 2.31, respectively). No statistically significant interaction terms were found for sex and child maltreatment. All DRTEs were associated with past 12-month cannabis use in the models adjusting for sociodemographic and military variables, except for ever receiving incoming artillery/rocket fire and ever having had difficulty distinguishing between combatants and noncombatants, possibly due to underpowered models (AOR-1 ranging from 1.42 to 2.45). When further adjusting for mental disorders and chronic pain conditions, the remaining DRTEs were no longer statistically significant. No statistically significant sex differences were found in these relationships when examining interaction effects. Table 4 provides the findings for the independent and interactive effects of child maltreatment and DRTEs on past 12-month cannabis use. In individual models adjusting for sociodemographic and military variables, any child maltreatment (AOR-1 = 1.93; 95% CI, 1.37 to 2.72) and any DRTE (AOR-1 = 1.58; 95% CI, 1.11 to 2.23) were both individually associated with past 12-month cannabis use. When child maltreatment and DRTEs were both entered into the same models with the same covariates, both child maltreatment (AOR-1 = 1.86; 95% CI, 1.32 to 2.63) and DRTEs (AOR-1 = 1.50; 95% CI, 1.06 to 2.13) were independently associated with past 12-month cannabis use. When further adjusting for mental disorders and any chronic pain condition, DRTEs became nonsignificant both in the independent models and when entered with any child maltreatment. However, any child maltreatment history remained statistically significant (AOR-2 = 1.88; 95% CI, 1.29 to 2.75) independent of DRTEs, mental disorders, and chronic pain.
The interaction effect of any child maltreatment and any DRTEs was nonsignificant in all models. Table 5 provides the findings for the cumulative effects of child maltreatment and DRTEs on past 12-month cannabis use. The 4 groups in the models included: (a) those with no child maltreatment history or DRTEs, (b) those with a child maltreatment history but no DRTEs, (c) those with DRTEs but no child maltreatment history, and (d) those with both a child maltreatment history and DRTEs. In models adjusting for sociodemographic and military variables, child maltreatment with (AOR-1 = 2.89; 95% CI, 1.60 to 5.21) and without DRTEs (AOR-1 = 1.97; 95% CI, 1.03 to 3.79) was associated with past 12-month cannabis use. The association with past 12-month cannabis use was greater when experiencing both a child maltreatment history and DRTEs compared to DRTEs alone, but was not statistically significantly different from child maltreatment alone. When further adjusting for mental disorders and any chronic pain condition, only the group with both child maltreatment and DRTEs remained associated with an increased likelihood of past 12-month cannabis use (AOR-2 = 2.20; 95% CI, 1.17 to 4.11).
Discussion The current study found the prevalence of lifetime cannabis use among veterans to be lower among females compared to males, consistent with previous research from the US. 9 , 22 However, differences were not noted in past 12-month cannabis use between male and female Canadian veterans. Any work in this area should therefore include both male and female veterans. The most robust association with past 12-month cannabis use was the experience of childhood physical abuse and sexual abuse, independent of sociodemographic characteristics, military covariates, mental disorders, chronic pain conditions, and DRTEs. Any child maltreatment also remained independently associated with past 12-month cannabis use. The effect sizes of the relationship between child maltreatment and cannabis use in this veteran sample were similar to those found in general population samples, 37 – 39 although direct comparisons cannot be made due to differences in the data. Importantly, the findings from the current study indicate how child maltreatment can continue to have impacts across the lifespan, and they extend existing evidence from other studies indicating that child maltreatment is associated with an increased likelihood of suicide-related behaviour and mental disorders in military samples. 19 , 20 Accordingly, prevention efforts aimed at reducing child maltreatment are of utmost importance, alongside evidence-based interventions that may reduce the onset of cannabis use. Persons working with military and veteran populations should understand the link between child maltreatment and cannabis use. Importantly, it should be noted that even though robust relationships were found between trauma and cannabis use, a large proportion of veterans using cannabis did not experience child maltreatment or DRTEs. This highlights the need to understand other reasons for cannabis use in this population. Experiencing trauma while in the military is also an important factor in understanding cannabis use. Notably, similar proportions of past 12-month cannabis use were found for those who never deployed and those who deployed without DRTEs, with a higher proportion for those who deployed with DRTEs, although this OR did not quite reach statistical significance. Furthermore, almost all DRTEs were associated with past 12-month cannabis use when adjusting for sociodemographic and military covariates. However, when accounting for mental disorders and any chronic pain condition, these findings were attenuated and became nonsignificant. This may indicate that the variance in the relationship between DRTEs and past 12-month cannabis use is accounted for by mental disorders and chronic pain conditions. The present results align with a self-medication hypothesis: vulnerability to distress may arise from early or later traumas and manifest in diagnosable mental health disorders, which are subsequently self-medicated with cannabis (or other drugs). 40 Importantly, this study was not able to assess whether veterans were using cannabis specifically as a means of coping with mental health or physical health problems such as pain. More research in the area is warranted. When examining cumulative effects, child maltreatment alone without DRTEs was significantly associated with increased odds of cannabis use, whereas DRTEs without child maltreatment were not.
However, cumulative effects were found, indicating that having experienced both child maltreatment and DRTEs was linked with a greater likelihood of past 12-month cannabis use compared to DRTEs without child maltreatment, but not statistically significantly different from child maltreatment alone. Clinically, if treatment is needed for cannabis use, then child maltreatment, DRTEs, mental disorders, and pain conditions may all need to be understood to inform effective treatment strategies that address this complex interplay, either sequentially or concurrently. 41 In any case, the recent use of cannabis as a PTSD treatment may create complications with respect to assessment and treatment. 42 Limitations of the current research should be noted. First, child maltreatment was retrospectively assessed in adulthood, which may result in recall bias. However, there is evidence showing that retrospective recall in survey data is a valid and reliable way to assess trauma in childhood. 43 – 45 Second, because the data were cross-sectional, inferences regarding causation cannot be made; however, assessing past 12-month cannabis use helps to establish the temporality of exposure to outcome. Third, the current study assessed the use of marijuana and hashish in early 2018, several months before cannabis became legal and more widely accessible in Canada. Thus, the current findings may represent conservative estimates of cannabis use, since disclosure may have been less likely before the legislative change. Additionally, this study was not able to distinguish between medical and recreational cannabis use or to assess those who would meet the criteria for cannabis use disorder. Entirely disentangling medical and recreational cannabis use would be difficult because both could be related to coping. The frequency of cannabis use could not be examined in models due to limited statistical power. Fourth, these data only allowed assessment of conditions associated with chronic pain, not chronic pain itself. Finally, sex differences for cumulative and interaction models could not be computed due to insufficient power. The current findings highlight the importance of a comprehensive understanding of trauma histories, including child maltreatment and DRTEs, and the impact these experiences may have on cannabis use and veteran health across the lifespan. Further work is needed to understand the role cannabis use may play in self-medication or as a means of coping among veterans. From a public health perspective, the prevention of child maltreatment remains a priority and may, over time, lead to better health in the overall population, including among veterans.
Objective Cannabis use among veterans in Canada is an understudied public health priority. The current study examined cannabis use prevalence and the relationships of child maltreatment histories and deployment-related traumatic events (DRTEs) with past 12-month cannabis use, including sex differences, among Canadian veterans. Method Data were drawn from the 2018 Canadian Armed Forces Members and Veterans Mental Health Follow-up Survey (response rate 68.7%; veterans only, n = 1,992). Five child maltreatment types and 9 types of DRTEs were assessed in relation to past 12-month cannabis use. Results The prevalence of lifetime and past 12-month cannabis use was 49.4% and 16.7%, respectively. Females were less likely than males to report lifetime cannabis use (41.9% vs. 50.4%; odds ratio [OR] 0.71; 95% confidence interval [CI], 0.59 to 0.86). No sex differences were noted for past 12-month cannabis use (14.1% vs. 17.0%; OR 0.80; 95% CI, 0.60 to 1.07). Physical abuse, sexual abuse, neglect, any child maltreatment, most individual DRTEs, and any DRTE were associated with increased odds of past 12-month cannabis use after adjusting for sociodemographic and military variables. Some models were attenuated and/or nonsignificant after further adjustment for mental disorders and chronic pain conditions. Sex did not statistically significantly moderate these relationships. Cumulative effects of having experienced both child maltreatment and DRTEs, compared to DRTEs alone, increased the odds of past 12-month cannabis use. Statistically significant interaction effects between child maltreatment history and DRTEs on cannabis use were not found. Conclusions Child maltreatment histories and DRTEs increased the likelihood of past 12-month cannabis use among Canadian veterans. A history of child maltreatment, compared to DRTEs, showed a more robust relationship. Understanding the links between child maltreatment, DRTEs, and cannabis use, along with mental disorders and chronic pain conditions, is important for developing interventions and improving health outcomes among veterans.
Acknowledgements We would like to acknowledge the CAFVMHS team for all contributions related to this work.
CC BY
no
2024-01-16 23:47:16
Can J Psychiatry. 2024 Feb 11; 69(2):116-125
oa_package/39/01/PMC10789231.tar.gz
PMC10789233
38226367
By fusing literature data mining, high-performance simulations, and high-accuracy experiments, the robotic AI-Chemist can achieve automated high-throughput production, classification, cleaning, association and fusion of data, and thus develop a multi-modal AI-ready database.
Artificial intelligence (AI) is being increasingly used in chemical research, not only to make accurate predictions but also to extract hidden physical laws from data. Massive high-quality data can give AI powerful capabilities, as demonstrated by the successes of AlphaFold and generative pre-trained transformer (GPT) models. Existing scientific data, however, are multi-source, multi-type, non-quality-assured and decentralized, and are difficult to bring into synergy, which makes them the biggest obstacle to the application of AI in the scientific field. How to obtain high-quality scientific big data with uniform standards and broad coverage, and how to establish multi-modal AI-ready databases for training powerful scientific models, are important issues in combining science and AI. Scientific data mainly come from the literature, theoretical calculations and experimental measurements. Therefore, the most effective way to establish a large scientific database is to automate the acquisition and examination of data by combining large-scale data mining, high-performance computational simulations, and high-precision robotic experiments. In recent years, pioneers have developed a variety of automated experimental platforms that are able not only to modify experimental conditions to optimize an experiment independently [ 1 ], but also to read information in the literature to design experimental schemes [ 2 ], and to perform theoretical calculations to assist in the analysis during the experiment [ 3 ]. At the same time, mobile robots can be used to perform more general instrumentation tasks and achieve automatic fabrication and characterization in complex and diverse experimental environments [ 4–6 ]. Recently, a robotic AI-Chemist platform has been developed that can automatically read the chemical literature, intelligently design experimental processes, and perform the entire process of simulation-synthesis-characterization-testing experiments [ 5 ]. Based on this platform, it is expected that high-throughput data acquisition, interactive calibration of theoretical and experimental data, and validation of literature data can be achieved, and that an AI-ready database covering massive scientific data and integrating chemical knowledge can be established. The process of establishing this database can be divided into five steps: high-throughput production, classification, cleaning, association and fusion (see Fig. 1 ). First, scientific data must be produced at high throughput from the literature, simulations and experiments. For the literature, multi-modal data such as text, images and charts are extracted by natural language processing and image recognition technologies. Through interpreting chemical entities and their correlations, the text annotation of spectrograms, tables and chemical symbols can be accomplished. For simulations, it is necessary to automate the construction of material structures and molecular models, and to automatically select appropriate calculation methods to generate large quantities of various physicochemical data. For experiments, unified data formats must be developed to enable automatic collection and rapid analysis of data, and further to correlate data profiles obtained from different viewpoints of the same sample through different instruments. In this way, the robotic AI-Chemist can comprehensively acquire multi-level data on structures, properties, interactions and evolutions, and assign labels and logical semantics to the data.
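As a toy illustration of the literature-mining step described above, the snippet below pulls candidate chemical formulas and unit-bearing quantities out of free text with regular expressions and emits a labeled record. A production pipeline of the kind the authors describe would rely on trained named-entity-recognition and image-recognition models; the patterns, field names and sample sentence here are purely illustrative assumptions.

import json
import re

# Crude stand-ins for trained chemical NER models (illustrative only).
FORMULA = re.compile(r"\b(?:[A-Z][a-z]?\d*){2,}\b")        # e.g., TiO2, H2O
QUANTITY = re.compile(r"(\d+(?:\.\d+)?)\s*(eV|nm|K|GPa)")  # value + unit

def extract(text: str) -> dict:
    """Turn one text fragment into a labeled, fusion-ready record."""
    return {
        "source": "literature",
        "entities": FORMULA.findall(text),
        "quantities": [{"value": float(v), "unit": u}
                       for v, u in QUANTITY.findall(text)],
        "raw": text,
    }

sample = "The TiO2 film showed a band gap of 3.2 eV and a thickness of 120 nm."
print(json.dumps(extract(sample), indent=2))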
Scientific data obtained from different substances, properties, and experimental and computational conditions are not directly comparable with each other; thus, classification is necessary to establish category-based data descriptions. Traditional classification relies on the labelling of data, but scientific data from different fields often lack consistent and comparable labels. Our recent research has shown that spectra are a universal, comparable, theoretically computable and experimentally measurable descriptor, and implicitly contain information about the structure and properties of entities [ 7 ]. Spectra-based clustering can well capture the similarity of entities, and classify substances into different categories with significantly different properties [ 8 ]. By classifying data from multiple perspectives, such as structural features, spectral features, experimental formulas, fabrication processes and data accuracy, we can precisely define the similarity of data and make comparisons within the same data category. An AI-ready database also needs to ensure data integrity and accuracy, for which machine-learning models with strong extrapolation capabilities must be established for data cleaning. The first step is to develop intelligent algorithms that can automatically extract reasonable combinations of features as descriptors. For example, spectral descriptors with physical meaning can reflect the similarity and evolution of the structures and properties of substances [ 9 ]. Interpretable AI models can construct quantitative mathematical formulas with few parameters by symbolic regression, giving them high robustness, transferability and predictive capability even with imperfect and small data sets [ 10 ]. Based on this, we can establish a scoring system that comprehensively evaluates the data in terms of source, credibility, integrity, reproducibility, accuracy, generation conditions, etc. By quantifying the quality of data, eliminating abnormal data points, filling in missing data points, and verifying controversial data points through theoretical calculations and robotic experiments, we are able to significantly improve the quality of the data. To achieve the alignment of multi-modal data, data associations need to be established by unifying and standardizing different representations of the same data. Although scientific data are diverse, they share a common material basis: different physical properties correspond to the same molecular conformation, and different spectra correspond to common vibrational modes. It is possible to take the material entity as the core of association, correlate its attribute data with the entity, and construct an association network between different data by analyzing the relationships between structures, spectra, components and properties. By extracting the common patterns in the association network corresponding to the same material basis, we can form alignment criteria for multi-modal data. Furthermore, a knowledge graph can be established by extracting the temporal and logical relationships of entities and events. Based on the alignment criteria, data fusion can be performed to create a unified, efficient, scalable, structurally unambiguous and multi-modally aligned data format, which integrates the characteristics of material structure, properties and reaction features, and is suitable as a unified input for AI models.
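The spectra-based classification idea can be made concrete with a small clustering sketch: treat each normalized spectrum as a feature vector and group substances by spectral similarity. The synthetic data, the choice of k-means and the fixed cluster count below are illustrative assumptions, not the algorithm actually used in the cited work.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 substances x 500 spectral channels.
spectra = np.abs(rng.normal(size=(200, 500)))

# L2-normalize so clustering compares spectral shape, not raw intensity.
X = normalize(spectra)

# Group substances into spectral categories; on real data the cluster
# count would be tuned (e.g., via a silhouette score).
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)

# Category-based description: comparisons are now made within clusters
# of spectrally similar, and hence more comparable, substances.
for c in range(8):
    print(f"cluster {c}: {(labels == c).sum()} substances")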
Combining this database with theoretical simulations and machine-learning models, we can establish digital twin systems for material entities to achieve the synergistic evolution of multi-dimensional data in space-time, and accurately predict and optimize the properties and evolutionary processes of matter. The multi-modal AI-ready database can fuse theoretical and experimental data on matter in different dimensions and provide precise data, enriched with material properties and correlations, for data-driven research in chemistry, materials science, biology and other fields. It can also develop into a universal management system for scientific data, promoting multidisciplinary data exchange and facilitating interdisciplinary collaborations.
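A minimal sketch of the entity-centred association network discussed above can be built with networkx, using invented example data: each material entity is a hub node, its attribute data (spectra, properties, provenance) are attached as neighbours, and the multi-modal records belonging to one substance can then be aligned by simple graph traversal.

import networkx as nx

G = nx.Graph()

# The material entity is the core of association (example data invented).
G.add_node("TiO2", kind="entity")
G.add_node("TiO2/raman", kind="spectrum", source="experiment")
G.add_node("TiO2/band_gap", kind="property", value=3.2, unit="eV",
           source="simulation")

# Edges record which attribute data belong to which entity.
G.add_edge("TiO2", "TiO2/raman", relation="has_spectrum")
G.add_edge("TiO2", "TiO2/band_gap", relation="has_property")

# Alignment: gather every multi-modal record attached to one entity.
for node in G.neighbors("TiO2"):
    print(node, dict(G.nodes[node]))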
FUNDING This work was supported by the Innovation Program for Quantum Science and Technology (2021ZD0303303), the CAS Project for Young Scientists in Basic Research (YSBR-005), and the National Natural Science Foundation of China (22025304, 22033007, 22303088, 22203082 and 12227901). Conflict of interest statement. None declared.
CC BY
no
2024-01-16 23:47:16
Natl Sci Rev. 2023 Dec 27; 10(12):nwad332
oa_package/9e/67/PMC10789233.tar.gz